parquet-converter committed on
Commit 62cb513 · 1 Parent(s): 880a24b

Update parquet files (step 76 of 121)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md +0 -173
  2. spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md +0 -7
  3. spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md +0 -116
  4. spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md +0 -81
  5. spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py +0 -17
  6. spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py +0 -17
  7. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py +0 -100
  8. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py +0 -106
  9. spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py +0 -171
  10. spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py +0 -84
  11. spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py +0 -19
  12. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts +0 -6
  13. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js +0 -13
  14. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js +0 -2
  15. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts +0 -5
  16. spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py +0 -3
  17. spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py +0 -566
  18. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py +0 -112
  19. spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py +0 -35
  20. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py +0 -2
  21. spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py +0 -6
  22. spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py +0 -10
  23. spaces/Artrajz/vits-simple-api/vits/commons.py +0 -96
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py +0 -42
  25. spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh +0 -2
  26. spaces/Awesimo/jojogan/e4e/editings/latent_editor.py +0 -45
  27. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py +0 -192
  28. spaces/Benson/text-generation/Examples/3d Paint Download.md +0 -151
  29. spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md +0 -78
  30. spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md +0 -61
  31. spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts +0 -34
  32. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md +0 -207
  33. spaces/CVPR/LIVE/thrust/thrust/set_operations.h +0 -0
  34. spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py +0 -158
  35. spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py +0 -14
  36. spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py +0 -22
  37. spaces/CoWork/dreambooth-training-public/app.py +0 -687
  38. spaces/CofAI/picscore/picscore.py +0 -7
  39. spaces/CofAI/picscore1/README.md +0 -15
  40. spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py +0 -152
  41. spaces/Cyril666/ContourNet-ABI/setup.py +0 -69
  42. spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py +0 -24
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py +0 -611
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css +0 -1
  45. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html +0 -84
  46. spaces/DRAGSclub/README/README.md +0 -10
  47. spaces/Darkk88/medium-GPT4/app.py +0 -3
  48. spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py +0 -8
  49. spaces/Deepak107/Bottle_images/README.md +0 -13
  50. spaces/Duskfallcrew/textual-inversion-training/app.py +0 -559
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md DELETED
@@ -1,173 +0,0 @@
1
-
2
- <h1>Danea easyfatt 2013 crack: What is it and how to use it?</h1>
3
- <p>If you are looking for a software that can help you manage your invoices, inventory, orders, quotes, and accounting, you might have heard of Danea easyfatt. This is a popular program that is designed for small and medium businesses in Italy. However, if you want to use this software without paying for a license, you might also be interested in Danea easyfatt 2013 crack. This is a modified version of the program that allows you to bypass the activation process and use it for free. But what exactly is a crack and how can you use it safely? In this article, we will explain everything you need to know about Danea easyfatt 2013 crack, including how to download, install, and use it.</p>
4
- <h2>Danea easyfatt 2013 crack</h2><br /><p><b><b>Download File</b> &hArr; <a href="https://byltly.com/2uKuVE">https://byltly.com/2uKuVE</a></b></p><br /><br />
5
- <h2>Introduction</h2>
6
- <h3>What is Danea easyfatt?</h3>
7
- <p>Danea easyfatt is a software developed by Danea Soft (Italia), a company that specializes in creating solutions for small and medium enterprises. Danea easyfatt is one of their flagship products, which offers a comprehensive and user-friendly interface for managing various aspects of your business. With Danea easyfatt, you can:</p>
8
- <ul>
9
- <li>Create and print invoices, receipts, delivery notes, quotes, orders, and more.</li>
10
- <li>Manage your inventory, stock movements, suppliers, and purchases.</li>
11
- <li>Keep track of your customers, contacts, payments, and reminders.</li>
12
- <li>Generate reports, statistics, charts, and graphs.</li>
13
- <li>Integrate with other software such as Excel, Outlook, Word, e-commerce platforms, etc.</li>
14
- <li>Synchronize your data with cloud services such as Dropbox, Google Drive, OneDrive, etc.</li>
15
- </ul>
16
- <p>Danea easyfatt is compatible with Windows operating systems and supports multiple languages. It also comes in different editions depending on your needs: Basic, Professional, Enterprise, etc. However, each edition has a different price tag and requires a license key to activate.</p>
17
- <h3>What is a crack?</h3>
18
- <p>A crack is a term used to describe a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms. For example, some software require an activation code or a serial number to verify that you have purchased a legitimate copy. A crack can either generate a fake code or replace the original file that checks for the code with a modified one that allows you to use the software without any restrictions.</p>
19
- <p>A crack can also be used to unlock or enable features that are otherwise unavailable or limited in the original software. For example, some software have trial versions that expire after a certain period of time or have reduced functionality. A crack can either extend the trial period indefinitely or enable all the features as if you have bought the full version.</p>
20
- <p>A crack can be either an executable file (.exe) that you run before or after installing the original software or a patch file (.dll) that you copy and paste into the installation folder of the original software. Sometimes, a crack can also come with instructions or a keygen (a program that generates keys) that you need to follow carefully.</p>
21
- <h3>Why would you need a crack for Danea easyfatt 2013?</h3>
22
- <p>There are many reasons why someone would want to use a crack for Danea easyfatt 2013. Some of them are:</p>
23
- <p>Danea easyfatt 2013 full version download<br />
24
- How to crack Danea easyfatt 2013 software<br />
25
- Danea easyfatt 2013 serial key generator<br />
26
- Danea easyfatt 2013 activation code free<br />
27
- Danea easyfatt 2013 patch download<br />
28
- Danea easyfatt 2013 license key crack<br />
29
- Danea easyfatt 2013 torrent download<br />
30
- Danea easyfatt 2013 keygen online<br />
31
- Danea easyfatt 2013 cracked version for windows<br />
32
- Danea easyfatt 2013 registration code crack<br />
33
- Danea easyfatt 2013 product key crack<br />
34
- Danea easyfatt 2013 crack mac os x<br />
35
- Danea easyfatt 2013 crack no survey<br />
36
- Danea easyfatt 2013 crack without password<br />
37
- Danea easyfatt 2013 crack direct download link<br />
38
- Danea easyfatt 2013 crack rar file<br />
39
- Danea easyfatt 2013 crack zip file<br />
40
- Danea easyfatt 2013 crack iso file<br />
41
- Danea easyfatt 2013 crack exe file<br />
42
- Danea easyfatt 2013 crack setup file<br />
43
- Danea easyfatt 2013 crack installer file<br />
44
- Danea easyfatt 2013 crack portable file<br />
45
- Danea easyfatt 2013 crack working file<br />
46
- Danea easyfatt 2013 crack latest version<br />
47
- Danea easyfatt 2013 crack updated version<br />
48
- Danea easyfatt 2013 crack with tutorial<br />
49
- Danea easyfatt 2013 crack with instructions<br />
50
- Danea easyfatt 2013 crack with guide<br />
51
- Danea easyfatt 2013 crack with manual<br />
52
- Danea easyfatt 2013 crack with video<br />
53
- Danea easyfatt 2013 crack with proof<br />
54
- Danea easyfatt 2013 crack with reviews<br />
55
- Danea easyfatt 2013 crack with testimonials<br />
56
- Danea easyfatt 2013 crack with feedbacks<br />
57
- Danea easyfatt 2013 crack with ratings<br />
58
- Danea easyfatt 2013 crack with comments<br />
59
- Danea easyfatt 2013 crack with support<br />
60
- Danea easyfatt 2013 crack with helpdesk<br />
61
- Danea easyfatt 2013 crack with customer service<br />
62
- Danea easyfatt 2013 crack with warranty<br />
63
- Danea easyfatt 2013 crack with guarantee<br />
64
- Danea easyfatt 2013 crack with refund policy<br />
65
- Danea easyfatt 2013 crack with discount offer<br />
66
- Danea easyfatt 2013 crack with coupon code<br />
67
- Danea easyfatt 2013 crack with promo code<br />
68
- Danea easyfatt 2013 crack with free trial<br />
69
- Danea easyfatt 2013 crack with free download link</p>
70
- <ul>
71
- <li>You want to try out the software before buying it.</li>
72
- <li>You cannot afford to pay for the license fee.</li>
73
- <li>You want to use the software for personal or educational purposes only.</li>
74
- <li>You want to access features that are not available in your edition.</li>
75
- <li>You want to use the software on multiple devices or share it with others.</li>
76
- </ul>
77
- <p>However, using a crack also comes with some risks and disadvantages. Some of them are:</p>
78
- <ul>
79
- <li>You may violate the terms and conditions of the software developer and face legal consequences.</li>
80
- <li>You may expose your device or data to malware or viruses that are hidden in the crack file.</li>
81
- <li>You may encounter errors or problems with the software functionality or compatibility.</li>
82
- <li>You may not receive updates or support from the software developer.</li>
83
- <li>You may damage your reputation or credibility as a professional or ethical user.</li>
84
- </ul>
85
- <p>Therefore, before using a crack for Danea easyfatt 2013, you should weigh the pros and cons carefully and decide whether it is worth it or not.</p>
86
- <h2>How to download and install Danea easyfatt 2013 crack</h2>
87
- <h3>Where to find the crack file</h3>
88
- <p>If you have decided to use a crack for Danea easyfatt 2013, you need to find a reliable source where you can download it. There are many websites that offer cracks for various software but not all of them are trustworthy or safe. Some of them may contain fake links or malicious files that can harm your device or data. Therefore, you should be careful when choosing where to download from.</p>
89
- <p>One way to find a reputable website is to look for reviews or feedback from other users who have downloaded from there before. You can also check if the website has any security certificates or badges that indicate its legitimacy. Another way is to use an antivirus program or an online scanner tool that can scan the website or the file for any potential threats before downloading.</p>
90
- <p>For example, one website that claims to offer Danea easyfatt 2013 crack is <strong></strong>. According to this website,</p>
91
- <blockquote><p>"Salve a tutti, come da richiesta abbiamo messo a disposizione <strong>Danea Easyfatt Enterprise</strong> per i sistemi Windows. Consiglio di utilizzare il software jdownloader.org per poter scaricare le varie parti comodamente e WinRaR per estrarre l’archivio."</p></blockquote>
92
- <p>This means "Hello everyone, as requested we have made available <strong>Danea Easyfatt Enterprise</strong> for Windows systems. I recommend using jdownloader.org software to download various parts comfortably and WinRaR to extract the archive."</p>
93
- <p>The website also provides three mirror links where you can download the archive file named <code>Danea_EasyFatt_Enterprise_2020_v46c_Build_6011.rar</code>. The password to open the archive is <code>apritisesamo</code>.</p>
94
- <h3>How to disable antivirus and extract the file</h3>
95
- <p>Before installing the program, you need to disable your antivirus and extract the file from the archive. This is because your antivirus may detect the crack as a threat and block or delete it. To disable your antivirus, you can follow these steps:</p>
96
- <ol>
97
- <li>Open your antivirus program and go to its settings or options menu.</li>
98
- <li>Look for an option that allows you to turn off or pause the protection temporarily. It may be called something like "Disable", "Deactivate", "Suspend", etc.</li>
99
- <li>Select the option and choose how long you want to disable it. It may be in minutes, hours, or until restart. You can also choose which components of protection you want to disable, such as real-time scanning, firewall, etc.</li>
100
- <li>Confirm your choice and close your antivirus program. You should see an icon on your taskbar indicating that your antivirus is off.</li>
101
- </ol> To extract the file from the archive, you need to use a software that can handle RAR files. One of the most popular and free options is 7-Zip, which you can download from <strong></strong>. After installing 7-Zip, you can follow these steps:</p>
102
- <ol>
103
- <li>Right-click on the archive file and select "7-Zip" from the menu.</li>
104
- <li>Select one of the "Extract" options, depending on where you want to extract the files. You can choose to extract them to a new folder with the same name as the archive, to the current folder, or to a custom location.</li>
105
- <li>Enter the password <code>apritisesamo</code> when prompted and click "OK".</li>
106
- <li>Wait for the extraction process to finish. You should see a new folder or files in the destination you chose.</li>
107
- </ol>
108
- <h3>How to install the program and replace the exe file</h3>
109
- <p>After extracting the file from the archive, you need to install the program and replace the original exe file with the cracked one. To do that, you can follow these steps:</p>
110
- <ol>
111
- <li>Open the folder where you extracted the files and double-click on the <code>Setup.exe</code> file.</li>
112
- <li>Follow the instructions on the screen to install Danea easyfatt 2013 on your device. You can choose your preferred language, destination folder, and shortcuts.</li>
113
- <li>When the installation is complete, close the program completely. You can also exit it from the system tray if it is running in the background.</li>
114
- <li>Open the folder named "Crack" and copy the <code>DaneaEasyFatt.exe</code> file.</li>
115
- <li>Paste it into the installation folder of Danea easyfatt 2013, which is usually located at <code>C:\Program Files (x86)\Danea Easyfatt 2013</code>.</li>
116
- <li>If prompted to replace or overwrite the existing file, click "Yes" or "Replace".</li>
117
- </ol>
118
- <h2>How to use Danea easyfatt 2013 crack</h2>
119
- <h3>How to activate the program with the crack</h3>
120
- <p>Now that you have installed the program and replaced the exe file, you can activate the program with the crack. To do that, you can follow these steps:</p>
121
- <ol>
122
- <li>Launch Danea easyfatt 2013 from your desktop or start menu shortcut.</li>
123
- <li>You should see a window asking you to enter your license key or activate online. Click on "Activate online".</li>
124
- <li>You should see another window asking you to enter your email address and password. Enter any email address and password you want and click "OK".</li>
125
- <li>You should see a message saying that your activation was successful and that you have a valid license for Danea easyfatt Enterprise 2020.</li>
126
- <li>Click "OK" and enjoy using Danea easyfatt 2013 crack.</li>
127
- </ol>
128
- <h3>How to access the features and functions of Danea easyfatt</h3>
129
- <p>Danea easyfatt 2013 crack allows you to access all the features and functions of Danea easyfatt Enterprise 2020, which is the most advanced edition of the software. You can explore the various menus, tabs, and buttons on the main interface to find what you need. Some of the main features and functions are:</p>
130
- <ul>
131
- <li>Create and manage documents such as invoices, quotes, orders, receipts, etc. You can customize their layout, format, content, and print options. You can also export them to PDF, Excel, Word, or email them directly.</li>
132
- <li>Manage your inventory, stock movements, suppliers, and purchases. You can track your products, categories, prices, quantities, barcodes, etc. You can also import or export data from Excel or other sources.</li>
133
- <li>Manage your customers, contacts, payments, and reminders. You can store your customer information, history, preferences, etc. You can also send emails or SMS messages to them or create mailing lists.</li>
134
- <li>Generate reports, statistics, charts, and graphs. You can analyze your data, performance, trends, etc. You can also customize your reports, filters, criteria, etc.</li>
135
- <li>Integrate with other software such as Excel, Outlook, Word, e-commerce platforms, etc. You can import or export data, sync your contacts, calendar, tasks, etc.</li>
136
- <li>Synchronize your data with cloud services such as Dropbox, Google Drive, OneDrive, etc. You can backup or restore your data, access it from anywhere, or share it with others.</li>
137
- </ul>
138
- <h3>How to avoid errors or problems with the crack</h3>
139
- <p>Danea easyfatt 2013 crack may not work perfectly for everyone. You may encounter some errors or problems with the software functionality or compatibility. To avoid or fix them, you can try some of these tips:</p>
140
- <ul>
141
- <li>Make sure you have disabled your antivirus before installing or running the crack. Your antivirus may interfere with the crack operation or delete it.</li>
142
- <li>Make sure you have replaced the original exe file with the cracked one in the installation folder. If you have not done so, the program may not activate properly or ask for a license key.</li>
143
- <li>Make sure you have entered any email address and password when activating online. If you have left them blank or entered invalid ones, the program may not activate properly or show an error message.</li>
144
- <li>Make sure you have installed Danea easyfatt 2013 on a compatible device and operating system. The software requires Windows XP SP3 or later versions (32-bit or 64-bit) and at least 1 GB of RAM and 500 MB of free disk space.</li>
145
- <li>If you encounter any other errors or problems with Danea easyfatt 2013 crack, you can try to uninstall and reinstall it following the same steps above. You can also look for solutions online or contact Danea Soft (Italia) for support (but be careful not to reveal that you are using a crack).</li>
146
- </ul>
147
- <h2>Conclusion</h2>
148
- <h3>Summary of the main points</h3>
149
- <p>In this article, we have explained what Danea easyfatt 2013 crack is and how to use it. We have covered:</p>
150
- <ul>
151
- <li>What is Danea easyfatt and what are its features and functions?</li>
152
- <li>What is a crack and why would you need one for Danea easyfatt 2013?</li>
153
- <li>How to download and install Danea easyfatt 2013 crack?</li>
154
- <li>How to activate and use Danea easyfatt 2013 crack?</li>
155
- <li>How to avoid errors or problems with Danea easyfatt 2013 crack?</li>
156
- </ul>
157
- <h3>Benefits and risks of using a crack</h3>
158
- <p>We have also discussed some of the benefits and risks of using a crack for Danea easyfatt 2013. Some of them are:</p>
159
- <ul>
160
- <li>You can use Danea easyfatt without paying for a license fee.</li>
161
- <li>You can access all the features and functions of Danea easyfatt Enterprise 2020.</li>
162
- <li>You can use Danea easyfatt on multiple devices or share it with others.</li>
163
- <li>You may violate the terms and conditions of Danea Soft (Italia) and face legal consequences.</li>
164
- <li>You may expose your device or data to malware or viruses that are hidden in the crack file.</li>
165
- <li>You may encounter errors or problems with the software functionality or compatibility.</li>
166
- <li>You may not receive updates or support from Danea Soft (Italia).</li>
167
- <li>You may damage your reputation or credibility as a professional or ethical user.</li>
168
- </ul>
169
- <h3>Call to action and disclaimer</h3>
170
- <p>We hope this article has been helpful for you in understanding and using Danea easyfatt 2013 crack. However, we do not endorse or recommend using cracks for any software as they are illegal and unethical. We are not responsible for any damages or losses that may result from using cracks. We advise you to use cracks at your own risk and discretion. If you like Danea easyfatt and find it useful for your business needs, we encourage you to buy a legitimate license from Danea Soft (Italia) and support their work. Thank you for reading this article!</p>
171
- **FAQs** Q: What is Danea easyfatt? A: Danea easyfatt is a software that helps you manage your invoices, inventory, orders, quotes, and accounting. Q: What is a crack? A: A crack is a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms. Q: How do I download Danea easyfatt 2013 crack? A: You need</p> 0a6ba089eb<br />
172
- <br />
173
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md DELETED
@@ -1,7 +0,0 @@
1
- <br />
2
- <p>microsoft office is a series of office applications offered by microsoft for home and business use. office has advanced features like edit pdfs, advanced multimedia functions, good touch navigation, helpful new assistants and also some disadvantages since the user has almost no choice but to take cloud use, and tablet work. both 32-bit and the 64-bit client application are supported by office 2013. you can even use the trial version for office 2013 for 30 days to get a chance to test it without having to buy it, youll get different office 2013 product key to keeping it operating for one month. you will be able to access word 2013, powerpoint 2013, excel 2013, outlook 2013 with this package.</p>
3
- <p>yes. aws support has been successfully supporting our customers who run microsoft windows-based ec2 instances in the aws cloud since 2008 when we first launched windows server on ec2. our support engineers have deep experience with microsoft technologies on aws including amazon ec2, amazon ecs, amazon rds, amazon workspaces and others. now aws has further enhanced our support capabilities with a new additional direct engagement between aws support and microsoft support, to help ensure high quality support and issue resolution for our customers. to find more information on end of support (eos) for microsoft products go here.</p>
4
- <h2>download microsoft office professional plus 2013 rtm activation</h2><br /><p><b><b>Download</b> &#128279; <a href="https://imgfil.com/2uxXBc">https://imgfil.com/2uxXBc</a></b></p><br /><br />
5
- <p>per microsofts visual studio licensing guide, visual studio subscriptions purchased through certain channels provide perpetual use rights even after the subscription has expired. the use of perpetual licenses acquired before 10/1/2019 for products released prior to 10/1/2019 is permitted on aws dedicated infrastructure regardless of the renewal or expiration of the subscription under which the perpetual licenses were acquired.aws also offers fully-compliant, amazon-provided licenses formicrosoft visual studio enterprise 2022 and microsoft visual studio professional 2022 amazon machine images (amis) on amazon elastic compute cloud (amazon ec2). these amis are available on the amazon ec2 console and on aws marketplace, to launch instances on-demand without any long-term licensing commitments.to learn more, visit aws license manager user guide.<br> </p> 899543212b<br />
6
- <br />
7
- <br />
spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md DELETED
@@ -1,116 +0,0 @@
1
- <br />
2
- <h1>How to Download Clash Royale from Yapup.site</h1>
3
- <p>If you are looking for a fun and addictive game to play on your Android device, you might want to try Clash Royale. It is a real-time multiplayer battle game that features your favorite characters from Clash of Clans and more. In this article, we will show you how to download Clash Royale from Yapup.site, a website that offers free APK downloads for Android games and apps. We will also give you some tips and tricks to help you win at Clash Royale.</p>
4
- <h2>What is Clash Royale?</h2>
5
- <h3>A real-time multiplayer battle game</h3>
6
- <p>Clash Royale is a game developed and published by Supercell, the same company behind the popular Clash of Clans. It was released in 2016 and has since become one of the most played mobile games in the world. In Clash Royale, you have to collect and upgrade cards that feature troops, spells, and defenses from the Clash universe. You then use these cards to create your own battle deck and fight against other players online in fast-paced matches. The goal is to destroy your opponent's three towers, including the king tower, while protecting your own. You can also join or form clans with other players and participate in clan wars, tournaments, and seasonal events.</p>
7
- <h2>yapup.site download clash royale</h2><br /><p><b><b>DOWNLOAD</b> &#8250;&#8250;&#8250;&#8250;&#8250; <a href="https://jinyurl.com/2uNJOn">https://jinyurl.com/2uNJOn</a></b></p><br /><br />
8
- <h3>Features of Clash Royale</h3>
9
- <p>Some of the features that make Clash Royale an exciting and challenging game are:</p>
10
- <ul>
11
- <li>Over 100 cards to collect and upgrade, each with unique abilities and interactions.</li>
12
- <li>Nine arenas to unlock and progress through, each with different themes and difficulties.</li>
13
- <li>Various game modes to choose from, such as 1v1, 2v2, special challenges, clan wars, global tournaments, and more.</li>
14
- <li>New seasonal items to unlock with the season pass, such as tower skins, emotes, and magic items.</li>
15
- <li>A vibrant community of millions of players around the world.</li>
16
- </ul>
17
- <h2>What is Yapup.site?</h2>
18
- <h3>A website that offers free APK downloads</h3>
19
- <p>Yapup.site is a website that provides free APK downloads for Android games and apps. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. By downloading APK files from Yapup.site, you can access games and apps that are not available on the Google Play Store or that are restricted in your region. You can also get the latest updates and versions of your favorite games and apps before they are officially released.</p>
20
- <h3>Benefits of using Yapup.site</h3>
21
- <p>Some of the benefits of using Yapup.site to download APK files are:</p>
22
- <ul>
23
- <li>You can download games and apps for free without any registration or subscription.</li>
24
- <li>You can download games and apps that are not available on the Google Play Store or that are restricted in your region.</li>
25
- <li>You can download games and apps that are modded or hacked with unlimited resources or features.</li>
26
- <li>You can download games and apps that are updated regularly with new content and bug fixes.</li>
27
- <li>You can download games and apps that are safe and virus-free.</li>
28
- </ul>
29
- <h2>How to Download Clash Royale from Yapup.site</h2>
30
- <h3>Step 1: Visit the website</h3>
31
- <p>The first step to download Clash Royale from Yapup.site is to visit the website using your web browser. You can use any browser you prefer, such as Chrome, Firefox, Safari, or Opera. The website has a simple and user-friendly interface that allows you to easily navigate and find the games and apps you want.</p>
32
- <h3>Step 2: Search for Clash Royale</h3>
33
- <p>The next step is to search for Clash Royale on the website. You can use the search bar at the top of the homepage to type in the name of the game. Alternatively, you can browse through the categories and genres of games and apps on the website. You can also check out the featured, popular, and new games and apps on the homepage. Once you find Clash Royale, click on it to open its page.</p>
34
- <h3>Step 3: Click on the download button</h3>
35
- <p>The third step is to click on the download button on the Clash Royale page. You will see a green button that says "Download APK" at the bottom of the page. You will also see some information about the game, such as its size, version, developer, rating, and description. You can read this information to learn more about the game and its features. You can also see some screenshots and videos of the game to get a glimpse of its gameplay. After you click on the download button, you will be redirected to another page where you have to wait for a few seconds before the download starts.</p>
36
- <p>yapup.site clash royale apk free download<br />
37
- yapup.site clash royale mod download for android<br />
38
- yapup.site download clash royale latest version<br />
39
- yapup.site download clash royale on pc<br />
40
- yapup.site download clash royale hack<br />
41
- yapup.site download clash royale update<br />
42
- yapup.site download clash royale private server<br />
43
- yapup.site download clash royale for ios<br />
44
- yapup.site download clash royale online<br />
45
- yapup.site download clash royale game<br />
46
- yapup.site download clash royale cheats<br />
47
- yapup.site download clash royale gems generator<br />
48
- yapup.site download clash royale cards<br />
49
- yapup.site download clash royale decks<br />
50
- yapup.site download clash royale strategy guide<br />
51
- yapup.site download clash royale tips and tricks<br />
52
- yapup.site download clash royale wallpaper<br />
53
- yapup.site download clash royale videos<br />
54
- yapup.site download clash royale replays<br />
55
- yapup.site download clash royale tournaments<br />
56
- yapup.site download clash royale clan wars<br />
57
- yapup.site download clash royale season pass<br />
58
- yapup.site download clash royale emotes<br />
59
- yapup.site download clash royale skins<br />
60
- yapup.site download clash royale magic items<br />
61
- yapup.site download clash royale challenges<br />
62
- yapup.site download clash royale events<br />
63
- yapup.site download clash royale news<br />
64
- yapup.site download clash royale reddit<br />
65
- yapup.site download clash royale wiki<br />
66
- yapup.site download clash royale fan art<br />
67
- yapup.site download clash royale memes<br />
68
- yapup.site download clash royale merchandise<br />
69
- yapup.site download clash royale forum<br />
70
- yapup.site download clash royale support<br />
71
- yapup.site download clash royale reviews<br />
72
- yapup.site download clash royale ratings<br />
73
- yapup.site download clash royale statistics<br />
74
- yapup.site download clash royale history<br />
75
- yapup.site download clash royale developer blog</p>
76
- <h3>Step 4: Install the APK file</h3>
77
- <p>The final step is to install the APK file on your Android device. After the download is complete, you will see a notification on your device that says "Download complete". You can tap on this notification to open the APK file. Alternatively, you can go to your device's file manager and locate the APK file in your downloads folder. Before you install the APK file, you have to enable the installation of unknown sources on your device. To do this, go to your device's settings and then security. Find the option that says "Unknown sources" and toggle it on. This will allow you to install apps from sources other than the Google Play Store. After you enable this option, you can tap on the APK file and follow the instructions on your screen to install Clash Royale on your device.</p>
78
- <h2>Tips and Tricks for Playing Clash Royale</h2>
79
- <h3>Join a clan and share cards</h3>
80
- <p>One of the best ways to improve your skills and progress in Clash Royale is to join a clan and share cards with other players. A clan is a group of players who can chat, donate, request, and trade cards with each other. By joining a clan, you can get more cards to upgrade your deck and also learn from other players' strategies and tips. You can also participate in clan wars and earn rewards for your clan.</p>
81
- <h3>Build a balanced deck and use your elixir wisely</h3>
82
- <p>Another important tip for playing Clash Royale is to build a balanced deck and use your elixir wisely. A balanced deck is one that has a good mix of cards that can counter different types of threats and also deal damage to your opponent's towers. You should have cards that can attack from a distance, such as archers or fireball; cards that can tank damage, such as giant or knight; cards that can swarm or distract, such as goblins or skeletons; and cards that can support or enhance, such as witch or rage. You should also have cards that cost different amounts of elixir, so that you can always have something to play depending on your elixir level. Elixir is the resource that you use to play cards in Clash Royale. It regenerates over time during a match, but it is limited by a maximum of 10 units. Therefore, you have to be careful not to waste elixir by playing cards that are not needed or effective. You should also try to gain an elixir advantage over your opponent by playing cards that cost less than their counters or by making positive trades. For example, if you use a fireball that costs 4 elixir to destroy a minion horde that costs 5 elixir, you gain an elixir advantage of 1 unit.</p>
83
- <h3>Defend your towers and attack the enemy's weak spots</h3>
84
- <p>The last tip for playing Clash Royale is to defend your towers and attack the enemy's weak spots. Your towers are your main defense against your opponent's attacks. They have high health and damage output, but they are vulnerable to certain types of cards or combinations. Therefore, you have to protect them by placing your troops strategically and using spells or buildings when necessary. On the other hand, you also have to find opportunities to attack your opponent's towers and deal damage to them. You should look for their weak spots, such as their low-health towers or their lack of counters for your cards. You should also try to exploit their mistakes, such as their overcommitment or their poor placement of cards. You should also try to create combos or synergies with your cards, such as using a hog rider with a freeze spell or using a balloon with a rage spell.</p>
85
- <h2>Conclusion</h2>
86
- <p>Clash Royale is a fun and addictive game that you can download and play on your Android device. You can download it from Yapup.site, a website that offers free APK downloads for Android games and apps. You can also follow the tips and tricks we shared in this article to improve your skills and win more matches. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments section below. Happy clashing!</p>
87
- <h2>FAQs</h2>
88
- <p>Here are some frequently asked questions about Clash Royale and Yapup.site:</p>
89
- <table>
90
- <tr>
91
- <th>Question</th>
92
- <th>Answer</th>
93
- </tr>
94
- <tr>
95
- <td>Is Clash Royale free to play?</td>
96
- <td>Yes, Clash Royale is free to download and play. However, it also offers in-app purchases that can enhance your gaming experience.</td>
97
- </tr>
98
- <tr>
99
- <td>Is Yapup.site safe to use?</td>
100
- <td>Yes, Yapup.site is safe to use. It does not contain any malware or viruses that can harm your device. However, you should always be careful when downloading APK files from unknown sources and scan them with an antivirus before installing them.</td>
101
- </tr>
102
- <tr>
103
- <td>How can I update Clash Royale from Yapup.site?</td>
104
- <td>You can update Clash Royale from Yapup.site by visiting the website again and downloading the latest version of the game. You can also enable the auto-update feature on your device's settings to get the updates automatically.</td>
105
- </tr>
106
- <tr>
107
- <td>How can I contact the support team of Clash Royale?</td>
108
- <td>You can contact the support team of Clash Royale by tapping on the settings icon on the top right corner of the game screen and then tapping on the help and support button. You can also visit the official website or social media pages of Clash Royale for more information and assistance.</td>
109
- </tr>
110
- <tr>
111
- <td>How can I contact the support team of Yapup.site?</td>
112
- <td>You can contact the support team of Yapup.site by visiting the website and clicking on the contact us button at the bottom of the page. You can also email them at [email protected] or follow them on Facebook or Twitter for more updates and news.</td>
113
- </tr>
114
- </table></p> 401be4b1e0<br />
115
- <br />
116
- <br />
spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md DELETED
@@ -1,81 +0,0 @@
1
-
2
- <h1>Dislyte Global Download: How to Play the Stylish Urban Mythological RPG on PC and Mobile</h1>
3
- <h2>Introduction</h2>
4
- <p>If you are a fan of pop-fantasy RPGs with striking audio-visual experience, you might want to check out Dislyte, a new game that features heroes and monsters from mythologies. Dislyte is set in a futuristic urban playground where mysterious powers and mythology collide. You can build your own squad of Espers, who are ordinary people with divine powers from gods of worldwide mythologies, and fight against the greatest threat to humanity.</p>
5
- <h2>dislyte global download</h2><br /><p><b><b>Download Zip</b> &#10001; &#10001; &#10001; <a href="https://jinyurl.com/2uNL2D">https://jinyurl.com/2uNL2D</a></b></p><br /><br />
6
- <p>In this article, we will show you how to download and play Dislyte on PC and mobile devices, so that you can enjoy the game's high-quality soundtracks and graphics, as well as grind easier without draining your battery. We will also share some tips and tricks to improve your gaming experience.</p>
7
- <h2>What is Dislyte?</h2>
8
- <p>Dislyte is a pop-fantasy RPG developed by FARLIGHT and published by Lilith Games. It was released globally in May 2023, after a successful soft launch in selected regions. The game has received positive reviews from players and critics, who praised its unique art style, engaging gameplay, and diverse characters.</p>
9
- <p>Dislyte is inspired by various mythologies, such as Chinese, Egyptian, Greek, and Northern European. You can collect and customize over 100 Espers, each with their own skills, personalities, and appearances. You can also form teams with other players and participate in various modes, such as story mode, arena mode, raid mode, and more.</p>
10
- <h2>Why play Dislyte on PC and mobile?</h2>
11
- <p>Dislyte is a game that can be enjoyed on both PC and mobile devices. Playing Dislyte on PC has some advantages, such as:</p>
12
- <ul>
13
- <li>You can enjoy the game's stunning graphics and soundtracks on a bigger screen.</li>
14
- <li>You can use keyboard and mouse controls for better accuracy and comfort.</li>
15
- <li>You can grind levels and farm relics easier with auto mode.</li>
16
- <li>You don't have to worry about battery draining or overheating issues.</li>
17
- </ul>
18
- <p>Playing Dislyte on mobile devices also has some benefits, such as:</p>
19
- <ul>
20
- <li>You can play the game anytime and anywhere with an internet connection.</li>
21
- <li>You can use touch screen controls for more intuitive gameplay.</li>
22
- <li>You can receive notifications and updates from the game.</li>
23
- <li>You can connect your account with your social media platforms.</li>
24
- </ul>
25
- <p>No matter what device you choose to play Dislyte on, you will have a fun and immersive gaming experience.</p>
26
- <h2>How to download and play Dislyte on PC and Mac</h2>
27
- <p>If you want to play Dislyte on PC or Mac, you will need an emulator that can run Android apps on your computer. We recommend using LDPlayer, which is one of the best emulators for playing mobile games on PC. Here are the steps to download and play Dislyte on PC and Mac using LDPlayer:</p>
28
- <h3>Step 1: Download LDPlayer emulator</h3>
29
- <p>Go to <a href="(^1^)">this link</a> and download LDPlayer emulator for your PC or Mac. Make sure you download the 64-bit version if asked. After downloading, install LDPlayer on your computer by following the instructions.</p>
30
- <p>How to download and play Dislyte on PC, Mac & Mobile<br />
31
- Dislyte APK download for Android devices<br />
32
- Dislyte official website and social media links<br />
33
- Dislyte review and gameplay guide<br />
34
- Dislyte best espers and tier list<br />
35
- Dislyte codes and how to redeem them<br />
36
- Dislyte latest news and updates<br />
37
- Dislyte tips and tricks for beginners<br />
38
- Dislyte soundtrack and graphics quality<br />
39
- Dislyte system requirements and compatibility<br />
40
- Dislyte vs other pop-fantasy RPGs<br />
41
- Dislyte characters and their mythological origins<br />
42
- Dislyte story and lore overview<br />
43
- Dislyte PvP and PvE modes<br />
44
- Dislyte gacha system and rates<br />
45
- Dislyte relics and how to farm them<br />
46
- Dislyte team building and strategy<br />
47
- Dislyte events and rewards<br />
48
- Dislyte bugs and issues report<br />
49
- Dislyte fan art and community<br />
50
- Dislyte wiki and FAQ<br />
51
- Dislyte emulator download for PC users<br />
52
- Dislyte VPN download for region locked players<br />
53
- Dislyte mod apk download and features<br />
54
- Dislyte cheats and hacks warning<br />
55
- Dislyte support and customer service contact<br />
56
- Dislyte gameplay video and streamers recommendation<br />
57
- Dislyte memes and funny moments<br />
58
- Dislyte skins and costumes preview<br />
59
- Dislyte collaborations and crossover events<br />
60
- Dislyte reroll guide and best starter espers<br />
61
- Dislyte coupon codes and freebies giveaway<br />
62
- Dislyte QooApp download for iOS users<br />
63
- Dislyte discord server and reddit forum join link<br />
64
- Dislyte ratings and feedback from players<br />
65
- Dislyte developer interview and behind the scenes<br />
66
- Dislyte future plans and roadmap reveal<br />
67
- Dislyte comparison with Farlight 84, another game by Lilith Games<br />
68
- Dislyte global release date and countdown timer<br />
69
- Dislyte pre-registration rewards and how to claim them</p>
70
- <h3>Step 2: Install Dislyte from Google <li>A: For playing Dislyte on PC, you need a Windows 7 or higher operating system, an Intel or AMD CPU, 4 GB of RAM, and 4 GB of disk space. For playing Dislyte on mobile, you need an Android 5.0 or higher device with at least 2 GB of RAM and 3 GB of storage space.</li>
71
- <li><b>Q: How can I get more Espers in Dislyte?</b></li>
72
- <li>A: You can get more Espers in Dislyte by summoning them with crystals or tickets, which can be obtained from completing quests, events, achievements, or purchasing them with real money. You can also upgrade your Espers by enhancing their skills, relics, and star levels.</li>
73
- <li><b>Q: How can I join a guild in Dislyte?</b></li>
74
- <li>A: You can join a guild in Dislyte by tapping on the guild icon on the main screen and searching for a guild that suits your preferences. You can also create your own guild if you have enough crystals. Joining a guild will allow you to chat with other members, participate in guild wars, and receive guild rewards.</li>
75
- <li><b>Q: How can I contact the customer service of Dislyte?</b></li>
76
- <li>A: You can contact the customer service of Dislyte by tapping on the gear icon on the top right corner and then tapping on Customer Service. You can also send an email to [email protected] or visit their official website or social media pages for more information.</li>
77
- <li><b>Q: What are the best Espers to use in Dislyte?</b></li>
78
- <li>A: There is no definitive answer to this question, as different Espers have different strengths and weaknesses, and the best Espers may vary depending on your play style, team composition, and game mode. However, some of the popular Espers that are considered to be powerful and versatile are Zeus, Athena, Odin, Thor, Ra, Anubis, and Sun Wukong.</li>
79
- </ul></p> 197e85843d<br />
80
- <br />
81
- <br />
spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py DELETED
@@ -1,17 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- # flake8: noqa
16
-
17
- from .rl import ValueGuidedRLPipeline
spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py DELETED
@@ -1,17 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # flake8: noqa
17
- from .pipeline_ddpm import DDPMPipeline
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py DELETED
@@ -1,100 +0,0 @@
1
- import torch
2
- import numpy as np
3
- import librosa.util as librosa_util
4
- from scipy.signal import get_window
5
-
6
-
7
- def window_sumsquare(
8
- window,
9
- n_frames,
10
- hop_length,
11
- win_length,
12
- n_fft,
13
- dtype=np.float32,
14
- norm=None,
15
- ):
16
- """
17
- # from librosa 0.6
18
- Compute the sum-square envelope of a window function at a given hop length.
19
-
20
- This is used to estimate modulation effects induced by windowing
21
- observations in short-time fourier transforms.
22
-
23
- Parameters
24
- ----------
25
- window : string, tuple, number, callable, or list-like
26
- Window specification, as in `get_window`
27
-
28
- n_frames : int > 0
29
- The number of analysis frames
30
-
31
- hop_length : int > 0
32
- The number of samples to advance between frames
33
-
34
- win_length : [optional]
35
- The length of the window function. By default, this matches `n_fft`.
36
-
37
- n_fft : int > 0
38
- The length of each analysis frame.
39
-
40
- dtype : np.dtype
41
- The data type of the output
42
-
43
- Returns
44
- -------
45
- wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
46
- The sum-squared envelope of the window function
47
- """
48
- if win_length is None:
49
- win_length = n_fft
50
-
51
- n = n_fft + hop_length * (n_frames - 1)
52
- x = np.zeros(n, dtype=dtype)
53
-
54
- # Compute the squared window at the desired length
55
- win_sq = get_window(window, win_length, fftbins=True)
56
- win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2
57
- win_sq = librosa_util.pad_center(win_sq, n_fft)
58
-
59
- # Fill the envelope
60
- for i in range(n_frames):
61
- sample = i * hop_length
62
- x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
63
- return x
64
-
65
-
66
- def griffin_lim(magnitudes, stft_fn, n_iters=30):
67
- """
68
- PARAMS
69
- ------
70
- magnitudes: spectrogram magnitudes
71
- stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
72
- """
73
-
74
- angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size())))
75
- angles = angles.astype(np.float32)
76
- angles = torch.autograd.Variable(torch.from_numpy(angles))
77
- signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
78
-
79
- for i in range(n_iters):
80
- _, angles = stft_fn.transform(signal)
81
- signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
82
- return signal
83
-
84
-
85
- def dynamic_range_compression(x, normalize_fun=torch.log, C=1, clip_val=1e-5):
86
- """
87
- PARAMS
88
- ------
89
- C: compression factor
90
- """
91
- return normalize_fun(torch.clamp(x, min=clip_val) * C)
92
-
93
-
94
- def dynamic_range_decompression(x, C=1):
95
- """
96
- PARAMS
97
- ------
98
- C: compression factor used to compress
99
- """
100
- return torch.exp(x) / C
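
As a side note, here is a minimal sketch of how the helpers deleted above might be exercised. It assumes `numpy`, `torch`, `scipy`, and a `librosa` version whose `pad_center` accepts a positional size (as the code above calls it); all parameter values are illustrative, not taken from the original Space.

```python
import torch

# Illustrative STFT-style settings; any consistent window/hop pair works.
env = window_sumsquare("hann", n_frames=100, hop_length=256,
                       win_length=1024, n_fft=1024)
print(env.shape)  # (1024 + 256 * 99,) == (26368,)

# The two dynamic-range helpers are near-inverses: decompression applies
# exp(x) / C, undoing the default log-based compression for any input
# above the clip value of 1e-5.
x = torch.rand(4, 80, 100).clamp(min=1e-4)
y = dynamic_range_compression(x)
assert torch.allclose(dynamic_range_decompression(y), x)
```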
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py DELETED
@@ -1,106 +0,0 @@
1
- """ timm model adapter
2
-
3
- Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model.
4
- """
5
- from collections import OrderedDict
6
-
7
- import torch.nn as nn
8
-
9
- try:
10
- import timm
11
- from timm.models.layers import Mlp, to_2tuple
12
- from timm.models.layers.attention_pool2d import RotAttentionPool2d
13
- from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d
14
- except ImportError as e:
15
- timm = None
16
-
17
- from .utils import freeze_batch_norm_2d
18
-
19
-
20
- class TimmModel(nn.Module):
21
- """ timm model adapter
22
- # FIXME this adapter is a work in progress, may change in ways that break weight compat
23
- """
24
-
25
- def __init__(
26
- self,
27
- model_name,
28
- embed_dim,
29
- image_size=224,
30
- pool='avg',
31
- proj='linear',
32
- drop=0.,
33
- pretrained=False):
34
- super().__init__()
35
- if timm is None:
36
- raise RuntimeError("Please `pip install timm` to use timm models.")
37
-
38
- self.image_size = to_2tuple(image_size)
39
- self.trunk = timm.create_model(model_name, pretrained=pretrained)
40
- feat_size = self.trunk.default_cfg.get('pool_size', None)
41
- feature_ndim = 1 if not feat_size else 2
42
- if pool in ('abs_attn', 'rot_attn'):
43
- assert feature_ndim == 2
44
- # if attn pooling used, remove both classifier and default pool
45
- self.trunk.reset_classifier(0, global_pool='')
46
- else:
47
- # reset global pool if pool config set, otherwise leave as network default
48
- reset_kwargs = dict(global_pool=pool) if pool else {}
49
- self.trunk.reset_classifier(0, **reset_kwargs)
50
- prev_chs = self.trunk.num_features
51
-
52
- head_layers = OrderedDict()
53
- if pool == 'abs_attn':
54
- head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim)
55
- prev_chs = embed_dim
56
- elif pool == 'rot_attn':
57
- head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim)
58
- prev_chs = embed_dim
59
- else:
60
- assert proj, 'projection layer needed if non-attention pooling is used.'
61
-
62
- # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used
63
- if proj == 'linear':
64
- head_layers['drop'] = nn.Dropout(drop)
65
- head_layers['proj'] = nn.Linear(prev_chs, embed_dim)
66
- elif proj == 'mlp':
67
- head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop)
68
-
69
- self.head = nn.Sequential(head_layers)
70
-
71
- def lock(self, unlocked_groups=0, freeze_bn_stats=False):
72
- """ lock modules
73
- Args:
74
- unlocked_groups (int): leave last n layer groups unlocked (default: 0)
75
- """
76
- if not unlocked_groups:
77
- # lock full model
78
- for param in self.trunk.parameters():
79
- param.requires_grad = False
80
- if freeze_bn_stats:
81
- freeze_batch_norm_2d(self.trunk)
82
- else:
83
- # NOTE: partial freeze requires latest timm (master) branch and is subject to change
84
- try:
85
- # FIXME import here until API stable and in an official release
86
- from timm.models.helpers import group_parameters, group_modules
87
- except ImportError:
88
- raise RuntimeError(
89
- 'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`')
90
- matcher = self.trunk.group_matcher()
91
- gparams = group_parameters(self.trunk, matcher)
92
- max_layer_id = max(gparams.keys())
93
- max_layer_id = max_layer_id - unlocked_groups
94
- for group_idx in range(max_layer_id + 1):
95
- group = gparams[group_idx]
96
- for param in group:
97
- self.trunk.get_parameter(param).requires_grad = False
98
- if freeze_bn_stats:
99
- gmodules = group_modules(self.trunk, matcher, reverse=True)
100
- gmodules = {k for k, v in gmodules.items() if v <= max_layer_id}
101
- freeze_batch_norm_2d(self.trunk, gmodules)
102
-
103
- def forward(self, x):
104
- x = self.trunk(x)
105
- x = self.head(x)
106
- return x
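
For orientation, a hypothetical instantiation of the adapter deleted above; the model name, embedding size, and lock settings are placeholder choices, and it assumes `timm` is installed and `TimmModel` is importable from its original module path.

```python
import torch

# "resnet50" and embed_dim=512 are illustrative, not values from the
# original Space.
tower = TimmModel("resnet50", embed_dim=512, image_size=224,
                  pool="avg", proj="linear", pretrained=False)

# Freeze all but the last parameter group, and freeze BatchNorm running
# statistics in the frozen portion (per the code above, partial freezing
# needs a timm version that provides group_parameters/group_modules).
tower.lock(unlocked_groups=1, freeze_bn_stats=True)

feats = tower(torch.randn(2, 3, 224, 224))
print(feats.shape)  # torch.Size([2, 512])
```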
spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py DELETED
@@ -1,171 +0,0 @@
- import torch
- from torch import nn
- from tasks.tts.ps_adv import PortaSpeechAdvTask, FastSpeechTask
- from text_to_speech.utils.commons.hparams import hparams
-
-
- class PortaSpeechAdvMLMTask(PortaSpeechAdvTask):
-
-     def build_optimizer(self, model):
-         optimizer_gen = torch.optim.AdamW(
-             self.model.parameters(),
-             lr=hparams['lr'],
-             betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-             weight_decay=hparams['weight_decay'])
-
-         optimizer_disc = torch.optim.AdamW(
-             self.disc_params,
-             lr=hparams['disc_lr'],
-             betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-             **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None
-
-         optimizer_encoder = torch.optim.AdamW(
-             self.model.encoder.parameters(),
-             lr=hparams['lr'],
-             betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-             weight_decay=hparams['weight_decay'])
-         return [optimizer_gen, optimizer_disc, optimizer_encoder]
-
-     def build_scheduler(self, optimizer):
-         return [
-             FastSpeechTask.build_scheduler(self, optimizer[0]),  # generator scheduler
-             torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1],  # discriminator scheduler
-                                             **hparams["discriminator_scheduler_params"]),
-             FastSpeechTask.build_scheduler(self, optimizer[2]),  # encoder scheduler
-         ]
-
-     def on_before_optimization(self, opt_idx):
-         if opt_idx in [0, 2]:
-             nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
-             if self.use_graph_encoder:
-                 nn.utils.clip_grad_norm_(self.gen_params_except_gae_and_dp, hparams['clip_grad_norm'])
-                 nn.utils.clip_grad_norm_(self.gae_params, hparams['clip_grad_norm'])
-             elif self.use_bert:
-                 nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
-                 nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
-             else:
-                 nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
-         else:
-             nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
-     def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
-         if self.scheduler is not None:
-             self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
-             self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
-             self.scheduler[2].step(self.global_step // hparams['accumulate_grad_batches'])
-
-     def _training_step(self, sample, batch_idx, optimizer_idx):
-         loss_output = {}
-         loss_weights = {}
-         disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
-         if optimizer_idx == 0:
-             #######################
-             #      Generator      #
-             #######################
-             loss_output, model_out = self.run_model(sample, infer=False)
-             self.model_out_gt = self.model_out = \
-                 {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
-             if disc_start:
-                 mel_p = model_out['mel_out']
-                 if hasattr(self.model, 'out2mel'):
-                     mel_p = self.model.out2mel(mel_p)
-                 o_ = self.mel_disc(mel_p)
-                 p_, pc_ = o_['y'], o_['y_c']
-                 if p_ is not None:
-                     loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
-                     loss_weights['a'] = hparams['lambda_mel_adv']
-                 if pc_ is not None:
-                     loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
-                     loss_weights['ac'] = hparams['lambda_mel_adv']
-         elif optimizer_idx == 1:
-             #######################
-             #    Discriminator    #
-             #######################
-             if disc_start and self.global_step % hparams['disc_interval'] == 0:
-                 model_out = self.model_out_gt
-                 mel_g = sample['mels']
-                 mel_p = model_out['mel_out']
-                 o = self.mel_disc(mel_g)
-                 p, pc = o['y'], o['y_c']
-                 o_ = self.mel_disc(mel_p)
-                 p_, pc_ = o_['y'], o_['y_c']
-                 if p_ is not None:
-                     loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
-                     loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
-                 if pc_ is not None:
-                     loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
-                     loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
-         else:
-             loss_output, model_out = self.run_contrastive_learning(sample)
-
-         total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad])
-         loss_output['batch_size'] = sample['txt_tokens'].size()[0]
-         return total_loss, loss_output
-
-     def run_contrastive_learning(self, sample):
-         losses = {}
-         outputs = {}
-
-         bert = self.model.encoder.bert
-         pooler = self.model.encoder.pooler
-         sim = self.model.encoder.sim
-         # electra_gen = self.model.encoder.electra_gen
-         # electra_disc = self.model.encoder.electra_disc
-         # electra_head = self.model.encoder.electra_head
-
-         cl_feats = sample['cl_feats']
-         bs, _, t = cl_feats['cl_input_ids'].shape
-         cl_input_ids = cl_feats['cl_input_ids'].reshape([bs * 2, t])
-         cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs * 2, t])
-         cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs * 2, t])
-         cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-         pooler_output = pooler(cl_attention_mask, cl_output)
-         pooler_output = pooler_output.reshape([bs, 2, -1])
-         z1, z2 = pooler_output[:, 0], pooler_output[:, 1]
-
-         cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-         labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-         ce_fn = nn.CrossEntropyLoss()
-         cl_loss = ce_fn(cos_sim, labels)
-         losses['cl_v'] = cl_loss.detach()
-         losses['cl'] = cl_loss * hparams['lambda_mlm']
-
-         # The ELECTRA-style MLM branch below is kept for reference but disabled:
-         # mlm_input_ids = cl_feats['mlm_input_ids']
-         # mlm_input_ids = mlm_input_ids.view((-1, mlm_input_ids.size(-1)))
-         # with torch.no_grad():
-         #     g_pred = electra_gen(mlm_input_ids, cl_attention_mask)[0].argmax(-1)
-         # g_pred[:, 0] = 101  # CLS token
-         # replaced = (g_pred != cl_input_ids) * cl_attention_mask
-         # e_inputs = g_pred * cl_attention_mask
-         # mlm_outputs = electra_disc(
-         #     e_inputs,
-         #     attention_mask=cl_attention_mask,
-         #     token_type_ids=cl_token_type_ids,
-         #     position_ids=None,
-         #     head_mask=None,
-         #     inputs_embeds=None,
-         #     output_attentions=None,
-         #     output_hidden_states=False,  # True if cls.model_args.pooler_type in ['avg_top2', 'avg_first_last'] else False
-         #     return_dict=True,
-         #     cls_input=pooler_output.view((-1, pooler_output.size(-1))),
-         # )
-         # e_labels = replaced.view(-1, replaced.size(-1))
-         # prediction_scores = electra_head(mlm_outputs.last_hidden_state)
-         # # rep = (e_labels == 1) * cl_attention_mask
-         # # fix = (e_labels == 0) * cl_attention_mask
-         # # prediction = prediction_scores.argmax(-1)
-         # # self.electra_rep_acc = float((prediction*rep).sum()/rep.sum())
-         # # self.electra_fix_acc = float(1.0 - (prediction*fix).sum()/fix.sum())
-         # # self.electra_acc = float(((prediction == e_labels) * cl_attention_mask).sum()/cl_attention_mask.sum())
-         # masked_lm_loss = ce_fn(prediction_scores.view(-1, 2), e_labels.view(-1))
-         # losses['mlm_v'] = masked_lm_loss.detach()
-         # losses['mlm'] = masked_lm_loss * hparams['lambda_mlm']
-
-         return losses, outputs
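The run_contrastive_learning branch above is an in-batch InfoNCE (SimCSE-style) objective. A self-contained sketch of the same loss, with a plain cosine-similarity head standing in for self.model.encoder.sim and an illustrative 0.05 temperature:

# Standalone sketch of the in-batch contrastive loss computed above; the
# random tensors stand in for the two pooled BERT views of each sentence.
import torch
from torch import nn

z1, z2 = torch.randn(8, 256), torch.randn(8, 256)       # two views per sentence
cos = nn.CosineSimilarity(dim=-1)
cos_sim = cos(z1.unsqueeze(1), z2.unsqueeze(0)) / 0.05  # [8, 8] similarity matrix
labels = torch.arange(cos_sim.size(0))                  # positives on the diagonal
loss = nn.CrossEntropyLoss()(cos_sim, labels)
print(loss.item())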
spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py DELETED
@@ -1,84 +0,0 @@
-
- # SOP========================================================================================================
- # "environment_prompt"
- # current_state , self(sop)
- Get_environment_prompt = "f\"Here is the description of the current scenario:{self.current_state.environment_prompt};\\n\""
-
-
- # sop.transit
- #================================================================
- Transit_system_prompt = "f\"{environment_prompt};\\n{judge_system_prompt}\\n\""
-
- # transit chat message
- # "environment_prompt" is obtained from "Get_environment_prompt"; "chat_history_message" is from Memory
- Transit_message = "f\"{environment_summary};\\n Here is the chat history:\\n {chat_history_message};\\nHere is the last query you especially need to pay attention to:\\n{query};\\n Here is the relevant conversation: \\n{relevant_history} \\n\\n\""
-
-
- Transit_last_prompt = "f\"{judge_last_prompt}\""
- #sop.transit================================================================
-
- # sop.call
- #================================================================
- # help the controller determine the next role to speak (the {} is the agent role). call_prompt + allocate_component
- Allocate_component = "f\"If it's currently supposed to be speaking for {role}, then output <end>{role}</end>.\\n\""
-
- # environment_prompt is obtained from "Get_environment_prompt"; "chat_history_message" is from Memory
- Call_system_prompt = "f\"{environment_prompt};\\n{call_system_prompt};\\n{allocate_prompt}.\\n\""
-
- #
- Call_last_prompt = "f\"Here is the last query you especially need to pay attention to:\\n{query};\\n Here is the relevant conversation :\\n{relevant_history};\\nNow please choose the person to speak according to the following rules :{allocate_prompt};\\nNote: The person whose turn it is now cannot be the same as the person who spoke last time, so {last_name} cannot be output\\n.\""
-
- Call_message = "f\"Here is the chat history:\\n{chat_history_message};\\nHere is the name of the person who spoke last: {last_name}.\\n \""
- #sop.call================================================================
- # SOP========================================================================================================
-
-
-
-
-
-
- # Memory========================================================================================================
- Single_message = "f\"role: {role} \\n speak content : {content}; \""
-
- Chat_total_message = "f\"<chat history>{{{chat_history}}}</chat history>\""
- # Memory========================================================================================================
-
-
-
-
-
-
- # Environment========================================================================================================
- Default_environment_summary_system_prompt = "\"\\nYour task is to summarize the historical dialogue records according to the current scene, and summarize the most important information\""
-
- Default_environment_summary_last_prompt = "\"Please make a summary based on the historical chat records, the output format is history summary: \{your summary content\} \""
-
- Environment_summary_memory = "f\"Here is the information you need to know:\\n\\n\
- Here is the summary of the previous dialogue history:\\n{summary}.\\n\
- Here is the latest conversation record:\\n {chat_history},\\n\
- Here is the relevant chat history you may need:{relevant_history}.\\n\""
-
- Environment_summary_system_prompt = "f\"{environment_prompt};\\n{current_memory};\\n{summary_system_prompt};\\n\""
-
-
- # observe
- Agent_observe_relevant_memory = "f\"\\n{relevant_memory}. \\n\""
-
-
- Agent_observe_memory = "f\"Here's what you need to know (remember, this is just information; try not to repeat what's inside):\\nHere is the relevant chat history you may need:{relevant_memory};\\n\
- Here is the previous summary of chat history :\\n{agent.short_term_memory}.\\n\
- Here is the relevant memory :\\n{agent.relevant_memory}.\\n\
- Here is the new chat history:\\n {conversations};\\n\
- \""
- # Environment========================================================================================================
-
-
-
-
- # Agent========================================================================================================
- Agent_summary_system_prompt = "f\"{summary_prompt};\\n Here is the past summary:{self.short_term_memory};\\nHere is the new chat_history:\\n{conversations};\\nPlease summarize based on the above information;\\n\""
-
- Agent_last_prompt = "f\"{last_prompt};Please continue the talk based on your known information;Remember that you just represent {name}, do not speak for others,just speak as normal.\""
-
- Agent_system_prompt = "f\"{system_prompt},\""
- # Agent========================================================================================================
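These constants store the *source text* of f-strings (note the escaped f\"...\"), so the framework presumably renders them by evaluating each template in a scope that binds the placeholder names. A minimal sketch of that pattern, with illustrative values:

# Minimal sketch: each constant is the source of an f-string, so eval()ing
# it where the placeholder names are bound produces the final prompt.
template = "f\"Here is the last query you especially need to pay attention to:\\n{query};\\n\""
query = "Summarize the meeting."
prompt = eval(template)
print(prompt)  # Here is the last query you especially need to pay attention to:
               # Summarize the meeting.;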
spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py DELETED
@@ -1,19 +0,0 @@
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
- """
- Perform test request
- """
-
- import pprint
-
- import requests
-
- DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
- IMAGE = "zidane.jpg"
-
- # Read image
- with open(IMAGE, "rb") as f:
-     image_data = f.read()
-
- response = requests.post(DETECTION_URL, files={"image": image_data}).json()
-
- pprint.pprint(response)
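The client above assumes a matching Flask endpoint at /v1/object-detection/yolov5s. A hypothetical minimal server that would satisfy it (the real restapi.py in the YOLOv5 repo may differ; loading the model via torch.hub is an assumption):

# Hypothetical minimal server for the client above (needs flask, torch, and
# network access so torch.hub can fetch the pretrained yolov5s weights).
import io

import torch
from flask import Flask, request
from PIL import Image

app = Flask(__name__)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained detector

@app.route("/v1/object-detection/yolov5s", methods=["POST"])
def predict():
    im = Image.open(io.BytesIO(request.files["image"].read()))
    results = model(im, size=640)  # run inference
    return results.pandas().xyxy[0].to_json(orient="records")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)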
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts DELETED
@@ -1,6 +0,0 @@
- import Rings from './Rings';
- import Base from '../base/Base';
-
- export default function Factory(
-     config?: Base.IConfig
- ): Rings;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js DELETED
@@ -1,13 +0,0 @@
- import PerspectiveCard from './PerspectiveCard.js';
- import ObjectFactory from '../ObjectFactory.js';
- import SetValue from '../../../plugins/utils/object/SetValue.js';
-
- ObjectFactory.register('perspectiveCard', function (config) {
-     var gameObject = new PerspectiveCard(this.scene, config);
-     this.scene.add.existing(gameObject);
-     return gameObject;
- });
-
- SetValue(window, 'RexPlugins.UI.PerspectiveCard', PerspectiveCard);
-
- export default PerspectiveCard;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js DELETED
@@ -1,2 +0,0 @@
- import { Rotate } from '../../../plugins/gestures.js';
- export default Rotate;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts DELETED
@@ -1,5 +0,0 @@
- import TabPages from './TabPages';
-
- export default function (
-     config?: TabPages.IConfig
- ): TabPages;
spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/stablediffusionapi/rev-animated").launch()
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py DELETED
@@ -1,566 +0,0 @@
- import traceback
- from toolbox import update_ui, get_conf
-
- def input_clipping(inputs, history, max_token_limit):
-     import numpy as np
-     from request_llm.bridge_all import model_info
-     enc = model_info["gpt-3.5-turbo"]['tokenizer']
-     def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-
-     mode = 'input-and-history'
-     # when the input takes up less than half of the token budget, only clip the history
-     input_token_num = get_token_num(inputs)
-     if input_token_num < max_token_limit//2:
-         mode = 'only-history'
-         max_token_limit = max_token_limit - input_token_num
-
-     everything = [inputs] if mode == 'input-and-history' else ['']
-     everything.extend(history)
-     n_token = get_token_num('\n'.join(everything))
-     everything_token = [get_token_num(e) for e in everything]
-     delta = max(everything_token) // 16  # granularity of truncation
-
-     while n_token > max_token_limit:
-         where = np.argmax(everything_token)
-         encoded = enc.encode(everything[where], disallowed_special=())
-         clipped_encoded = encoded[:len(encoded)-delta]
-         everything[where] = enc.decode(clipped_encoded)[:-1]  # -1 to drop the last char, which may be a broken token
-         everything_token[where] = get_token_num(everything[where])
-         n_token = get_token_num('\n'.join(everything))
-
-     if mode == 'input-and-history':
-         inputs = everything[0]
-     else:
-         pass
-     history = everything[1:]
-     return inputs, history
-
- def request_gpt_model_in_new_thread_with_ui_alive(
-         inputs, inputs_show_user, llm_kwargs,
-         chatbot, history, sys_prompt, refresh_interval=0.2,
-         handle_token_exceed=True,
-         retry_times_at_unknown_error=2,
-         ):
-     """
-     Request a GPT model while keeping the user interface alive.
-
-     Args:
-         inputs (string): the input query
-         inputs_show_user (string): the input as shown to the user (this lets the summary report hide verbose raw inputs and stay readable)
-         top_p (float): top-p value for sampling from the model distribution
-         temperature (float): temperature for sampling from the model distribution
-         chatbot: chatbot UI handle, used to visualize the data stream
-         history (list): chat history
-         sys_prompt (string): the system prompt fed to GPT, e.g. "you are a translator, ..."
-         refresh_interval (float, optional): UI refresh interval (default: 0.2); keep it below 1 and never above 3, it only affects the visuals
-         handle_token_exceed: whether to handle token overflow automatically; if enabled, the text is forcefully truncated on overflow (enabled by default)
-         retry_times_at_unknown_error: number of retries on failure
-
-     Returns:
-         future: the result returned by GPT
-     """
-     import time
-     from concurrent.futures import ThreadPoolExecutor
-     from request_llm.bridge_all import predict_no_ui_long_connection
-     # user feedback
-     chatbot.append([inputs_show_user, ""])
-     yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-     executor = ThreadPoolExecutor(max_workers=16)
-     mutable = ["", time.time(), ""]
-     def _req_gpt(inputs, history, sys_prompt):
-         retry_op = retry_times_at_unknown_error
-         exceeded_cnt = 0
-         while True:
-             # watchdog error
-             if len(mutable) >= 2 and (time.time()-mutable[1]) > 5:
-                 raise RuntimeError("Program termination detected.")
-             try:
-                 # Case 1: completed successfully
-                 result = predict_no_ui_long_connection(
-                     inputs=inputs, llm_kwargs=llm_kwargs,
-                     history=history, sys_prompt=sys_prompt, observe_window=mutable)
-                 return result
-             except ConnectionAbortedError as token_exceeded_error:
-                 # Case 2: token overflow
-                 if handle_token_exceed:
-                     exceeded_cnt += 1
-                     # handle it: estimate the ratio and keep as much text as possible
-                     from toolbox import get_reduce_token_percent
-                     p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                     MAX_TOKEN = 4096
-                     EXCEED_ALLO = 512 + 512 * exceeded_cnt
-                     inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
-                     mutable[0] += f'[Local Message] Warning: the text is too long and will be truncated. Token overflow: {n_exceed}.\n\n'
-                     continue  # retry
-                 else:
-                     # give up
-                     tb_str = '```\n' + traceback.format_exc() + '```'
-                     mutable[0] += f"[Local Message] Warning: a problem occurred during execution. Traceback:\n\n{tb_str}\n\n"
-                     return mutable[0]  # give up
-             except:
-                 # Case 3: other errors; retry a few times
-                 tb_str = '```\n' + traceback.format_exc() + '```'
-                 print(tb_str)
-                 mutable[0] += f"[Local Message] Warning: a problem occurred during execution. Traceback:\n\n{tb_str}\n\n"
-                 if retry_op > 0:
-                     retry_op -= 1
-                     mutable[0] += f"[Local Message] Retrying, please wait {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n"
-                     if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
-                         time.sleep(30)
-                     time.sleep(5)
-                     continue  # retry
-                 else:
-                     time.sleep(5)
-                     return mutable[0]  # give up
-
-     # submit the task
-     future = executor.submit(_req_gpt, inputs, history, sys_prompt)
-     while True:
-         # yield once to refresh the frontend
-         time.sleep(refresh_interval)
-         # feed the watchdog
-         mutable[1] = time.time()
-         if future.done():
-             break
-         chatbot[-1] = [chatbot[-1][0], mutable[0]]
-         yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-
-     final_result = future.result()
-     chatbot[-1] = [chatbot[-1][0], final_result]
-     yield from update_ui(chatbot=chatbot, history=[])  # on final success, the error messages are cleared
-     return final_result
-
-
- def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
-         inputs_array, inputs_show_user_array, llm_kwargs,
-         chatbot, history_array, sys_prompt_array,
-         refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
-         handle_token_exceed=True, show_user_at_complete=False,
-         retry_times_at_unknown_error=2,
-         ):
-     """
-     Request a GPT model using multiple threads, with a live UI and high efficiency.
-     Features:
-         streams remote data to the UI in real time
-         uses a thread pool whose size can be tuned to avoid OpenAI rate-limit errors
-         handles mid-run interruption
-         on network problems, the traceback and any data received so far are forwarded to the output
-
-     Args (inputs ending in _array are lists, one element per sub-task; the list is unpacked and each element runs in its own thread):
-         inputs_array (list): inputs for each sub-task
-         inputs_show_user_array (list): inputs as shown to the user for each sub-task (lets the summary report hide verbose raw inputs and stay readable)
-         llm_kwargs: llm_kwargs parameters
-         chatbot: chatbot UI handle, used to visualize the data stream
-         history_array (list): list of chat histories (a list of lists: outer level = sub-tasks, inner level = the history for that sub-task)
-         sys_prompt_array (list): system prompts fed to GPT, e.g. "you are a translator, ..."
-         refresh_interval (float, optional): UI refresh interval (default: 0.2); keep it below 1 and never above 3, it only affects the visuals
-         max_workers (int, optional): maximum number of threads (default: see config.py); needed to keep many sub-tasks from hammering OpenAI and triggering errors
-         scroller_max_len (int, optional): how many trailing characters of the stream to display (default: 30); visual only
-         handle_token_exceed (bool, optional): whether to shrink the text automatically when the input is too long; if enabled, the text is forcefully truncated on overflow (enabled by default)
-         show_user_at_complete (bool, optional): show the full input-output pairs in the chat window when finished
-         retry_times_at_unknown_error: number of retries when a sub-task fails
-
-     Returns:
-         list: the collected outputs of all sub-tasks (if a sub-task fails, its response carries the traceback, which helps debugging and locating the problem)
-     """
-     import time, random
-     from concurrent.futures import ThreadPoolExecutor
-     from request_llm.bridge_all import predict_no_ui_long_connection
-     assert len(inputs_array) == len(history_array)
-     assert len(inputs_array) == len(sys_prompt_array)
-     if max_workers == -1:  # read the config file
-         try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
-         except: max_workers = 8
-         if max_workers <= 0 or max_workers >= 20: max_workers = 8
-     # disable multi-threading for chatglm; it may cause severe stalls
-     if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
-         max_workers = 1
-
-     executor = ThreadPoolExecutor(max_workers=max_workers)
-     n_frag = len(inputs_array)
-     # user feedback
-     chatbot.append(["Starting multi-threaded operation.", ""])
-     yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-     # state shared across threads
-     mutable = [["", time.time(), "waiting"] for _ in range(n_frag)]
-
-     # worker-thread task
-     def _req_gpt(index, inputs, history, sys_prompt):
-         gpt_say = ""
-         retry_op = retry_times_at_unknown_error
-         exceeded_cnt = 0
-         mutable[index][2] = "running"
-         while True:
-             # watchdog error
-             if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5:
-                 raise RuntimeError("Program termination detected.")
-             try:
-                 # Case 1: completed successfully
-                 # time.sleep(10); raise RuntimeError("test")
-                 gpt_say = predict_no_ui_long_connection(
-                     inputs=inputs, llm_kwargs=llm_kwargs, history=history,
-                     sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
-                 )
-                 mutable[index][2] = "done"
-                 return gpt_say
-             except ConnectionAbortedError as token_exceeded_error:
-                 # Case 2: token overflow
-                 if handle_token_exceed:
-                     exceeded_cnt += 1
-                     # handle it: estimate the ratio and keep as much text as possible
-                     from toolbox import get_reduce_token_percent
-                     p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                     MAX_TOKEN = 4096
-                     EXCEED_ALLO = 512 + 512 * exceeded_cnt
-                     inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
-                     gpt_say += f'[Local Message] Warning: the text is too long and will be truncated. Token overflow: {n_exceed}.\n\n'
-                     mutable[index][2] = "truncated, retrying"
-                     continue  # retry
-                 else:
-                     # give up
-                     tb_str = '```\n' + traceback.format_exc() + '```'
-                     gpt_say += f"[Local Message] Warning: thread {index} ran into a problem. Traceback:\n\n{tb_str}\n\n"
-                     if len(mutable[index][0]) > 0: gpt_say += "Answer received before this thread failed:\n\n" + mutable[index][0]
-                     mutable[index][2] = "input too long, gave up"
-                     return gpt_say  # give up
-             except:
-                 # Case 3: other errors
-                 tb_str = '```\n' + traceback.format_exc() + '```'
-                 print(tb_str)
-                 gpt_say += f"[Local Message] Warning: thread {index} ran into a problem. Traceback:\n\n{tb_str}\n\n"
-                 if len(mutable[index][0]) > 0: gpt_say += "Answer received before this thread failed:\n\n" + mutable[index][0]
-                 if retry_op > 0:
-                     retry_op -= 1
-                     wait = random.randint(5, 20)
-                     if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
-                         wait = wait * 3
-                         fail_info = "Binding a credit card to OpenAI lifts the rate limit "
-                     else:
-                         fail_info = ""
-                     # things may look better after waiting a dozen seconds
-                     for i in range(wait):
-                         mutable[index][2] = f"{fail_info}waiting to retry {wait-i}"; time.sleep(1)
-                     # start retrying
-                     mutable[index][2] = f"retrying {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}"
-                     continue  # retry
-                 else:
-                     mutable[index][2] = "failed"
-                     wait = 5
-                     time.sleep(5)
-                     return gpt_say  # give up
-
-     # async tasks start
-     futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
-         range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
-     cnt = 0
-     while True:
-         # yield once to refresh the frontend
-         time.sleep(refresh_interval)
-         cnt += 1
-         worker_done = [h.done() for h in futures]
-         if all(worker_done):
-             executor.shutdown()
-             break
-         # nicer UI feedback
-         observe_win = []
-         # each thread must feed the watchdog
-         for thread_index, _ in enumerate(worker_done):
-             mutable[thread_index][1] = time.time()
-         # print something entertaining on the frontend
-         for thread_index, _ in enumerate(worker_done):
-             print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                 replace('\n', '').replace('```', '...').replace(
-                     ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
-             observe_win.append(print_something_really_funny)
-         stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
-                             if not done else f'`{mutable[thread_index][2]}`\n\n'
-                             for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
-         chatbot[-1] = [chatbot[-1][0], f'Multi-threaded operation started. Progress: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
-         yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-
-     # async tasks finished
-     gpt_response_collection = []
-     for inputs_show_user, f in zip(inputs_show_user_array, futures):
-         gpt_res = f.result()
-         gpt_response_collection.extend([inputs_show_user, gpt_res])
-
-     # optionally show the results in the UI at the end
-     if show_user_at_complete:
-         for inputs_show_user, f in zip(inputs_show_user_array, futures):
-             gpt_res = f.result()
-             chatbot.append([inputs_show_user, gpt_res])
-             yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-             time.sleep(0.3)
-     return gpt_response_collection
-
-
- def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
-     def cut(txt_tocut, must_break_at_empty_line):  # recursive
-         if get_token_fn(txt_tocut) <= limit:
-             return [txt_tocut]
-         else:
-             lines = txt_tocut.split('\n')
-             estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-             estimated_line_cut = int(estimated_line_cut)
-             for cnt in reversed(range(estimated_line_cut)):
-                 if must_break_at_empty_line:
-                     if lines[cnt] != "":
-                         continue
-                 prev = "\n".join(lines[:cnt])
-                 post = "\n".join(lines[cnt:])
-                 if get_token_fn(prev) < limit:
-                     break
-             if cnt == 0:
-                 raise RuntimeError("A single line is extremely long!")
-             # print(len(post))
-             # recursively chain the list
-             result = [prev]
-             result.extend(cut(post, must_break_at_empty_line))
-             return result
-     try:
-         return cut(txt, must_break_at_empty_line=True)
-     except RuntimeError:
-         return cut(txt, must_break_at_empty_line=False)
-
-
- def force_breakdown(txt, limit, get_token_fn):
-     """
-     When the text cannot be split on punctuation or blank lines, fall back to the most brute-force cut.
-     """
-     for i in reversed(range(len(txt))):
-         if get_token_fn(txt[:i]) < limit:
-             return txt[:i], txt[i:]
-     return "Unknown Tiktoken error", "Unknown Tiktoken error"
-
- def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
-     # recursive
-     def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
-         if get_token_fn(txt_tocut) <= limit:
-             return [txt_tocut]
-         else:
-             lines = txt_tocut.split('\n')
-             estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-             estimated_line_cut = int(estimated_line_cut)
-             cnt = 0
-             for cnt in reversed(range(estimated_line_cut)):
-                 if must_break_at_empty_line:
-                     if lines[cnt] != "":
-                         continue
-                 prev = "\n".join(lines[:cnt])
-                 post = "\n".join(lines[cnt:])
-                 if get_token_fn(prev) < limit:
-                     break
-             if cnt == 0:
-                 if break_anyway:
-                     prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
-                 else:
-                     raise RuntimeError(f"A single line is extremely long! {txt_tocut}")
-             # print(len(post))
-             # recursively chain the list
-             result = [prev]
-             result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
-             return result
-     try:
-         # 1st attempt: split on double newlines (\n\n)
-         return cut(txt, must_break_at_empty_line=True)
-     except RuntimeError:
-         try:
-             # 2nd attempt: split on single newlines (\n)
-             return cut(txt, must_break_at_empty_line=False)
-         except RuntimeError:
-             try:
-                 # 3rd attempt: split on English full stops (.)
-                 res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False)  # the Chinese full stop is intentional; it serves as a marker
-                 return [r.replace('。\n', '.') for r in res]
-             except RuntimeError as e:
-                 try:
-                     # 4th attempt: split on Chinese full stops (。)
-                     res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
-                     return [r.replace('。。\n', '。') for r in res]
-                 except RuntimeError as e:
-                     # 5th attempt: out of options, just cut anywhere
-                     return cut(txt, must_break_at_empty_line=False, break_anyway=True)
-
-
-
- def read_and_clean_pdf_text(fp):
-     """
-     This function splits a PDF. It uses many tricks and the logic is messy, but the results are surprisingly good.
-
-     **Input**
-     - `fp`: path of the PDF file whose text should be read and cleaned
-
-     **Output**
-     - `meta_txt`: the cleaned text content as a string
-     - `page_one_meta`: list of cleaned text blocks from the first page
-
-     **What it does**
-     Reads a PDF file and cleans its text content. The cleaning rules include:
-     - extract the text of all blocks and merge them into one string
-     - drop short blocks (fewer than 100 characters), replacing them with newlines
-     - remove redundant blank lines
-     - merge paragraph blocks that start with a lowercase letter, joining them with a space
-     - remove duplicated newlines
-     - replace each newline with two newlines so every paragraph is separated by two newlines
-     """
-     import fitz, copy
-     import re
-     import numpy as np
-     from colorful import print亮黄, print亮绿
-     fc = 0  # index 0: text
-     fs = 1  # index 1: font size
-     fb = 2  # index 2: bounding box
-     REMOVE_FOOT_NOTE = True  # whether to drop non-body content (smaller font than the body, e.g. references, footnotes, captions)
-     REMOVE_FOOT_FFSIZE_PERCENT = 0.95  # treat text smaller than this fraction of the body font as non-body (body font size is not always perfectly uniform)
-     def primary_ffsize(l):
-         """
-         Extract the dominant font size of a text line.
-         """
-         fsize_statiscs = {}
-         for wtf in l['spans']:
-             if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
-             fsize_statiscs[wtf['size']] += len(wtf['text'])
-         return max(fsize_statiscs, key=fsize_statiscs.get)
-
-     def ffsize_same(a, b):
-         """
-         Check whether two font sizes are approximately equal.
-         """
-         return abs((a-b)/max(a, b)) < 0.02
-
-     with fitz.open(fp) as doc:
-         meta_txt = []
-         meta_font = []
-
-         meta_line = []
-         meta_span = []
-         ############################## <Step 1: collect initial information> ##################################
-         for index, page in enumerate(doc):
-             # file_content += page.get_text()
-             text_areas = page.get_text("dict")  # get the text info on the page
-             for t in text_areas['blocks']:
-                 if 'lines' in t:
-                     pf = 998
-                     for l in t['lines']:
-                         txt_line = "".join([wtf['text'] for wtf in l['spans']])
-                         if len(txt_line) == 0: continue
-                         pf = primary_ffsize(l)
-                         meta_line.append([txt_line, pf, l['bbox'], l])
-                         for wtf in l['spans']:  # for l in t['lines']:
-                             meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])])
-                     # meta_line.append(["NEW_BLOCK", pf])
-             # block extraction: merge spans within each line, and lines within each block
-             meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
-                 '- ', '') for t in text_areas['blocks'] if 'lines' in t])
-             meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']])
-                                        for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t])
-             if index == 0:
-                 page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
-                     '- ', '') for t in text_areas['blocks'] if 'lines' in t]
-
-         ############################## <Step 2: determine the main body font size> ##################################
-         fsize_statiscs = {}
-         for span in meta_span:
-             if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0
-             fsize_statiscs[span[1]] += span[2]
-         main_fsize = max(fsize_statiscs, key=fsize_statiscs.get)
-         if REMOVE_FOOT_NOTE:
-             give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
-
-         ############################## <Step 3: split and re-assemble> ##################################
-         mega_sec = []
-         sec = []
-         for index, line in enumerate(meta_line):
-             if index == 0:
-                 sec.append(line[fc])
-                 continue
-             if REMOVE_FOOT_NOTE:
-                 if meta_line[index][fs] <= give_up_fize_threshold:
-                     continue
-             if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]):
-                 # try to detect paragraphs
-                 if meta_line[index][fc].endswith('.') and\
-                         (meta_line[index-1][fc] != 'NEW_BLOCK') and \
-                         (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7:
-                     sec[-1] += line[fc]
-                     sec[-1] += "\n\n"
-                 else:
-                     sec[-1] += " "
-                     sec[-1] += line[fc]
-             else:
-                 if (index+1 < len(meta_line)) and \
-                         meta_line[index][fs] > main_fsize:
-                     # single line + large font
-                     mega_sec.append(copy.deepcopy(sec))
-                     sec = []
-                     sec.append("# " + line[fc])
-                 else:
-                     # try to detect sections
-                     if meta_line[index-1][fs] > meta_line[index][fs]:
-                         sec.append("\n" + line[fc])
-                     else:
-                         sec.append(line[fc])
-         mega_sec.append(copy.deepcopy(sec))
-
-         finals = []
-         for ms in mega_sec:
-             final = " ".join(ms)
-             final = final.replace('- ', ' ')
-             finals.append(final)
-         meta_txt = finals
-
-         ############################## <Step 4: miscellaneous post-processing> ##################################
-         def 把字符太少的块清除为回车(meta_txt):  # replace blocks with too few characters with a newline
-             for index, block_txt in enumerate(meta_txt):
-                 if len(block_txt) < 100:
-                     meta_txt[index] = '\n'
-             return meta_txt
-         meta_txt = 把字符太少的块清除为回车(meta_txt)
-
-         def 清理多余的空行(meta_txt):  # remove redundant blank lines
-             for index in reversed(range(1, len(meta_txt))):
-                 if meta_txt[index] == '\n' and meta_txt[index-1] == '\n':
-                     meta_txt.pop(index)
-             return meta_txt
-         meta_txt = 清理多余的空行(meta_txt)
-
-         def 合并小写开头的段落块(meta_txt):  # merge paragraph blocks that start with a lowercase word
-             def starts_with_lowercase_word(s):
-                 pattern = r"^[a-z]+"
-                 match = re.match(pattern, s)
-                 if match:
-                     return True
-                 else:
-                     return False
-             for _ in range(100):
-                 for index, block_txt in enumerate(meta_txt):
-                     if starts_with_lowercase_word(block_txt):
-                         if meta_txt[index-1] != '\n':
-                             meta_txt[index-1] += ' '
-                         else:
-                             meta_txt[index-1] = ''
-                         meta_txt[index-1] += meta_txt[index]
-                         meta_txt[index] = '\n'
-             return meta_txt
-         meta_txt = 合并小写开头的段落块(meta_txt)
-         meta_txt = 清理多余的空行(meta_txt)
-
-         meta_txt = '\n'.join(meta_txt)
-         # remove duplicated newlines
-         for _ in range(5):
-             meta_txt = meta_txt.replace('\n\n', '\n')
-
-         # newline -> double newline
-         meta_txt = meta_txt.replace('\n', '\n\n')
-
-         ############################## <Step 5: preview the split result> ##################################
-         # for f in finals:
-         #     print亮黄(f)
-         #     print亮绿('***************************')
-
-     return meta_txt, page_one_meta
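A toy driver for breakdown_txt_to_satisfy_token_limit, assuming the module above is importable; character count stands in for a real tokenizer (the project itself passes a tiktoken-based counter):

# Toy driver: len() stands in for a real token counter.
def get_token_fn(txt):
    return len(txt)  # crude: one "token" per character

text = "aaaa\n\nbbbb\n\ncccc"
chunks = breakdown_txt_to_satisfy_token_limit(text, get_token_fn, limit=10)
print(chunks)  # ['aaaa', '\nbbbb', '\ncccc'] -- each chunk fits the limit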
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py DELETED
@@ -1,112 +0,0 @@
- import argparse
- import math
- import os
-
- import torch
- from neural_compressor.utils.pytorch import load
- from PIL import Image
- from transformers import CLIPTextModel, CLIPTokenizer
-
- from diffusers import AutoencoderKL, StableDiffusionPipeline, UNet2DConditionModel
-
-
- def parse_args():
-     parser = argparse.ArgumentParser()
-     parser.add_argument(
-         "-m",
-         "--pretrained_model_name_or_path",
-         type=str,
-         default=None,
-         required=True,
-         help="Path to pretrained model or model identifier from huggingface.co/models.",
-     )
-     parser.add_argument(
-         "-c",
-         "--caption",
-         type=str,
-         default="robotic cat with wings",
-         help="Text used to generate images.",
-     )
-     parser.add_argument(
-         "-n",
-         "--images_num",
-         type=int,
-         default=4,
-         help="How many images to generate.",
-     )
-     parser.add_argument(
-         "-s",
-         "--seed",
-         type=int,
-         default=42,
-         help="Seed for random process.",
-     )
-     parser.add_argument(
-         "-ci",
-         "--cuda_id",
-         type=int,
-         default=0,
-         help="cuda_id.",
-     )
-     args = parser.parse_args()
-     return args
-
-
- def image_grid(imgs, rows, cols):
-     if not len(imgs) == rows * cols:
-         raise ValueError("The specified number of rows and columns are not correct.")
-
-     w, h = imgs[0].size
-     grid = Image.new("RGB", size=(cols * w, rows * h))
-     grid_w, grid_h = grid.size
-
-     for i, img in enumerate(imgs):
-         grid.paste(img, box=(i % cols * w, i // cols * h))
-     return grid
-
-
- def generate_images(
-     pipeline,
-     prompt="robotic cat with wings",
-     guidance_scale=7.5,
-     num_inference_steps=50,
-     num_images_per_prompt=1,
-     seed=42,
- ):
-     generator = torch.Generator(pipeline.device).manual_seed(seed)
-     images = pipeline(
-         prompt,
-         guidance_scale=guidance_scale,
-         num_inference_steps=num_inference_steps,
-         generator=generator,
-         num_images_per_prompt=num_images_per_prompt,
-     ).images
-     _rows = int(math.sqrt(num_images_per_prompt))
-     grid = image_grid(images, rows=_rows, cols=num_images_per_prompt // _rows)
-     return grid, images
-
-
- args = parse_args()
- # Load models and create wrapper for stable diffusion
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
-
- pipeline = StableDiffusionPipeline.from_pretrained(
-     args.pretrained_model_name_or_path, text_encoder=text_encoder, vae=vae, unet=unet, tokenizer=tokenizer
- )
- pipeline.safety_checker = lambda images, clip_input: (images, False)
- if os.path.exists(os.path.join(args.pretrained_model_name_or_path, "best_model.pt")):
-     unet = load(args.pretrained_model_name_or_path, model=unet)
-     unet.eval()
-     setattr(pipeline, "unet", unet)
- else:
-     unet = unet.to(torch.device("cuda", args.cuda_id))
- pipeline = pipeline.to(unet.device)
- grid, images = generate_images(pipeline, prompt=args.caption, num_images_per_prompt=args.images_num, seed=args.seed)
- grid.save(os.path.join(args.pretrained_model_name_or_path, "{}.png".format("_".join(args.caption.split()))))
- dirname = os.path.join(args.pretrained_model_name_or_path, "_".join(args.caption.split()))
- os.makedirs(dirname, exist_ok=True)
- for idx, image in enumerate(images):
-     image.save(os.path.join(dirname, "{}.png".format(idx + 1)))
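The image_grid helper above is self-contained and easy to sanity-check with Pillow alone:

# Quick sanity check of image_grid (needs only Pillow).
from PIL import Image

imgs = [Image.new("RGB", (64, 64), color) for color in ("red", "green", "blue", "white")]
grid = image_grid(imgs, rows=2, cols=2)
print(grid.size)  # (128, 128): a 2x2 tiling of 64x64 images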
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py DELETED
@@ -1,35 +0,0 @@
- # model settings
- norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True)
- model = dict(
-     type='EncoderDecoder',
-     backbone=dict(
-         type='CGNet',
-         norm_cfg=norm_cfg,
-         in_channels=3,
-         num_channels=(32, 64, 128),
-         num_blocks=(3, 21),
-         dilations=(2, 4),
-         reductions=(8, 16)),
-     decode_head=dict(
-         type='FCNHead',
-         in_channels=256,
-         in_index=2,
-         channels=256,
-         num_convs=0,
-         concat_input=False,
-         dropout_ratio=0,
-         num_classes=19,
-         norm_cfg=norm_cfg,
-         loss_decode=dict(
-             type='CrossEntropyLoss',
-             use_sigmoid=False,
-             loss_weight=1.0,
-             class_weight=[
-                 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352,
-                 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905,
-                 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587,
-                 10.396974, 10.055647
-             ])),
-     # model training and testing settings
-     train_cfg=dict(sampler=None),
-     test_cfg=dict(mode='whole'))
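A hedged sketch of how such an MMSegmentation base config is typically consumed; this assumes mmcv (pre-2.0) and mmseg are installed, and the file path is illustrative:

# Hypothetical loader for the config above; API names follow mmcv < 2.0.
from mmcv import Config
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/_base_/models/cgnet.py')
model = build_segmentor(cfg.model)  # instantiates CGNet backbone + FCNHead
print(type(model).__name__)         # expected: EncoderDecoder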
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './deeplabv3_r50-d8_512x512_160k_ade20k.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py DELETED
@@ -1,6 +0,0 @@
- _base_ = [
-     '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/ade20k.py',
-     '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py DELETED
@@ -1,10 +0,0 @@
- _base_ = './fcn_hr18_512x512_40k_voc12aug.py'
- model = dict(
-     pretrained='open-mmlab://msra/hrnetv2_w48',
-     backbone=dict(
-         extra=dict(
-             stage2=dict(num_channels=(48, 96)),
-             stage3=dict(num_channels=(48, 96, 192)),
-             stage4=dict(num_channels=(48, 96, 192, 384)))),
-     decode_head=dict(
-         in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
spaces/Artrajz/vits-simple-api/vits/commons.py DELETED
@@ -1,96 +0,0 @@
- import torch
- from torch.nn import functional as F
- import torch.jit
-
-
- def script_method(fn, _rcb=None):
-     return fn
-
-
- def script(obj, optimize=True, _frames_up=0, _rcb=None):
-     return obj
-
-
- torch.jit.script_method = script_method
- torch.jit.script = script
-
-
- def init_weights(m, mean=0.0, std=0.01):
-     classname = m.__class__.__name__
-     if classname.find("Conv") != -1:
-         m.weight.data.normal_(mean, std)
-
-
- def get_padding(kernel_size, dilation=1):
-     return int((kernel_size * dilation - dilation) / 2)
-
-
- def intersperse(lst, item):
-     result = [item] * (len(lst) * 2 + 1)
-     result[1::2] = lst
-     return result
-
-
- def slice_segments(x, ids_str, segment_size=4):
-     ret = torch.zeros_like(x[:, :, :segment_size])
-     for i in range(x.size(0)):
-         idx_str = ids_str[i]
-         idx_end = idx_str + segment_size
-         ret[i] = x[i, :, idx_str:idx_end]
-     return ret
-
-
- def rand_slice_segments(x, x_lengths=None, segment_size=4):
-     b, d, t = x.size()
-     if x_lengths is None:
-         x_lengths = t
-     ids_str_max = x_lengths - segment_size + 1
-     ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-     ret = slice_segments(x, ids_str, segment_size)
-     return ret, ids_str
-
-
- def subsequent_mask(length):
-     mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-     return mask
-
-
- @torch.jit.script
- def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-     n_channels_int = n_channels[0]
-     in_act = input_a + input_b
-     t_act = torch.tanh(in_act[:, :n_channels_int, :])
-     s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-     acts = t_act * s_act
-     return acts
-
-
- def convert_pad_shape(pad_shape):
-     l = pad_shape[::-1]
-     pad_shape = [item for sublist in l for item in sublist]
-     return pad_shape
-
-
- def sequence_mask(length, max_length=None):
-     if max_length is None:
-         max_length = length.max()
-     x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-     return x.unsqueeze(0) < length.unsqueeze(1)
-
-
- def generate_path(duration, mask):
-     """
-     duration: [b, 1, t_x]
-     mask: [b, 1, t_y, t_x]
-     """
-     device = duration.device
-
-     b, _, t_y, t_x = mask.shape
-     cum_duration = torch.cumsum(duration, -1)
-
-     cum_duration_flat = cum_duration.view(b * t_x)
-     path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-     path = path.view(b, t_x, t_y)
-     path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-     path = path.unsqueeze(1).transpose(2, 3) * mask
-     return path
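A toy check of the masking helpers above, assuming the module is importable; shapes follow the generate_path docstring (duration: [b, 1, t_x], mask: [b, 1, t_y, t_x]):

# Toy check of sequence_mask and generate_path.
import torch

lengths = torch.tensor([2, 3])
print(sequence_mask(lengths, max_length=4))
# tensor([[ True,  True, False, False],
#         [ True,  True,  True, False]])

duration = torch.tensor([[[2., 2.]]])   # each of the 2 tokens lasts 2 frames
mask = torch.ones(1, 1, 4, 2)           # t_y = 4 frames, t_x = 2 tokens
path = generate_path(duration, mask)
print(path.squeeze())  # monotonic alignment: frames 0-1 -> token 0, frames 2-3 -> token 1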
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py DELETED
@@ -1,42 +0,0 @@
- from typing import Any, cast, Set, TYPE_CHECKING
- from inspect import isclass
-
- if TYPE_CHECKING:
-     from pip._vendor.rich.console import RenderableType
-
- _GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf"""
-
-
- def is_renderable(check_object: Any) -> bool:
-     """Check if an object may be rendered by Rich."""
-     return (
-         isinstance(check_object, str)
-         or hasattr(check_object, "__rich__")
-         or hasattr(check_object, "__rich_console__")
-     )
-
-
- def rich_cast(renderable: object) -> "RenderableType":
-     """Cast an object to a renderable by calling __rich__ if present.
-
-     Args:
-         renderable (object): A potentially renderable object
-
-     Returns:
-         object: The result of recursively calling __rich__.
-     """
-     from pip._vendor.rich.console import RenderableType
-
-     rich_visited_set: Set[type] = set()  # Prevent potential infinite loop
-     while hasattr(renderable, "__rich__") and not isclass(renderable):
-         # Detect objects which claim to have all the attributes
-         if hasattr(renderable, _GIBBERISH):
-             return repr(renderable)
-         cast_method = getattr(renderable, "__rich__")
-         renderable = cast_method()
-         renderable_type = type(renderable)
-         if renderable_type in rich_visited_set:
-             break
-         rich_visited_set.add(renderable_type)
-
-     return cast(RenderableType, renderable)
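A small demonstration of the __rich__ casting protocol implemented above, shown with the standalone rich package (an assumption: it ships the same protocol module that pip vendors here):

# Demonstrates is_renderable/rich_cast with a class that defines __rich__.
from rich.protocol import is_renderable, rich_cast

class Fancy:
    def __rich__(self) -> str:
        return "[bold]fancy[/bold]"

print(is_renderable(Fancy()))   # True  (the object has __rich__)
print(rich_cast(Fancy()))       # '[bold]fancy[/bold]'
print(is_renderable(object()))  # False (no __rich__ or __rich_console__)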
spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh DELETED
@@ -1,2 +0,0 @@
- conda run --live-stream -n WavJourney python -u services.py 2>&1 | tee services_logs/service.out &
- conda run --live-stream -n WavJourney python -u ui_client.py 2>&1 | tee services_logs/wavejourney.out
spaces/Awesimo/jojogan/e4e/editings/latent_editor.py DELETED
@@ -1,45 +0,0 @@
- import torch
- import sys
- sys.path.append(".")
- sys.path.append("..")
- from editings import ganspace, sefa
- from utils.common import tensor2im
-
-
- class LatentEditor(object):
-     def __init__(self, stylegan_generator, is_cars=False):
-         self.generator = stylegan_generator
-         self.is_cars = is_cars  # Since the cars StyleGAN output is 384x512, there is a need to crop the 512x512 output.
-
-     def apply_ganspace(self, latent, ganspace_pca, edit_directions):
-         edit_latents = ganspace.edit(latent, ganspace_pca, edit_directions)
-         return self._latents_to_image(edit_latents)
-
-     def apply_interfacegan(self, latent, direction, factor=1, factor_range=None):
-         edit_latents = []
-         if factor_range is not None:  # Apply a range of editing factors, for example (-5, 5)
-             for f in range(*factor_range):
-                 edit_latent = latent + f * direction
-                 edit_latents.append(edit_latent)
-             edit_latents = torch.cat(edit_latents)
-         else:
-             edit_latents = latent + factor * direction
-         return self._latents_to_image(edit_latents)
-
-     def apply_sefa(self, latent, indices=[2, 3, 4, 5], **kwargs):
-         edit_latents = sefa.edit(self.generator, latent, indices, **kwargs)
-         return self._latents_to_image(edit_latents)
-
-     # Currently, in order to apply StyleFlow editings, one should run inference,
-     # save the latent codes and load them from the official StyleFlow repository.
-     # def apply_styleflow(self):
-     #     pass
-
-     def _latents_to_image(self, latents):
-         with torch.no_grad():
-             images, _ = self.generator([latents], randomize_noise=False, input_is_latent=True)
-             if self.is_cars:
-                 images = images[:, :, 64:448, :]  # 512x512 -> 384x512
-         horizontal_concat_image = torch.cat(list(images), 2)
-         final_image = tensor2im(horizontal_concat_image)
-         return final_image
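The InterfaceGAN-style edit in apply_interfacegan is simply latent + factor * direction, swept over a range. A toy illustration with random tensors standing in for real latents and semantic directions:

# Toy illustration of the InterfaceGAN edit math (no generator required).
import torch

latent = torch.randn(1, 18, 512)     # a W+ latent code (shape illustrative)
direction = torch.randn(1, 18, 512)  # a learned semantic direction
edited = torch.cat([latent + f * direction for f in range(-2, 3)])
print(edited.shape)  # torch.Size([5, 18, 512]): one edited latent per factor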
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py DELETED
@@ -1,192 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- import itertools
3
- from typing import Any, Dict, List, Tuple, Union
4
- import torch
5
-
6
-
7
- class Instances:
8
- """
9
- This class represents a list of instances in an image.
10
- It stores the attributes of instances (e.g., boxes, masks, labels, scores) as "fields".
11
- All fields must have the same ``__len__`` which is the number of instances.
12
-
13
- All other (non-field) attributes of this class are considered private:
14
- they must start with '_' and are not modifiable by a user.
15
-
16
- Some basic usage:
17
-
18
- 1. Set/get/check a field:
19
-
20
- .. code-block:: python
21
-
22
- instances.gt_boxes = Boxes(...)
23
- print(instances.pred_masks) # a tensor of shape (N, H, W)
24
- print('gt_masks' in instances)
25
-
26
- 2. ``len(instances)`` returns the number of instances
27
- 3. Indexing: ``instances[indices]`` will apply the indexing on all the fields
28
- and returns a new :class:`Instances`.
29
- Typically, ``indices`` is a integer vector of indices,
30
- or a binary mask of length ``num_instances``
31
-
32
- .. code-block:: python
33
-
34
- category_3_detections = instances[instances.pred_classes == 3]
35
- confident_detections = instances[instances.scores > 0.9]
36
- """
37
-
38
- def __init__(self, image_size: Tuple[int, int], **kwargs: Any):
39
- """
40
- Args:
41
- image_size (height, width): the spatial size of the image.
42
- kwargs: fields to add to this `Instances`.
43
- """
44
- self._image_size = image_size
45
- self._fields: Dict[str, Any] = {}
46
- for k, v in kwargs.items():
47
- self.set(k, v)
48
-
49
- @property
50
- def image_size(self) -> Tuple[int, int]:
51
- """
52
- Returns:
53
- tuple: height, width
54
- """
55
- return self._image_size
56
-
57
- def __setattr__(self, name: str, val: Any) -> None:
-     if name.startswith("_"):
-         super().__setattr__(name, val)
-     else:
-         self.set(name, val)
-
- def __getattr__(self, name: str) -> Any:
-     if name == "_fields" or name not in self._fields:
-         raise AttributeError("Cannot find field '{}' in the given Instances!".format(name))
-     return self._fields[name]
-
- def set(self, name: str, value: Any) -> None:
-     """
-     Set the field named `name` to `value`.
-     The length of `value` must be the number of instances,
-     and must agree with other existing fields in this object.
-     """
-     data_len = len(value)
-     if len(self._fields):
-         assert (
-             len(self) == data_len
-         ), "Adding a field of length {} to an Instances of length {}".format(data_len, len(self))
-     self._fields[name] = value
-
- def has(self, name: str) -> bool:
-     """
-     Returns:
-         bool: whether the field called `name` exists.
-     """
-     return name in self._fields
-
- def remove(self, name: str) -> None:
-     """
-     Remove the field called `name`.
-     """
-     del self._fields[name]
-
- def get(self, name: str) -> Any:
-     """
-     Returns the field called `name`.
-     """
-     return self._fields[name]
-
- def get_fields(self) -> Dict[str, Any]:
-     """
-     Returns:
-         dict: a dict which maps names (str) to data of the fields
-
-     Modifying the returned dict will modify this instance.
-     """
-     return self._fields
-
- # Tensor-like methods
- def to(self, *args: Any, **kwargs: Any) -> "Instances":
-     """
-     Returns:
-         Instances: all fields are called with a `to(device)`, if the field has this method.
-     """
-     ret = Instances(self._image_size)
-     for k, v in self._fields.items():
-         if hasattr(v, "to"):
-             v = v.to(*args, **kwargs)
-         ret.set(k, v)
-     return ret
-
- def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Instances":
-     """
-     Args:
-         item: an index-like object that will be used to index all the fields.
-
-     Returns:
-         If `item` is a string, return the data in the corresponding field.
-         Otherwise, returns an `Instances` where all fields are indexed by `item`.
-     """
-     if type(item) == int:
-         if item >= len(self) or item < -len(self):
-             raise IndexError("Instances index out of range!")
-         else:
-             item = slice(item, None, len(self))
-
-     ret = Instances(self._image_size)
-     for k, v in self._fields.items():
-         ret.set(k, v[item])
-     return ret
-
- def __len__(self) -> int:
-     for v in self._fields.values():
-         # use __len__ because len() has to be int and is not friendly to tracing
-         return v.__len__()
-     raise NotImplementedError("Empty Instances does not support __len__!")
-
- def __iter__(self):
-     raise NotImplementedError("`Instances` object is not iterable!")
-
- @staticmethod
- def cat(instance_lists: List["Instances"]) -> "Instances":
-     """
-     Args:
-         instance_lists (list[Instances])
-
-     Returns:
-         Instances
-     """
-     assert all(isinstance(i, Instances) for i in instance_lists)
-     assert len(instance_lists) > 0
-     if len(instance_lists) == 1:
-         return instance_lists[0]
-
-     image_size = instance_lists[0].image_size
-     if not isinstance(image_size, torch.Tensor):  # could be a tensor in tracing
-         for i in instance_lists[1:]:
-             assert i.image_size == image_size
-     ret = Instances(image_size)
-     for k in instance_lists[0]._fields.keys():
-         values = [i.get(k) for i in instance_lists]
-         v0 = values[0]
-         if isinstance(v0, torch.Tensor):
-             values = torch.cat(values, dim=0)
-         elif isinstance(v0, list):
-             values = list(itertools.chain(*values))
-         elif hasattr(type(v0), "cat"):
-             values = type(v0).cat(values)
-         else:
-             raise ValueError("Unsupported type {} for concatenation".format(type(v0)))
-         ret.set(k, values)
-     return ret
-
- def __str__(self) -> str:
-     s = self.__class__.__name__ + "("
-     s += "num_instances={}, ".format(len(self))
-     s += "image_height={}, ".format(self._image_size[0])
-     s += "image_width={}, ".format(self._image_size[1])
-     s += "fields=[{}])".format(", ".join((f"{k}: {v}" for k, v in self._fields.items())))
-     return s
-
- __repr__ = __str__
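For reference, a minimal usage sketch of the `Instances` API deleted above. It is not part of the diff and relies only on the methods shown (construction with an image size, attribute-style field assignment, boolean indexing, and `cat`):

```python
import torch
from detectron2.structures import Instances

inst = Instances((480, 640))             # (height, width)
inst.scores = torch.tensor([0.9, 0.4])   # routed through __setattr__ -> set()
inst.labels = torch.tensor([1, 3])       # length must match existing fields (2)

assert inst.has("scores") and len(inst) == 2
top = inst[inst.scores > 0.5]            # boolean indexing returns a new Instances
merged = Instances.cat([inst, top])      # tensor fields concatenated with torch.cat
print(merged)                            # __str__ reports num_instances and fields
```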
 
spaces/Benson/text-generation/Examples/3d Paint Download.md DELETED
@@ -1,151 +0,0 @@
- <br />
- <h1>How to Download and Use 3D Painting Software</h1>
- <p>If you are looking for a way to unleash your creativity and make stunning artwork in three dimensions, you may want to try some of the best 3D painting software available. In this article, we will show you what 3D painting software is, how to download it, and how to use it.</p>
- <h2>3d paint download</h2><br /><p><b><b>Download</b> &#8230;&#8230;&#8230; <a href="https://bltlly.com/2v6M1D">https://bltlly.com/2v6M1D</a></b></p><br /><br />
- <h2>What is 3D painting software?</h2>
- <p>3D painting software is a type of modeling application that lets you create, edit, and render 3D objects and scenes. Unlike traditional 2D painting software, which only works on flat surfaces, 3D painting software lets you manipulate shapes in a virtual space and apply realistic textures and colors to them.</p>
- <h3>The difference between 2D and 3D painting</h3>
- <p>The main difference between 2D and 3D painting is the dimensionality of the objects. In 2D painting, you can only draw lines, curves, and shapes on a plane. In 3D painting, you can create solid objects that have depth, width, and height. You can also rotate, scale, and move them in a 3D environment.</p>
- <h3>The benefits of 3D painting</h3>
- <p>Some of the benefits of using 3D painting software are:</p>
- <ul>
- <li>You can create more realistic, immersive artwork that captures real-world detail and lighting.</li>
- <li>You can experiment with different perspectives and angles to show off your work.</li>
- <li>You can add depth and volume to your drawings and make them pop.</li>
- <li>You can combine different elements and materials to create unique compositions.</li>
- <li>You can export your work in various formats and share it online or print it.</li>
- </ul>
- <h2>How to download 3D painting software</h2>
- <p>There are many options for downloading 3D painting software, depending on your preferences and needs. Here are some of the most popular:</p>
- <h3>Paint 3D from the Microsoft Store</h3>
-
- <ol>
- <li>Type "paint" in the search box on the taskbar and select "Paint" from the list of results.</li>
- <li>Click "Get" in the Store app and wait for the installation to finish.</li>
- <li>Launch Paint 3D from the Start menu or the taskbar.</li>
- </ol>
- <h4>Source:</h4>
- <p>[Open Microsoft Paint]( 1 )</p>
- <p></p>
- <h4>Screenshot:</h4>
- <img src=" 5 " alt="Paint 3D screenshot">
- <h4>Advertisement:</h4>
- <p>If you want to learn more about Paint 3D and how to use it effectively, check out this online course, which will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!</p>
- <h4>Table:</h4>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>Free and easy to use</td>
- <td>Limited features and customization</td>
- </tr>
- <tr>
- <td>Built into Windows 10</td>
- <td>Not compatible with earlier versions of Windows</td>
- </tr>
- <tr>
- <td>Offers both 2D and 3D options</td>
- <td>Not very advanced or professional</td>
- </tr>
- </table>
- <h3>Adobe Substance 3D Painter</h3>
- <p>If you are looking for more advanced, professional 3D painting software, you may want to try Adobe Substance 3D Painter. This is a powerful application that lets you create realistic, detailed textures and materials for your 3D models. You can use a variety of brushes, tools, and presets, as well as import your own images or models from other sources. You can also export your work in various formats and integrate it with other Adobe products or third-party software. To download Adobe Substance 3D Painter, you need an Adobe Creative Cloud subscription. You can get a free 30-day trial or choose a plan that suits your needs. To download Adobe Substance 3D Painter from the Adobe website, follow these steps:</p>
- <ol>
-
- <li>Sign in with your Adobe ID, or create one if you don't have one.</li>
- <li>Follow the on-screen instructions to download and install the software.</li>
- <li>Launch Adobe Substance 3D Painter from the Creative Cloud app or the Start menu.</li>
- </ol>
- <h4>Source:</h4>
- <p>[Adobe Substance 3D Painter]</p>
- <h4>Screenshot:</h4>
- <img src=" 6 " alt="Adobe Substance 3D Painter screenshot">
- <h4>Advertisement:</h4>
- <p>If you want to master Adobe Substance 3D Painter and create amazing textures and materials for your 3D models, check out this online course, which will teach you everything you need to know about this software. You will learn how to use the interface, brushes, tools, presets, and layers, how to import and export your work, how to integrate it with other software, and much more. Click here to enroll now and get a special discount!</p>
- <h4>Table:</h4>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>Advanced and professional</td>
- <td>Expensive and complex</td>
- </tr>
- <tr>
- <td>Realistic and detailed</td>
- <td>Requires high-end hardware and software</td>
- </tr>
- <tr>
- <td>Integrates with Adobe products and other software</td>
- <td>Requires an Adobe Creative Cloud subscription</td>
- </tr>
- </table>
- <h3>Microsoft Paint 3D from FileHippo</h3>
- <p>If you want to download Microsoft Paint 3D without going through the Microsoft Store, you can use FileHippo, a website that offers free downloads of various programs. Microsoft Paint 3D from FileHippo is the same as the Microsoft Store version, but it requires no registration or installation. You can simply download the executable file and run it on your computer. To download Microsoft Paint 3D from FileHippo, follow these steps:</p>
- <ol>
- <li>Go to [Microsoft Paint 3D] on FileHippo and click "Download Latest Version".</li>
- <li>Choose a folder where you want to save the file and wait for the download to finish.</li>
-
- </ol>
- <h4>Source:</h4>
- <p>[Microsoft Paint 3D]</p>
- <h4>Screenshot:</h4>
- <img src=" 7 " alt="Microsoft Paint 3D screenshot">
- <h4>Advertisement:</h4>
- <p>If you want to learn more about Microsoft Paint 3D and how to use it effectively, check out this online course, which will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!</p>
- <h4>Table:</h4>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>Free and easy to use</td>
- <td>Limited features and customization</td>
- </tr>
- <tr>
- <td>No installation or registration required</td>
- <td>Not compatible with earlier versions of Windows</td>
- </tr>
- <tr>
- <td>Offers both 2D and 3D options</td>
- <td>Not very advanced or professional</td>
- </tr>
- </table>
- <h2>How to use 3D painting software</h2>
- <p>Now that you have downloaded your preferred 3D painting software, you may be wondering how to use it. While each program has its own interface and features, there are some common steps you can follow to create your own 3D artwork. Here are some of them:</p>
- <h3>Create a new project</h3>
- <p>The first step is to create a new project or file where you will work on your 3D painting. Depending on the software, you may need to choose a template, a canvas size, a resolution, or a background color. You can also name your project and save it to a folder of your choice.</p>
- <h3>Choose a 3D object</h3>
- <p>The next step is to choose a 3D object you want to paint on. You can use one of the predefined models that come with the software, import your own model from another source, or create your own model from scratch. You can also use basic shapes such as cubes, spheres, cylinders, or cones to build your own model.</p>
- <h3>Apply textures and colors</h3>
- <p>The third step is to apply textures and colors to your 3D object. You can use the brushes, tools, and presets the software provides, or import your own images or textures from other sources. You can also adjust the size, opacity, hardness, and angle of the brushes, as well as the blending modes, layers, and masks of the textures. You can also use the color picker, color wheel, or color palette to choose the colors you want to use.</p>
- <h3>Add stickers and effects</h3>
- <p>The fourth step is to add stickers and effects to your 3D object. Stickers are images you can place on top of your object, such as logos, patterns, symbols, or text. Effects are filters you can apply to your object, such as shadows, lights, reflections, or distortions. You can also use the tools and presets the software provides, or import your own stickers and effects from other sources.</p>
- <h3>Export and share your work</h3>
- <p>The final step is to export and share your work. You can save your project as a file in various formats, such as PNG, JPG, BMP, GIF, TGA, or PSD. You can also export your project as a 3D model in formats such as OBJ, STL, FBX, or GLB. You can then share your work online or print it.</p>
- <h2>Conclusion</h2>
- <p>In conclusion, 3D painting software is a great way to create stunning artwork in three dimensions. You can download different kinds of 3D painting software depending on your preferences and needs, and you can follow some common steps to create your own 3D paintings. We hope this article has helped you learn more about 3D painting software and how to download and use it.</p>
- <h2>FAQ</h2>
- <h4>What are some examples of 3D painting software?</h4>
- <p>Some examples of 3D painting software are Paint 3D from the Microsoft Store, Adobe Substance 3D Painter, Microsoft Paint 3D from FileHippo, Blender, ZBrush, SketchUp, Maya, and Cinema 4D.</p>
- <h4>What are some benefits of using 3D painting software?</h4>
-
- <h4>What are some challenges of using 3D painting software?</h4>
- <p>Some challenges of using 3D painting software are that you may need technical skills and knowledge to use it effectively; you may need high-end hardware and software to run it smoothly; you may need an internet connection or a subscription to download or access it; and you may run into compatibility issues with other software or devices.</p>
- <h4>How can I learn more about using 3D painting software?</h4>
- <p>You can learn more about using 3D painting software by reading online tutorials and guides; watching online videos and demos; enrolling in online courses and programs; or practicing with your own projects and experiments.</p>
- <h4>What are some tips and tricks for using 3D painting software?</h4>
- <p>Some tips and tricks for using 3D painting software are:</p>
- <ul>
- <li>Use a graphics tablet or a stylus to draw with more precision and comfort.</li>
- <li>Use keyboard shortcuts and hotkeys to speed up your workflow and access different functions.</li>
- <li>Use layers and masks to organize and edit your work more easily and efficiently.</li>
- <li>Use reference images and models to inspire and guide your work.</li>
- <li>Use the undo and redo buttons to fix your mistakes and try different options.</li>
- </ul><br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md DELETED
@@ -1,78 +0,0 @@
-
- <h1>Modern Ops Mod APK: A Guide to Unlocking Everything</h1>
- <p>If you are a fan of action-packed shooter games, you may have heard of Modern Ops. It is a popular online FPS game that lets you compete with other players across various modes and maps. You can choose from a wide range of weapons, customize your character, and join a clan to team up with your friends. But what if you want to unlock everything in the game without spending money or time? That is where Modern Ops Mod APK comes in handy. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, installation process, gameplay tips, and more.</p>
- <h2>What is Modern Ops?</h2>
- <p>Modern Ops is a multiplayer first-person shooter game developed by Edkon Games GmbH. It was released in 2019 for Android and iOS devices. The game has more than 50 million downloads on the Google Play Store and has received positive reviews from users and critics alike. The game is inspired by other popular FPS games such as Call of Duty and Counter-Strike. You can play as a terrorist or a counter-terrorist and take part in exciting battles with other players from around the world. You can also create your own team and chat with your teammates using voice or text messages.</p>
- <h2>apk moderno mod ops</h2><br /><p><b><b>DOWNLOAD</b> &rarr;&rarr;&rarr; <a href="https://bltlly.com/2v6JLn">https://bltlly.com/2v6JLn</a></b></p><br /><br />
- <h3>Features of Modern Ops</h3>
- <p>Some of the features that make Modern Ops an exciting and addictive game are:</p>
- <ul>
- <li>More than 30 modern weapons, including pistols, rifles, shotguns, sniper rifles, machine guns, and grenades.</li>
- <li>Different skins and attachments to make your weapons look cool and unique.</li>
- <li>Various game modes, such as team deathmatch, free-for-all, defuse the bomb, capture the flag, and more.</li>
- <li>Different maps with realistic graphics and sound effects.</li>
- <li>A ranking system that rewards you with coins and gems for your performance.</li>
- <li>A clan system that lets you join or create a clan and take part in clan wars.</li>
-
- </ul>
- <h2>Why use Modern Ops Mod APK?</h2>
- <p>Modern Ops is a free-to-play game, but it also offers in-app purchases that can enhance your gaming experience. For example, you can buy premium weapons, skins, crates, boosters, and more with real money. However, not everyone can afford to spend money on these items, or they may find them too expensive or unfair. That is why some people prefer to use Modern Ops Mod APK instead. This is a modified version of the game that gives you access to unlimited resources and features. Some of the benefits of using Modern Ops Mod APK are:</p>
- <ul>
- <li>You can unlock everything in the game without spending money or time.</li>
- <li>You can get unlimited coins and gems to buy whatever you want in the game.</li>
- <li>You can get unlimited ammo and grenades, so you never run out of firepower.</li>
- <li>You can get unlimited health and armor to survive longer in battles.</li>
- <li>You can get unlimited energy to play as much as you want without waiting for it to recharge.</li>
- <li>You get access to all weapons, skins, attachments, crates, boosters, and more in the game.</li>
- <li>You can access all game modes and maps in the game.</li>
- <li>You get access to all the premium features normally available only to VIP users.</li>
- </ul>
- <h2>How to download and install Modern Ops Mod APK?</h2>
- <p>If you are interested in downloading and installing Modern Ops Mod APK on your Android device, you need to follow a few simple steps. Before that, you should make sure your device meets a few requirements.</p>
- <h3>Requirements</h3>
-
- <h3>Steps</h3>
- <p>Once you have met the requirements, you can follow these steps to download and install Modern Ops Mod APK on your device: - Step 1: Download the mod APK file from a trusted source. You can use this link to download the latest version of Modern Ops Mod APK: [Download Modern Ops Mod APK]. - Step 2: After downloading the mod APK file, locate it on your device using a file manager app. Tap the file and select Install to start the installation process. - Step 3: Wait for the installation to finish. You may see a warning message saying that the app is not safe or could harm your device. Ignore this message and continue with the installation. - Step 4: Once the installation is done, launch the game from the app drawer or the home screen. You will see a pop-up message asking you to download some additional data files. Tap OK and wait for the download to finish. - Step 5: Once the download is complete, you can enjoy playing Modern Ops Mod APK with unlimited resources and features.</p>
- <p></p>
- <h2>How to play Modern Ops Mod APK?</h2>
- <p>Playing Modern Ops Mod APK is similar to playing the original game, but with some added advantages. You can choose from different game modes, maps, weapons, and more. Here are some tips on how to play Modern Ops Mod APK effectively:</p>
- <h3>Game modes</h3>
-
- <h3>Tips and tricks</h3>
-
- the screen. - Use your clan: Clan is a feature that lets you join or create a clan and play with your friends or other players in Modern Ops Mod APK. You can chat with your clan members, invite them to your squad, take part in clan wars, and earn clan points and rewards. You can also access exclusive clan weapons, skins, and crates. A clan can help you improve your teamwork, coordination, and strategy in the game. <h2>Pros and cons of Modern Ops Mod APK</h2>
- <p>Modern Ops Mod APK is a great way to enjoy the game with unlimited resources and features, but it also has some drawbacks you should be aware of. Here are some of the pros and cons of Modern Ops Mod APK:</p>
- <h3>Pros</h3>
- <ul>
- <li>It is free to download and use.</li>
- <li>It gives you unlimited coins, gems, ammo, health, energy, and more.</li>
- <li>It unlocks everything in the game, including weapons, skins, attachments, crates, boosters, and more.</li>
- <li>It gives you access to all game modes and maps in the game.</li>
- <li>It gives you access to all the premium features normally available only to VIP users.</li>
- <li>It improves your gameplay and makes the game more fun and easier.</li>
- </ul>
- <h3>Cons</h3>
- <ul>
- <li>It is not an official version of the game and may have bugs or errors.</li>
- <li>It may not be compatible with some devices or versions of the game.</li>
- <li>You may need to update it frequently to match the latest version of the game.</li>
- <li>It can be detected by the game developers and result in a ban or suspension of your account.</li>
- <li>It could compromise the security and privacy of your device and data.</li>
- <li>It could ruin the balance and fairness of the game and make it less challenging and rewarding.</li>
- </ul>
- <h2>Conclusion</h2>
-
- <h2>FAQ</h2>
- <p>Here are some of the most frequently asked questions about Modern Ops Mod APK:</p>
- <ol>
- <li><b>Is Modern Ops Mod APK safe to use?</b></li>
- <p>Modern Ops Mod APK is not an official version of the game and may contain malicious code or viruses that can harm your device or data. Therefore, we recommend downloading it from a trusted source and scanning it with an antivirus app before installing it. You should also back up your data and use a secondary account to play the game with this mod APK.</p>
- <li><b>Is Modern Ops Mod APK legal to use?</b></li>
- <p>Modern Ops Mod APK is not legal to use, as it violates the game developers' terms and conditions. It also infringes on their intellectual property rights and revenue streams. Therefore, using this mod APK could result in legal action from the game developers or the authorities. You use this mod APK at your own risk and responsibility.</p>
- <li><b>How do I update Modern Ops Mod APK?</b></li>
- <p>To update Modern Ops Mod APK, you need to download the latest version of the mod APK file from a trusted source and install it on your device. You should also delete the previous version of the mod APK file from your device to avoid conflicts or errors, and check whether the mod APK is compatible with the latest version of the game before updating.</p>
- <li><b>How do I uninstall Modern Ops Mod APK?</b></li>
- <p>To uninstall Modern Ops Mod APK, go to Settings > Apps > Modern Ops > Uninstall and tap OK to confirm. You should also delete the mod APK file from your device storage to free up some space. You can reinstall the original version of the game from the Google Play Store or the App Store if you want to play again.</p>
- <li><b>Can I play Modern Ops Mod APK online with other players?</b></li><br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md DELETED
@@ -1,61 +0,0 @@
- <br />
- <h1>How to Download Microsoft Word 2016</h1>
- <p>Microsoft Word is one of the most popular and widely used word-processing applications in the world. It lets you create, edit, format, and share documents easily and efficiently. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word can help you get your tasks done.</p>
- <p>Microsoft Word 2016 is the latest version of the application, released in September 2015. It is part of the Microsoft Office suite, which also includes Excel, PowerPoint, Outlook, and more. Microsoft Word 2016 offers many improvements and enhancements over previous versions, such as:</p>
- <h2>descarga de microsoft word 2016</h2><br /><p><b><b>Download File</b> &#127379; <a href="https://bltlly.com/2v6Kkb">https://bltlly.com/2v6Kkb</a></b></p><br /><br />
- <ul>
- <li>New themes and templates</li>
- <li>Better collaboration tools</li>
- <li>Smart search and research features</li>
- <li>Improved security and privacy options</li>
- <li>Integration with OneDrive and SharePoint</li>
- </ul>
- <p>If you are interested in downloading Microsoft Word 2016, you have several options to choose from. In this article, we will show you how to download Microsoft Word 2016 from different sources and what benefits you can get from using it.</p>
- <h2>Download Microsoft Word 2016 from the Microsoft website</h2>
- <p>The easiest and most reliable way to download Microsoft Word 2016 is to get it directly from the Microsoft website. You will need a Microsoft account and a Microsoft Office subscription. Here are the steps to follow:</p>
- <ol>
- <li>Go to <a href="( 1 )">www.office.com</a> and sign in with your Microsoft account. If you don't have one, you can create one for free.</li>
- <li>Select Install Office and choose the version you want. You can get Office Home & Student or Office Home & Business as a one-time purchase, or get Office Personal or Office Home & Business as a monthly or yearly subscription.</li>
-
- </ol>
- <h2>Download Microsoft Word 2016 from an offline installer</h2>
- <p>If you have a slow or unreliable internet connection, you may want to download Microsoft Word 2016 from an offline installer. This is a file that contains everything needed to install Microsoft Word 2016 without an internet connection. You will still need a Microsoft account and an Office subscription. Here are the steps to follow:</p>
- <ol>
- <li>Download the offline installer file from <a href="( 2 )">www.office.com</a>. You will need to sign in with your account and select Other options. Then check the "Download an offline installer" box and select the language you want.</li>
- <li>Open the file and select the Microsoft Office folder. You will see a new virtual drive on your PC, such as (D:) or (E:).</li>
- <li>Double-click the setup.exe file and follow the instructions to install Microsoft Word 2016 on your PC. You may need to enter your product key or sign in again with your account.</li>
- </ol>
- <h2>Download Microsoft Word 2016 from a third-party seller</h2>
- <p>Another option for downloading Microsoft Word 2016 is to buy it from a third-party seller. This is a company or an individual that sells Microsoft Word 2016 product keys at a lower price than Microsoft. However, you should be careful and make sure the seller is reputable and trustworthy. You should also verify that the product key is valid and not in use by someone else. Here are the steps to follow:</p>
- <ol>
- <li>Find a reputable third-party seller that offers Microsoft Word 2016 product keys. You can check online reviews, ratings, comments, and customer service to gauge the quality of the seller.</li>
- <li>Buy the product key and verify its validity. You can use a tool like <a href="">www.productkey.net</a> to check whether the product key is genuine and not blocked by Microsoft.</li>
-
- </ol>
- <h2>Benefits of using Microsoft Word 2016</h2>
- <p>By downloading Microsoft Word 2016, you can enjoy many benefits that will boost your productivity and creativity. Here are some of the benefits of using Microsoft Word 2016:</p>
- <ul>
- <li>Improved features and functionality: Microsoft Word 2016 has many new and improved features that make it easier and faster to create and edit documents. For example, you can use the Tell Me feature to find what you need quickly, the Smart Lookup feature to get relevant information from the web, the Ink Editor feature to write and draw with your pen or finger, and the Editor feature to get suggestions for improving your writing.</li>
- <li>Compatibility with other Office apps and devices: Microsoft Word 2016 is compatible with other Office apps, such as Excel, PowerPoint, Outlook, OneNote, and more. You can easily switch between them and share data and content. You can also use Microsoft Word 2016 on different devices, such as PCs, laptops, tablets, and smartphones. You can sync your documents across devices and access them anytime, anywhere.</li>
- <li>Access to online services and cloud storage: Microsoft Word 2016 gives you access to online services and cloud storage that improve your experience and security. For example, you can use OneDrive to store your documents online and access them from any device, SharePoint to collaborate with others on documents in real time, and Skype for Business to communicate with your colleagues and clients.</li>
- </ul>
- <h2>Conclusion</h2>
-
- <p>By using Microsoft Word 2016, you can enjoy many benefits that will boost your productivity and creativity. You can use new and improved features, work with other Office apps and devices, and access online services and cloud storage. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word 2016 can help you get your tasks done.</p>
- <p></p>
- <p>If you want to download Microsoft Word 2016 today, click here (link) and get started!</p>
- <h3>FAQ</h3>
- <h4>Q: How much does Microsoft Word 2016 cost?</h4>
- <p>A: The cost of Microsoft Word 2016 depends on the version you choose and the source you buy it from. If you buy it from the Microsoft website, you can pay a one-time fee of $149.99 for Office Home & Student or $249.99 for Office Home & Business, or pay a monthly or yearly subscription fee of $69.99 for Office Personal or $99.99 for Office Home & Business. If you buy it from a third-party seller, you may find lower prices, but you have to watch out for the quality and validity of the product key.</p>
- <h4>Q: How do I update Microsoft Word 2016?</h4>
- <p>A: To update Microsoft Word 2016, you need an internet connection and an Office subscription. You can update it manually or automatically. To update it manually, go to File > Account > Update Options and select Update Now. To update it automatically, go to File > Account > Update Options and select Enable Updates. You will receive the latest updates and security patches for Microsoft Word 2016 and other Office apps.</p>
- <h4>Q: How do I uninstall Microsoft Word 2016?</h4>
-
- <h4>Q: How do I recover a deleted or unsaved document in Microsoft Word 2016?</h4>
- <p>A: To recover a deleted or unsaved document in Microsoft Word 2016, you can use the AutoRecover or Document Recovery features. The AutoRecover feature saves a copy of your document every few minutes in case of a power outage or a system crash. The Document Recovery feature helps you recover documents that were open but not yet saved when Microsoft Word 2016 closed unexpectedly. To use these features, go to File > Open > Recover Unsaved Documents, or File > Info > Manage Documents, and select the document you want to recover.</p>
- <h4>Q: How do I add a table in Microsoft Word 2016?</h4>
- <p>A: To add a table in Microsoft Word 2016, you can use the Insert tab on the ribbon. Click the Table button and select the number of rows and columns you want. You can also use the Draw Table tool to draw your own table, or use the Quick Tables option to choose from predefined tables. You can also convert text to a table or insert a table from Excel. To format the table, you can use the Table Tools tabs on the ribbon and apply different styles, colors, borders, and effects.</p>
- <h4>Q: How do I share a document in Microsoft Word 2016?</h4>
- <p>A: To share a document in Microsoft Word 2016, you can use the Share button in the upper-right corner of the screen. You will need to save your document to OneDrive or SharePoint first. Then you can invite people to view or edit your document by entering their email addresses or choosing from your contacts. You can also copy a link to your document and paste it into an email or a message, or share your document as an attachment or as a PDF file.</p>
- <br />
- <br />
 
spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts DELETED
@@ -1,34 +0,0 @@
- import { collections } from "$lib/server/database.js";
- import { subMinutes } from "date-fns";
- import { z } from "zod";
-
- export async function PATCH({ locals, request }) {
-   const json = await request.json();
-
-   const settings = z
-     .object({
-       shareConversationsWithModelAuthors: z.boolean().default(true),
-       ethicsModalAcceptedAt: z.optional(z.date({ coerce: true }).min(subMinutes(new Date(), 5))),
-     })
-     .parse(json);
-
-   await collections.settings.updateOne(
-     {
-       sessionId: locals.sessionId,
-     },
-     {
-       $set: {
-         ...settings,
-         updatedAt: new Date(),
-       },
-       $setOnInsert: {
-         createdAt: new Date(),
-       },
-     },
-     {
-       upsert: true,
-     }
-   );
-
-   return new Response();
- }
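For context, a sketch of the request this handler accepts, expressed as a Python client. The field names come from the zod schema above; the base URL and the session cookie name are assumptions, not confirmed by the source:

```python
import requests
from datetime import datetime, timezone

resp = requests.patch(
    "http://localhost:5173/settings",       # assumed dev-server URL
    cookies={"session": "<session-id>"},    # assumed session cookie name
    json={
        "shareConversationsWithModelAuthors": False,
        # must coerce to a date no older than 5 minutes, per the schema
        "ethicsModalAcceptedAt": datetime.now(timezone.utc).isoformat(),
    },
)
resp.raise_for_status()  # the handler returns an empty Response on success
```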
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md DELETED
@@ -1,207 +0,0 @@
- # Installation
-
- This page provides the basic prerequisites to run OpenVQA, including the setup of hardware, software, and datasets.
-
- ## Hardware & Software Setup
-
- A machine with at least **1 GPU (>= 8GB)**, **20GB memory** and **50GB free disk space** is required. We strongly recommend using an SSD drive to guarantee high-speed I/O.
-
- The following packages are required to build the project correctly.
-
- - [Python](https://www.python.org/downloads/) >= 3.5
- - [Cuda](https://developer.nvidia.com/cuda-toolkit) >= 9.0 and [cuDNN](https://developer.nvidia.com/cudnn)
- - [PyTorch](http://pytorch.org/) >= 0.4.1 with CUDA (**PyTorch 1.x is also supported**).
- - [SpaCy](https://spacy.io/), with the [GloVe](https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz) vectors initialized as follows:
-
- ```bash
- $ pip install -r requirements.txt
- $ wget https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz -O en_vectors_web_lg-2.1.0.tar.gz
- $ pip install en_vectors_web_lg-2.1.0.tar.gz
- ```
-
- ## Dataset Setup
-
- The following datasets should be prepared before running the experiments.
-
- **Note that if you only want to run experiments on one specific dataset, you can focus on the setup for that one and skip the rest.**
-
- ### VQA-v2
-
- - Image Features
-
- The image features are extracted using the [bottom-up-attention](https://github.com/peteanderson80/bottom-up-attention) strategy, with each image represented as a dynamic number (from 10 to 100) of 2048-D features. We store the features for each image in a `.npz` file. You can prepare the visual features yourself or download the extracted features from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EsfBlbmK1QZFhCOFpr4c5HUBzUV0aH2h1McnPG1jWAxytQ?e=2BZl8O) or [BaiduYun](https://pan.baidu.com/s/1C7jIWgM3hFPv-YXJexItgw#list/path=%2F). The download contains three files: **train2014.tar.gz, val2014.tar.gz, and test2015.tar.gz**, corresponding to the features of the train/val/test images of *VQA-v2*, respectively.
-
- All the image feature files are unzipped and placed in the `data/vqa/feats` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- vqa
-     |   |-- feats
-     |   |   |-- train2014
-     |   |   |   |-- COCO_train2014_...jpg.npz
-     |   |   |   |-- ...
-     |   |   |-- val2014
-     |   |   |   |-- COCO_val2014_...jpg.npz
-     |   |   |   |-- ...
-     |   |   |-- test2015
-     |   |   |   |-- COCO_test2015_...jpg.npz
-     |   |   |   |-- ...
- ```
-
- - QA Annotations
-
- Download all the annotation `json` files for VQA-v2, including the [train questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Train_mscoco.zip), [val questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Val_mscoco.zip), [test questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Test_mscoco.zip), [train answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Train_mscoco.zip), and [val answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Val_mscoco.zip).
-
- In addition, we use the VQA samples from the Visual Genome to augment the training samples. We pre-processed these samples with two rules:
-
- 1. Select the QA pairs whose corresponding images appear in the MS-COCO *train* and *val* splits;
- 2. Select the QA pairs whose answers appear in the processed answer list (i.e., answers occurring more than 8 times among all *VQA-v2* answers).
-
- We provide our processed VG question and annotation files; you can download them from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EmVHVeGdck1IifPczGmXoaMBFiSvsegA6tf_PqxL3HXclw) or [BaiduYun](https://pan.baidu.com/s/1QCOtSxJGQA01DnhUg7FFtQ#list/path=%2F).
-
- All the QA annotation files are unzipped and placed in the `data/vqa/raw` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- vqa
-     |   |-- raw
-     |   |   |-- v2_OpenEnded_mscoco_train2014_questions.json
-     |   |   |-- v2_OpenEnded_mscoco_val2014_questions.json
-     |   |   |-- v2_OpenEnded_mscoco_test2015_questions.json
-     |   |   |-- v2_OpenEnded_mscoco_test-dev2015_questions.json
-     |   |   |-- v2_mscoco_train2014_annotations.json
-     |   |   |-- v2_mscoco_val2014_annotations.json
-     |   |   |-- VG_questions.json
-     |   |   |-- VG_annotations.json
- ```
-
- ### GQA
-
- - Image Features
-
- Download the [spatial features](https://nlp.stanford.edu/data/gqa/spatialFeatures.zip) and [object features](https://nlp.stanford.edu/data/gqa/objectFeatures.zip) for GQA from its official website. The **spatial feature files** include `gqa_spatial_*.h5` and `gqa_spatial_info.json`; the **object feature files** include `gqa_objects_*.h5` and `gqa_objects_info.json`.
- To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/gqa/gqa_feat_preproc.py) to transform the `.h5` feature files into multiple `.npz` files, with each file corresponding to one image.
-
- ```bash
- $ cd data/gqa
-
- $ unzip spatialFeatures.zip
- $ python gqa_feat_preproc.py --mode=spatial --spatial_dir=./spatialFeatures --out_dir=./feats/gqa-grid
- $ rm -r spatialFeatures.zip ./spatialFeatures
-
- $ unzip objectFeatures.zip
- $ python gqa_feat_preproc.py --mode=object --object_dir=./objectFeatures --out_dir=./feats/gqa-frcn
- $ rm -r objectFeatures.zip ./objectFeatures
- ```
-
- All the processed feature files are placed in the `data/gqa/feats` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- gqa
-     |   |-- feats
-     |   |   |-- gqa-frcn
-     |   |   |   |-- 1.npz
-     |   |   |   |-- ...
-     |   |   |-- gqa-grid
-     |   |   |   |-- 1.npz
-     |   |   |   |-- ...
- ```
-
- - Questions and Scene Graphs
-
- Download all the GQA [QA files](https://nlp.stanford.edu/data/gqa/questions1.2.zip) from the official site, including all the splits needed for training, validation, and testing. Download the [scene graph files](https://nlp.stanford.edu/data/gqa/sceneGraphs.zip) for the `train` and `val` splits from the official site. Download the [supporting files](https://nlp.stanford.edu/data/gqa/eval.zip) from the official site, including the `train` and `val` choice files needed for evaluation.
-
- All the question files and scene graph files are unzipped and placed in the `data/gqa/raw` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- gqa
-     |   |-- raw
-     |   |   |-- questions1.2
-     |   |   |   |-- train_all_questions
-     |   |   |   |   |-- train_all_questions_0.json
-     |   |   |   |   |-- ...
-     |   |   |   |   |-- train_all_questions_9.json
-     |   |   |   |-- train_balanced_questions.json
-     |   |   |   |-- val_all_questions.json
-     |   |   |   |-- val_balanced_questions.json
-     |   |   |   |-- testdev_all_questions.json
-     |   |   |   |-- testdev_balanced_questions.json
-     |   |   |   |-- test_all_questions.json
-     |   |   |   |-- test_balanced_questions.json
-     |   |   |   |-- challenge_all_questions.json
-     |   |   |   |-- challenge_balanced_questions.json
-     |   |   |   |-- submission_all_questions.json
-     |   |   |-- eval
-     |   |   |   |-- train_choices
-     |   |   |   |   |-- train_all_questions_0.json
-     |   |   |   |   |-- ...
-     |   |   |   |   |-- train_all_questions_9.json
-     |   |   |   |-- val_choices.json
-     |   |   |-- sceneGraphs
-     |   |   |   |-- train_sceneGraphs.json
-     |   |   |   |-- val_sceneGraphs.json
- ```
-
- ### CLEVR
-
- - Images, Questions and Scene Graphs
-
- Download the complete [CLEVR v1.0](https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip) package from the official site, including all the splits needed for training, validation, and testing.
-
- All the image files, question files, and scene graph files are unzipped and placed in the `data/clevr/raw` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- clevr
-     |   |-- raw
-     |   |   |-- images
-     |   |   |   |-- train
-     |   |   |   |   |-- CLEVR_train_000000.png
-     |   |   |   |   |-- ...
-     |   |   |   |   |-- CLEVR_train_069999.png
-     |   |   |   |-- val
-     |   |   |   |   |-- CLEVR_val_000000.png
-     |   |   |   |   |-- ...
-     |   |   |   |   |-- CLEVR_val_014999.png
-     |   |   |   |-- test
-     |   |   |   |   |-- CLEVR_test_000000.png
-     |   |   |   |   |-- ...
-     |   |   |   |   |-- CLEVR_test_014999.png
-     |   |   |-- questions
-     |   |   |   |-- CLEVR_train_questions.json
-     |   |   |   |-- CLEVR_val_questions.json
-     |   |   |   |-- CLEVR_test_questions.json
-     |   |   |-- scenes
-     |   |   |   |-- CLEVR_train_scenes.json
-     |   |   |   |-- CLEVR_val_scenes.json
- ```
-
- - Image Features
-
- To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/clevr/clevr_extract_feat.py) to extract image features using a pre-trained ResNet-101 model, as most previous works do, and generate `.npz` files, with each file corresponding to one image.
-
- ```bash
- $ cd data/clevr
-
- $ python clevr_extract_feat.py --mode=all --gpu=0
- ```
-
- All the processed feature files are placed in the `data/clevr/feats` folder to form the following tree structure:
-
- ```
- |-- data
-     |-- clevr
-     |   |-- feats
-     |   |   |-- train
-     |   |   |   |-- 1.npz
-     |   |   |   |-- ...
-     |   |   |-- val
-     |   |   |   |-- 1.npz
-     |   |   |   |-- ...
-     |   |   |-- test
-     |   |   |   |-- 1.npz
-     |   |   |   |-- ...
- ```
 
spaces/CVPR/LIVE/thrust/thrust/set_operations.h DELETED
The diff for this file is too large to render. See raw diff
 
spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py DELETED
@@ -1,158 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
- import bisect
- import copy
- import logging
- import os
- import torch
- import torch.utils.data
- import torch.distributed
- from torch.utils.data.dataset import ConcatDataset
-
- from .catalog import DatasetCatalog
- from .clip_datasets.clip_img_txt_pair_tsv import CLIPImgTxtPairTSVDataset
-
- from .transforms.build import build_clip_transforms
-
- def config_tsv_dataset_args(cfg, dataset_file, factory_name=None, is_train=True):
-     ############### code removed as tsv_dataset_name = factory_name = "CLIPImgTxtPairTSVDataset" ##############
-     if factory_name is not None:
-         tsv_dataset_name = factory_name
-
-     if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"]:
-         # no need for extra arguments
-         args = {}
-         args['args'] = cfg
-         args['seq_len'] = cfg.DATASETS.MAX_SEQ_LENGTH  # cfg.max_seq_length
-
-     return args, tsv_dataset_name
-
-
- def build_dataset(cfg, transforms, dataset_catalog, is_train=True, is_aux=False):
-     """
-     Arguments:
-         cfg: config file.
-         transforms (callable): transforms to apply to each (image, target) sample
-         dataset_catalog (DatasetCatalog): contains the information on how to construct a dataset.
-         is_train (bool): whether to setup the dataset for training or testing
-     """
-
-     dataset_list = (cfg.DATASETS.TRAIN if not is_aux else cfg.DATASETS.AUX) if is_train else cfg.DATASETS.TEST
-     factory_list = (cfg.DATASETS.FACTORY_TRAIN if not is_aux else cfg.DATASETS.FACTORY_AUX) if is_train else cfg.DATASETS.FACTORY_TEST
-     path_list = (cfg.DATASETS.PATH_TRAIN if not is_aux else cfg.DATASETS.PATH_AUX) if is_train else cfg.DATASETS.PATH_TEST
-
-     if not isinstance(dataset_list, (list, tuple)):
-         raise RuntimeError(
-             "dataset_list should be a list of strings, got {}".format(dataset_list))
-     if not isinstance(factory_list, (list, tuple)):
-         raise RuntimeError(
-             "factory_list should be a list of strings, got {}".format(factory_list))
-     datasets = []
-     target_offset = 0
-     for i, dataset_name in enumerate(dataset_list):
-         factory_name = factory_list[i] if i < len(factory_list) else None
-
-         if factory_name == "CLIPImgTxtPairTSVDataset":
-             dataset_names_merged = dataset_name.split('+')
-             path_lists_merged = path_list[i].split('+')
-
-             assert len(dataset_names_merged) == len(path_lists_merged), "number of datasets must match that of dataset paths"
-
-             image_tsv_list = []
-             text_tsv_list = []
-             dataset_name_list = []
-             map_files = []
-             max_num_tsv = 20  # maximum tsv files to load within a given folder
-
-             for dname, dpath in zip(dataset_names_merged, path_lists_merged):
-                 args, tsv_dataset_name = config_tsv_dataset_args(
-                     cfg, dataset_name, factory_name, is_train
-                 )
-                 factory = CLIPImgTxtPairTSVDataset if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"] else None
-                 prev_len = len(image_tsv_list)
-
-                 isFile = os.path.isfile(dpath)
-                 if isFile:
-                     dpath_listed_files = [os.path.basename(dpath)]
-                     dpath = os.path.dirname(dpath)
-                 else:
-                     dpath_listed_files = sorted(os.listdir(dpath))
-
-                 for filename in dpath_listed_files:
-                     if ("images" in filename or "image" in filename or "img" in filename) and filename.endswith(".tsv"):
-                         image_tsv_list.append(os.path.join(dpath, filename))
-                         if "images" in filename:  # "images" - "text"
-                             text_tsv_list.append(os.path.join(dpath, filename.replace("images", "text")))
-                         elif "image" in filename:  # "image"-"text"
-                             text_tsv_list.append(os.path.join(dpath, filename.replace("image", "text")))
-                         elif "img" in filename:  # "img"-"caption"
-                             text_tsv_list.append(os.path.join(dpath, filename.replace("img", "caption")))
-                     if len(image_tsv_list) - prev_len == max_num_tsv:
-                         break
-                 dataset_name_list += [dname] * (len(image_tsv_list) - prev_len)
-
-                 if dname == "imagenet22k":
-                     map_files += [os.path.join(dpath, 'darknet_data_imagenet.labels.list')] * (len(image_tsv_list) - prev_len)
-                 else:
-                     map_files += [None] * (len(image_tsv_list) - prev_len)
-
-             assert len(image_tsv_list) == len(text_tsv_list), \
-                 "the number of image tsv files must equal that of text tsv files; otherwise check your data!"
-
-             args["image_tsv_file"] = image_tsv_list
-             args["text_tsv_file"] = text_tsv_list
-             args["dataset_name"] = dataset_name_list
-             args["map_file"] = map_files
-             args["filtered_datasets"] = cfg.DATASETS.FILTERED_CLASSIFICATION_DATASETS
-             assert len(image_tsv_list) == len(text_tsv_list) == len(dataset_name_list) == len(map_files)
-
-             print("number of image tsv files: ", len(image_tsv_list))
-             print("number of text tsv files: ", len(text_tsv_list))
-
-             args["is_train"] = is_train
-             args["transforms"] = transforms
-             args["target_offset"] = target_offset
-             if "bpe" in cfg.INPUT.TEXT_TOKENIZER:
-                 from detectron2.data.datasets.clip_prompt_utils import SimpleTokenizer as _Tokenizer
-                 tokenizer = _Tokenizer()
-                 args["tokenizer_type"] = "bpe"
-                 args["tokenizer"] = tokenizer
-             # make dataset from factory
-             dataset = factory(**args)
-             datasets.append(dataset)
-
-     precomputed_tokens = {}
-     dataset_classes = {}
-     for dataset in datasets:
-         if hasattr(dataset, "input_ids_all_classes"):
-             precomputed_tokens["imagenet"] = \
-                 [dataset.input_ids_all_classes, dataset.input_mask_all_classes, dataset.segment_ids_all_classes]
-         if hasattr(dataset, "classnames"):
-             if isinstance(dataset.classnames, dict):
-                 dataset_classes.update(dataset.classnames)
-             else:
-                 dataset_classes[dataset.dataset_name] = dataset.classnames
-
-     # for testing, return a list of datasets
-     if not is_train:
-         return datasets, precomputed_tokens, dataset_classes
-
-     if len(datasets) == 0:
-         return None, None, None
-
-     # for training, concatenate all datasets into a single one
-     dataset = datasets[0]
-     if len(datasets) > 1:
-         dataset = ConcatDataset(datasets)
-     return [dataset], precomputed_tokens, dataset_classes
-
-
- def make_clip_dataset(cfg, is_train=True, is_aux=False, transforms=None):
-     if transforms is None:
-         transforms = build_clip_transforms(cfg, is_train)
-     print("data transforms: ")
-     print(transforms)
-     datasets, precomputed_tokens, dataset_classes = build_dataset(cfg, transforms, DatasetCatalog, is_train, is_aux)
-
-     if not datasets:
-         return None, None, None
-     return datasets, precomputed_tokens, dataset_classes
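For orientation, a minimal sketch of how the entry point above might be driven. The config file name is hypothetical, and the extended keys the function reads (such as `DATASETS.FACTORY_TRAIN` and `DATASETS.PATH_TRAIN`) exist only in this repository's fork of detectron2, so treat the setup as an assumption:

```python
from detectron2.config import get_cfg
from detectron2.data.clip_build import make_clip_dataset

cfg = get_cfg()                                      # this fork registers the CLIP-specific keys
cfg.merge_from_file("configs/CLIP_pretrain.yaml")    # hypothetical config file

# Training path: returns a single (possibly concatenated) dataset plus metadata.
datasets, precomputed_tokens, dataset_classes = make_clip_dataset(cfg, is_train=True)
if datasets is not None:
    print("training image-text pairs:", len(datasets[0]))
```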
 
spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py DELETED
@@ -1,14 +0,0 @@
- from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (
-     dataloader,
-     lr_multiplier,
-     model,
-     optimizer,
-     train,
- )
-
- train.max_iter *= 4  # 100ep -> 400ep
-
- lr_multiplier.scheduler.milestones = [
-     milestone * 4 for milestone in lr_multiplier.scheduler.milestones
- ]
- lr_multiplier.scheduler.num_updates = train.max_iter
 
spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py DELETED
@@ -1,22 +0,0 @@
- from pathlib import Path
- from typing import List
-
- from pil_utils import BuildImage
-
- from meme_generator import add_meme
- from meme_generator.utils import make_jpg_or_gif
-
- img_dir = Path(__file__).parent / "images"
-
-
- def dont_go_near(images: List[BuildImage], texts, args):
-     frame = BuildImage.open(img_dir / "0.png")
-
-     def make(img: BuildImage) -> BuildImage:
-         img = img.convert("RGBA").resize((170, 170), keep_ratio=True)
-         return frame.copy().paste(img, (23, 231), alpha=True)
-
-     return make_jpg_or_gif(images[0], make)
-
-
- add_meme("dont_go_near", dont_go_near, min_images=1, max_images=1, keywords=["不要靠近"])
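A quick local test of the meme function above, as a sketch: the input image path is hypothetical, and it assumes `make_jpg_or_gif` returns an in-memory `BytesIO` as in the upstream meme-generator utilities:

```python
from pil_utils import BuildImage
from meme_generator.memes.dont_go_near import dont_go_near

avatar = BuildImage.open("avatar.png")            # hypothetical input image
result = dont_go_near([avatar], texts=[], args=None)  # texts/args are unused here
with open("dont_go_near_out.png", "wb") as f:
    f.write(result.getvalue())                    # BytesIO from make_jpg_or_gif
```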
 
spaces/CoWork/dreambooth-training-public/app.py DELETED
@@ -1,687 +0,0 @@
1
- from subprocess import getoutput
2
- import os
3
-
4
- gpu_info = getoutput('nvidia-smi')
5
- if("A10G" in gpu_info):
6
- which_gpu = "A10G"
7
- os.system(f"pip install --no-deps xformers==0.0.16rc425")
8
- elif("T4" in gpu_info):
9
- which_gpu = "T4"
10
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
11
- else:
12
- which_gpu = "CPU"
13
-
14
- import gradio as gr
15
- from pathlib import Path
16
- import argparse
17
- import shutil
18
- from train_dreambooth import run_training
19
- from convertosd import convert
20
- from PIL import Image
21
- from slugify import slugify
22
- import requests
23
- import torch
24
- import zipfile
25
- import tarfile
26
- import urllib.parse
27
- import gc
28
- from diffusers import StableDiffusionPipeline
29
- from huggingface_hub import snapshot_download, update_repo_visibility, HfApi
30
-
31
- is_spaces = True if "SPACE_ID" in os.environ else False
32
- if(is_spaces):
33
- is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
34
- else:
35
- is_shared_ui = False
36
- is_gpu_associated = torch.cuda.is_available()
37
-
38
- os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
39
-
40
- if(is_gpu_associated):
41
- model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
42
- model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"])
43
- model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"])
44
- safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
45
- model_to_load = model_v1
46
-
47
- def swap_base_model(selected_model):
48
- if(is_gpu_associated):
49
- global model_to_load
50
- if(selected_model == "v1-5"):
51
- model_to_load = model_v1
52
- elif(selected_model == "v2-1-768"):
53
- model_to_load = model_v2
54
- else:
55
- model_to_load = model_v2_512
56
-
57
-
58
-
59
- css = '''
60
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
61
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
62
- #component-4, #component-3, #component-10{min-height: 0}
63
- .duplicate-button img{margin: 0}
64
- '''
65
- maximum_concepts = 3
66
-
67
- def swap_text(option, base):
68
- resize_width = 768 if base == "v2-1-768" else 512
69
- mandatory_liability = "You must have the right to do so and you are liable for the images you use, for example"
70
- if(option == "object"):
71
- instance_prompt_example = "cttoy"
72
- freeze_for = 30
73
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=cat-toy.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
74
- elif(option == "person"):
75
- instance_prompt_example = "julcto"
76
- freeze_for = 70
77
- #show_prior_preservation = True if base != "v2-1-768" else False
78
- show_prior_preservation=False
79
- if(show_prior_preservation):
80
- prior_preservation_box_update = gr.update(visible=show_prior_preservation)
81
- else:
82
- prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
83
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=person.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
84
- elif(option == "style"):
85
- instance_prompt_example = "trsldamrl"
86
- freeze_for = 10
87
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=trsl_style.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
88
-
89
- def count_files(*inputs):
90
- file_counter = 0
91
- concept_counter = 0
92
- for i, input in enumerate(inputs):
93
- if(i < maximum_concepts):
94
- files = inputs[i]
95
- if(files):
96
- concept_counter+=1
97
- file_counter+=len(files)
98
- uses_custom = inputs[-1]
99
- type_of_thing = inputs[-4]
100
- selected_model = inputs[-5]
101
- experimental_faces = inputs[-6]
102
- if(uses_custom):
103
- Training_Steps = int(inputs[-3])
104
- else:
105
- Training_Steps = file_counter*150
106
- if(type_of_thing == "person" and Training_Steps > 2400):
107
- Training_Steps = 2400 #Avoid overfitting on person faces
108
- if(is_spaces):
109
- if(selected_model == "v1-5"):
110
- its = 1.1 if which_gpu == "T4" else 1.8
111
- if(experimental_faces):
112
- its = 1
113
- elif(selected_model == "v2-1-512"):
114
- its = 0.8 if which_gpu == "T4" else 1.5
115
- if(experimental_faces):
116
- its = 0.7
117
- elif(selected_model == "v2-1-768"):
118
- its = 0.48 if which_gpu == "T4" else 0.85
119
-
120
- gpu_price = 0.60 if which_gpu == "T4" else 1.10
121
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
122
- The setup, compression, and upload of the model can take up to 20 minutes.<br>As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, <span style="font-size: 120%"><b>the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.</b></span><br><br>
123
- If you check the box below, the GPU attribution will be automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.<br><br>'''
124
- else:
125
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.<br><br>'''
126
-
127
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
128
-
129
- def update_steps(*files_list):
130
- file_counter = 0
131
- for i, files in enumerate(files_list):
132
- if(files):
133
- file_counter+=len(files)
134
- return(gr.update(value=file_counter*200))
135
-
136
- def visualise_progress_bar():
137
- return gr.update(visible=True)
138
-
139
- def pad_image(image):
140
- w, h = image.size
141
- if w == h:
142
- return image
143
- elif w > h:
144
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
145
- new_image.paste(image, (0, (w - h) // 2))
146
- return new_image
147
- else:
148
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
149
- new_image.paste(image, ((h - w) // 2, 0))
150
- return new_image
151
-
152
- def validate_model_upload(hf_token, model_name):
153
- if(hf_token != ''):
154
- api = HfApi()
155
- try:
156
- _ = api.whoami(hf_token)
157
- except:
158
- raise gr.Error("You have inserted an invalid Hugging Face token")
159
- try:
160
- if(is_spaces):
161
- update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
162
- except:
163
- raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
164
- else:
165
- raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
166
- if(model_name == ""):
167
- raise gr.Error("Please fill in your model's name")
168
-
169
- def swap_hardware(hf_token, hardware="cpu-basic"):
170
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
171
- headers = { "authorization" : f"Bearer {hf_token}"}
172
- body = {'flavor': hardware}
173
- requests.post(hardware_url, json = body, headers=headers)
174
-
175
- def swap_sleep_time(hf_token,sleep_time):
176
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
177
- headers = { "authorization" : f"Bearer {hf_token}"}
178
- body = {'seconds':sleep_time}
179
- requests.post(sleep_time_url,json=body,headers=headers)
180
-
181
- def get_sleep_time(hf_token):
182
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
183
- headers = { "authorization" : f"Bearer {hf_token}"}
184
- response = requests.get(sleep_time_url,headers=headers)
185
- try:
186
- gcTimeout = response.json()['runtime']['gcTimeout']
187
- except:
188
- gcTimeout = None
189
- return gcTimeout
190
-
191
- def write_to_community(title, description,hf_token):
192
- from huggingface_hub import HfApi
193
- api = HfApi()
194
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
195
-
196
- def train(progress=gr.Progress(track_tqdm=True), *inputs):
197
- which_model = inputs[-10]
198
- if(which_model == ""):
199
- raise gr.Error("You forgot to select a base model to use")
200
-
201
- if is_shared_ui:
202
- raise gr.Error("This Space only works in duplicated instances")
203
- if not is_gpu_associated:
204
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
205
- hf_token = inputs[-5]
206
- model_name = inputs[-7]
207
- if(is_spaces):
208
- sleep_time = get_sleep_time(hf_token)
209
- if sleep_time:
210
- swap_sleep_time(hf_token, -1)
211
- remove_attribution_after = inputs[-6]
212
- else:
213
- remove_attribution_after = False
214
-
215
- if(remove_attribution_after):
216
- validate_model_upload(hf_token, model_name)
217
-
218
- torch.cuda.empty_cache()
219
- if 'pipe' in globals():
220
- global pipe, pipe_is_set
221
- del pipe
222
- pipe_is_set = False
223
- gc.collect()
224
-
225
- if os.path.exists("output_model"): shutil.rmtree('output_model')
226
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
227
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
228
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
229
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
230
- file_counter = 0
231
- resolution = 512 if which_model != "v2-1-768" else 768
232
- for i, input in enumerate(inputs):
233
- if(i < maximum_concepts-1):
234
- if(input):
235
- os.makedirs('instance_images',exist_ok=True)
236
- files = inputs[i+(maximum_concepts*2)]
237
- prompt = inputs[i+maximum_concepts]
238
- if(prompt == "" or prompt == None):
239
- raise gr.Error("You forgot to define your concept prompt")
240
- for j, file_temp in enumerate(files):
241
- file = Image.open(file_temp.name)
242
- image = pad_image(file)
243
- image = image.resize((resolution, resolution))
244
- extension = file_temp.name.split(".")[-1]  # last segment; [1] breaks on names with extra dots
245
- image = image.convert('RGB')
246
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
247
- file_counter += 1
248
-
249
- os.makedirs('output_model',exist_ok=True)
250
- uses_custom = inputs[-1]
251
- type_of_thing = inputs[-4]
252
- experimental_face_improvement = inputs[-9]
253
-
254
- if(uses_custom):
255
- Training_Steps = int(inputs[-3])
256
- Train_text_encoder_for = int(inputs[-2])
257
- else:
258
- if(type_of_thing == "object"):
259
- Train_text_encoder_for=30
260
-
261
- elif(type_of_thing == "style"):
262
- Train_text_encoder_for=15
263
-
264
- elif(type_of_thing == "person"):
265
- Train_text_encoder_for=70
266
-
267
- Training_Steps = file_counter*150
268
- if(type_of_thing == "person" and Training_Steps > 2600):
269
- Training_Steps = 2600 #Avoid overfitting on people's faces
270
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
271
- gradient_checkpointing = experimental_face_improvement or which_model != "v1-5"
272
- cache_latents = which_model != "v1-5"
273
- if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
274
- args_general = argparse.Namespace(
275
- image_captions_filename = True,
276
- train_text_encoder = stptxt > 0,
277
- stop_text_encoder_training = stptxt,
278
- save_n_steps = 0,
279
- pretrained_model_name_or_path = model_to_load,
280
- instance_data_dir="instance_images",
281
- class_data_dir=None,
282
- output_dir="output_model",
283
- instance_prompt="",
284
- seed=42,
285
- resolution=resolution,
286
- mixed_precision="fp16",
287
- train_batch_size=1,
288
- gradient_accumulation_steps=1,
289
- use_8bit_adam=True,
290
- learning_rate=2e-6,
291
- lr_scheduler="polynomial",
292
- lr_warmup_steps = 0,
293
- max_train_steps=Training_Steps,
294
- gradient_checkpointing=gradient_checkpointing,
295
- cache_latents=cache_latents,
296
- )
297
- print("Starting single training...")
298
- lock_file = open("intraining.lock", "w")
299
- lock_file.close()
300
- try:
301
- run_training(args_general)
302
- except Exception as e:
303
- if(is_spaces):
304
- title="There was an error on during your training"
305
- description=f'''
306
- Unfortunately, there was an error while training your {model_name} model.
307
- Please check it out below. Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training):
308
- ```
309
- {str(e)}
310
- ```
311
- '''
312
- swap_hardware(hf_token, "cpu-basic")
313
- write_to_community(title,description,hf_token)
314
-
315
-
316
- gc.collect()
317
- torch.cuda.empty_cache()
318
- if(which_model == "v1-5"):
319
- print("Adding Safety Checker to the model...")
320
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True)
321
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True)
322
- shutil.copy(f"model_index.json", "output_model/model_index.json")
323
-
324
- if(not remove_attribution_after):
325
- swap_sleep_time(hf_token, sleep_time)
326
- print("Archiving model file...")
327
- with tarfile.open("diffusers_model.tar", "w") as tar:
328
- tar.add("output_model", arcname=os.path.basename("output_model"))
329
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
330
- trained_file = open("hastrained.success", "w")
331
- trained_file.close()
332
- print("Training completed!")
333
- return [
334
- gr.update(visible=False), #progress_bar
335
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
336
- gr.update(visible=True), #try_your_model
337
- gr.update(visible=True), #push_to_hub
338
- gr.update(visible=True), #convert_button
339
- gr.update(visible=False), #training_ongoing
340
- gr.update(visible=True) #completed_training
341
- ]
342
- else:
343
- where_to_upload = inputs[-8]
344
- push(model_name, where_to_upload, hf_token, which_model, True)
345
- swap_hardware(hf_token, "cpu-basic")
346
-
347
- pipe_is_set = False
348
- def generate(prompt, steps):
349
- torch.cuda.empty_cache()
350
- from diffusers import StableDiffusionPipeline
351
- global pipe_is_set
352
- if(not pipe_is_set):
353
- global pipe
354
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
355
- pipe = pipe.to("cuda")
356
- pipe_is_set = True
357
-
358
- image = pipe(prompt, num_inference_steps=steps).images[0]
359
- return(image)
360
-
361
- def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
362
- validate_model_upload(hf_token, model_name)
363
- if(not os.path.exists("model.ckpt")):
364
- convert("output_model", "model.ckpt")
365
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
366
- from huggingface_hub import create_repo
367
- model_name_slug = slugify(model_name)
368
- api = HfApi()
369
- your_username = api.whoami(token=hf_token)["name"]
370
- if(where_to_upload == "My personal profile"):
371
- model_id = f"{your_username}/{model_name_slug}"
372
- else:
373
- model_id = f"sd-dreambooth-library/{model_name_slug}"
374
- headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
375
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
376
-
377
- print(f"Starting to upload the model {model_id}...")
378
- images_upload = os.listdir("instance_images")
379
- image_string = ""
380
- instance_prompt_list = []
381
- previous_instance_prompt = ''
382
- for i, image in enumerate(images_upload):
383
- instance_prompt = image.split("_")[0]
384
- if(instance_prompt != previous_instance_prompt):
385
- title_instance_prompt_string = instance_prompt
386
- instance_prompt_list.append(instance_prompt)
387
- else:
388
- title_instance_prompt_string = ''
389
- previous_instance_prompt = instance_prompt
390
- image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
391
- {image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})'''
392
- readme_text = f'''---
393
- license: creativeml-openrail-m
394
- tags:
395
- - text-to-image
396
- widget:
397
- - text: {instance_prompt_list[0]}
398
- ---
399
- ### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the {which_model} base model
400
-
401
- You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
402
-
403
- Sample pictures of:
404
- {image_string}
405
- '''
406
- #Save the readme to a file
407
- readme_file = open("model.README.md", "w")
408
- readme_file.write(readme_text)
409
- readme_file.close()
410
- #Save the token identifier to a file
411
- text_file = open("token_identifier.txt", "w")
412
- text_file.write(', '.join(instance_prompt_list))
413
- text_file.close()
414
- try:
415
- create_repo(model_id,private=True, token=hf_token)
416
- except:
417
- import time
418
- epoch_time = str(int(time.time()))
419
- create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
420
- operations = [
421
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
422
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
423
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
424
- ]
425
- api.create_commit(
426
- repo_id=model_id,
427
- operations=operations,
428
- commit_message=f"Upload the model {model_name}",
429
- token=hf_token
430
- )
431
- api.upload_folder(
432
- folder_path="output_model",
433
- repo_id=model_id,
434
- token=hf_token
435
- )
436
- api.upload_folder(
437
- folder_path="instance_images",
438
- path_in_repo="concept_images",
439
- repo_id=model_id,
440
- token=hf_token
441
- )
442
- if is_spaces:
443
- if(not comes_from_automated):
444
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
445
- else:
446
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
447
- title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!"
448
- description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}"
449
- write_to_community(title, description, hf_token)
450
- #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
451
- print("Model uploaded successfully!")
452
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
453
-
454
- def convert_to_ckpt():
455
- if 'pipe' in globals():
456
- global pipe, pipe_is_set
457
- del pipe
458
- pipe_is_set = False
459
- gc.collect()
460
- convert("output_model", "model.ckpt")
461
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
462
-
463
- def check_status(top_description):
464
- if os.path.exists("hastrained.success"):
465
- if is_spaces:
466
- update_top_tag = gr.update(value=f'''
467
- <div class="gr-prose" style="max-width: 80%">
468
- <h2>Your model has finished training ✅</h2>
469
- <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub). Once you are done, your model is safe, and you don't want to train a new one, go to the <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}" target="_blank">settings page</a> and downgrade your Space to a CPU Basic</p>
470
- </div>
471
- ''')
472
- else:
473
- update_top_tag = gr.update(value=f'''
474
- <div class="gr-prose" style="max-width: 80%">
475
- <h2>Your model has finished training ✅</h2>
476
- <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub).</p>
477
- </div>
478
- ''')
479
- show_outputs = True
480
- elif os.path.exists("intraining.lock"):
481
- update_top_tag = gr.update(value='''
482
- <div class="gr-prose" style="max-width: 80%">
483
- <h2>Don't worry, your model is still training! ⌛</h2>
484
- <p>You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above here to check the training status. Once training is done, reload this tab to interact with your model</p>
485
- </div>
486
- ''')
487
- show_outputs = False
488
- else:
489
- update_top_tag = gr.update(value=top_description)
490
- show_outputs = False
491
- if os.path.exists("diffusers_model.tar"):
492
- update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"])
493
- else:
494
- update_files_tag = gr.update(visible=show_outputs)
495
- return [
496
- update_top_tag, #top_description
497
- gr.update(visible=show_outputs), #try_your_model
498
- gr.update(visible=show_outputs), #push_to_hub
499
- update_files_tag, #result
500
- gr.update(visible=show_outputs), #convert_button
501
- ]
502
-
503
- def checkbox_swap(checkbox):
504
- return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)]
505
-
506
- with gr.Blocks(css=css) as demo:
507
- with gr.Box():
508
- if is_shared_ui:
509
- top_description = gr.HTML(f'''
510
- <div class="gr-prose" style="max-width: 80%">
511
- <h2>Attention - This Space doesn't work in this shared UI</h2>
512
- <p>For it to work, you can either run locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it!&nbsp;&nbsp;<a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p>
513
- <img class="instruction" src="file=duplicate.png">
514
- <img class="arrow" src="file=arrow.png" />
515
- </div>
516
- ''')
517
- elif(is_spaces):
518
- if(is_gpu_associated):
519
- top_description = gr.HTML(f'''
520
- <div class="gr-prose" style="max-width: 80%">
521
- <h2>You have successfully associated a {which_gpu} GPU to the Dreambooth Training Space 🎉</h2>
522
- <p>You can now train your model! You will be billed by the minute from when you activated the GPU until when it is turned it off.</p>
523
- </div>
524
- ''')
525
- else:
526
- top_description = gr.HTML(f'''
527
- <div class="gr-prose" style="max-width: 80%">
528
- <h2>You have successfully duplicated the Dreambooth Training Space 🎉</h2>
529
- <p>There's only one step left before you can train your model: <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}/settings" style="text-decoration: underline" target="_blank">attribute a <b>T4-small or A10G-small GPU</b> to it (via the Settings tab)</a> and run the training below. You will be billed by the minute from when you activate the GPU until when it is turned it off.</p>
530
- </div>
531
- ''')
532
- else:
533
- top_description = gr.HTML(f'''
534
- <div class="gr-prose" style="max-width: 80%">
535
- <h2>You have successfully cloned the Dreambooth Training Space locally 🎉</h2>
536
- <p>Do a <code>pip install requirements-local.txt</code></p>
537
- </div>
538
- ''')
539
- gr.Markdown("# Dreambooth Training UI 💭")
540
- gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)")
541
-
542
- with gr.Row() as what_are_you_training:
543
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
544
- with gr.Column():
545
- base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True)
546
-
547
- #Very hacky approach to emulate dynamically created Gradio components
548
- with gr.Row() as upload_your_concept:
549
- with gr.Column():
550
- thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, for example:")
551
- thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
552
- thing_image_example = gr.HTML('''<img src="file=cat-toy.png" />''')
553
- things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
554
-
555
- with gr.Column():
556
- file_collection = []
557
- concept_collection = []
558
- buttons_collection = []
559
- delete_collection = []
560
- is_visible = []
561
-
562
- row = [None] * maximum_concepts
563
- for x in range(maximum_concepts):
564
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
565
- if(x == 0):
566
- visible = True
567
- is_visible.append(gr.State(value=True))
568
- else:
569
- visible = False
570
- is_visible.append(gr.State(value=False))
571
-
572
- file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
573
- with gr.Column(visible=visible) as row[x]:
574
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
575
- with gr.Row():
576
- if(x < maximum_concepts-1):
577
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
578
- if(x > 0):
579
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
580
-
581
- counter_add = 1
582
- for button in buttons_collection:
583
- if(counter_add < len(buttons_collection)):
584
- button.click(lambda:
585
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
586
- None,
587
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
588
- else:
589
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
590
- counter_add += 1
591
-
592
- counter_delete = 1
593
- for delete_button in delete_collection:
594
- if(counter_delete < len(delete_collection)+1):
595
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
596
- counter_delete += 1
597
-
598
- with gr.Accordion("Custom Settings", open=False):
599
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
600
- gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
601
- steps = gr.Number(label="How many steps", value=2400)
602
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
603
-
604
- with gr.Box(visible=False) as training_summary:
605
- training_summary_text = gr.HTML("", visible=True, label="Training Summary")
606
- is_advanced_visible = is_spaces
607
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
608
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
609
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
610
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
611
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
612
-
613
- train_btn = gr.Button("Start Training")
614
- progress_bar = gr.Textbox(visible=False)
615
- if(is_shared_ui):
616
- training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
617
- elif(not is_gpu_associated):
618
- training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
619
- else:
620
- training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
621
-
622
-
623
- #Post-training UI
624
- completed_training = gr.Markdown('''# ✅ Training completed.
625
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
626
-
627
- with gr.Row():
628
- with gr.Box(visible=False) as try_your_model:
629
- gr.Markdown("## Try your model")
630
- prompt = gr.Textbox(label="Type your prompt")
631
- result_image = gr.Image()
632
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
633
- generate_button = gr.Button("Generate Image")
634
-
635
- with gr.Box(visible=False) as push_to_hub:
636
- gr.Markdown("## Push to Hugging Face Hub")
637
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
638
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
639
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
640
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
641
-
642
- push_button = gr.Button("Push to the Hub")
643
-
644
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
645
- success_message_upload = gr.Markdown(visible=False)
646
- convert_button = gr.Button("Convert to CKPT", visible=False)
647
-
648
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
649
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
650
-
651
- #Swap the base model
652
-
653
- base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
654
- #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar)
655
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
656
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
657
- for file in file_collection:
658
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
659
- file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
660
-
661
- thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
662
- base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
663
- steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
664
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
665
-
666
- #Give more options if the user wants to finish everything after training
667
- if(is_spaces):
668
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
669
- #Add a message for while it is in training
670
-
671
- #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
672
-
673
- #The main train function
674
- train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar)
675
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
676
-
677
- #Button to generate an image from your trained model after training
678
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
679
- #Button to push the model to the Hugging Face Hub
680
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
681
- #Button to convert the model to ckpt format
682
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
683
-
684
- #Checks if the training is running
685
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
686
-
687
- demo.queue(default_enabled=False).launch(debug=True)
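Of the helpers in the file above, `pad_image` is the easiest to sanity-check in isolation: it letterboxes a non-square image onto a black square canvas before the square resize in `train`. A minimal sketch, assuming only Pillow and the `pad_image` definition above:

```python
from PIL import Image

# A 300x200 landscape image becomes a 300x300 square, centered vertically
# on a black background -- the behavior of pad_image defined above.
img = Image.new("RGB", (300, 200), (255, 255, 255))
padded = pad_image(img)
print(padded.size)  # (300, 300)
```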
 
 
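The Space-management helpers (`swap_hardware`, `swap_sleep_time`, `get_sleep_time`) all follow one pattern: a bearer-authenticated request against the Hub's Space endpoints. Below is a standalone sketch of the hardware swap, with the endpoint and body taken from the code above; the `HF_TOKEN` environment variable is an assumption for illustration.

```python
import os
import requests

def request_hardware(flavor: str = "cpu-basic") -> int:
    # Endpoint and body mirror swap_hardware above; the app discards the
    # response, but returning the status code makes failures visible.
    url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
    headers = {"authorization": f"Bearer {os.environ['HF_TOKEN']}"}  # assumed env var
    resp = requests.post(url, json={"flavor": flavor}, headers=headers)
    return resp.status_code
```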
spaces/CofAI/picscore/picscore.py DELETED
@@ -1,7 +0,0 @@
1
- import gradio as gr
2
-
3
- description = """<div>
4
- PICSCORE BETA-1
5
- </div>
6
- """
7
- gr.Interface.load("CompVis/stable-diffusion-v1-4", description=description).launch()
 
 
spaces/CofAI/picscore1/README.md DELETED
@@ -1,15 +0,0 @@
1
- ---
2
- title: PicScore — Stable Diffusion
3
- emoji: 🖼
4
- colorFrom: indigo
5
- colorTo: purple
6
- sdk: static
7
- pinned: true
8
- license: other
9
- ---
10
-
11
- #tags: StableDiffusion, SD, PicScore, prompt, picgen
12
-
13
- ---
14
-
15
- This is PicScore with Stable Diffusion 2.1 for FREE!
 
 
spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py DELETED
@@ -1,152 +0,0 @@
1
- import streamlit as st
2
- import pandas as pd
3
- import plotly.express as px
4
- import matplotlib.pyplot as plt
5
- import numpy as np
6
- import plotly.graph_objects as go
7
-
8
- def plot_top_n(df, target_column, n=10):
9
- top_n = df.nlargest(n, target_column)
10
-
11
- # Initialize the bar plot
12
- fig, ax1 = plt.subplots(figsize=(10, 5))
13
-
14
- # Set width for each bar and their positions
15
- width = 0.28
16
- ind = np.arange(len(top_n))
17
-
18
- # Plot target_column and MMLU_average on the primary y-axis with adjusted positions
19
- ax1.bar(ind - width, top_n[target_column], width=width, color='blue', label=target_column)
20
- ax1.bar(ind, top_n['MMLU_average'], width=width, color='orange', label='MMLU_average')
21
-
22
- # Set the primary y-axis labels and title
23
- ax1.set_title(f'Top {n} performing models on {target_column}')
24
- ax1.set_xlabel('Model')
25
- ax1.set_ylabel('Score')
26
-
27
- # Create a secondary y-axis for Parameters
28
- ax2 = ax1.twinx()
29
-
30
- # Plot Parameters as bars on the secondary y-axis with adjusted position
31
- ax2.bar(ind + width, top_n['Parameters'], width=width, color='red', label='Parameters')
32
-
33
- # Set the secondary y-axis labels
34
- ax2.set_ylabel('Parameters', color='red')
35
- ax2.tick_params(axis='y', labelcolor='red')
36
-
37
- # Set the x-ticks and their labels
38
- ax1.set_xticks(ind)
39
- ax1.set_xticklabels(top_n.index, rotation=45, ha="right")
40
-
41
- # Adjust the legend
42
- fig.tight_layout()
43
- fig.legend(loc='center left', bbox_to_anchor=(1, 0.5))
44
-
45
- # Show the plot
46
- st.pyplot(fig)
47
-
48
- # Function to create an unfilled radar chart
49
- def create_radar_chart_unfilled(df, model_names, metrics):
50
- fig = go.Figure()
51
- min_value = df.loc[model_names, metrics].min().min()
52
- max_value = df.loc[model_names, metrics].max().max()
53
- for model_name in model_names:
54
- values_model = df.loc[model_name, metrics]
55
- fig.add_trace(go.Scatterpolar(
56
- r=values_model,
57
- theta=metrics,
58
- name=model_name
59
- ))
60
-
61
- fig.update_layout(
62
- polar=dict(
63
- radialaxis=dict(
64
- visible=True,
65
- range=[min_value, max_value]
66
- )),
67
- showlegend=True,
68
- width=800, # Change the width as needed
69
- height=600 # Change the height as needed
70
- )
71
- return fig
72
-
73
-
74
-
75
- # Function to create a line chart
76
- def create_line_chart(df, model_names, metrics):
77
- line_data = []
78
- for model_name in model_names:
79
- values_model = df.loc[model_name, metrics]
80
- for metric, value in zip(metrics, values_model):
81
- line_data.append({'Model': model_name, 'Metric': metric, 'Value': value})
82
-
83
- line_df = pd.DataFrame(line_data)
84
-
85
- fig = px.line(line_df, x='Metric', y='Value', color='Model', title='Comparison of Models', line_dash_sequence=['solid'])
86
- fig.update_layout(showlegend=True)
87
- return fig
88
-
89
- def create_plot(df, x_values, y_values, models=None, title=None):
90
- if models is not None:
91
- df = df[df.index.isin(models)]
92
-
93
- # remove rows with NaN values
94
- df = df.dropna(subset=[x_values, y_values])
95
-
96
- plot_data = pd.DataFrame({
97
- 'Model': df.index,
98
- x_values: df[x_values],
99
- y_values: df[y_values],
100
- })
101
-
102
- plot_data['color'] = 'purple'
103
- fig = px.scatter(plot_data, x=x_values, y=y_values, color='color', hover_data=['Model'], trendline="ols")
104
-
105
- # If title is not provided, use x_values vs. y_values as the default title
106
- if title is None:
107
- title = x_values + " vs. " + y_values
108
-
109
- layout_args = dict(
110
- showlegend=False,
111
- xaxis_title=x_values,
112
- yaxis_title=y_values,
113
- xaxis=dict(),
114
- yaxis=dict(),
115
- title=title,
116
- height=500,
117
- width=1000,
118
- )
119
- fig.update_layout(**layout_args)
120
-
121
- # Add a dashed line at 0.25 for the y_values
122
- x_min = df[x_values].min()
123
- x_max = df[x_values].max()
124
-
125
- y_min = df[y_values].min()
126
- y_max = df[y_values].max()
127
-
128
- if x_values.startswith('MMLU'):
129
- fig.add_shape(
130
- type='line',
131
- x0=0.25, x1=0.25,
132
- y0=y_min, y1=y_max,
133
- line=dict(
134
- color='red',
135
- width=2,
136
- dash='dash'
137
- )
138
- )
139
-
140
- if y_values.startswith('MMLU'):
141
- fig.add_shape(
142
- type='line',
143
- x0=x_min, x1=x_max,
144
- y0=0.25, y1=0.25,
145
- line=dict(
146
- color='red',
147
- width=2,
148
- dash='dash'
149
- )
150
- )
151
-
152
- return fig
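A minimal usage sketch for the radar chart above. The model names and metric columns are illustrative placeholders; the only real requirement, visible in `df.loc[model_names, metrics]`, is a DataFrame indexed by model with metrics as columns.

```python
import pandas as pd

# Hypothetical leaderboard slice: models as the index, metrics as columns.
df = pd.DataFrame(
    {"MMLU_average": [0.62, 0.48], "moral_scenarios": [0.41, 0.27]},
    index=["model-a", "model-b"],
)
fig = create_radar_chart_unfilled(
    df, ["model-a", "model-b"], ["MMLU_average", "moral_scenarios"]
)
fig.show()
```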
 
 
spaces/Cyril666/ContourNet-ABI/setup.py DELETED
@@ -1,69 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2
- #!/usr/bin/env python
3
-
4
- import glob
5
- import os
6
-
7
- import torch
8
- from setuptools import find_packages
9
- from setuptools import setup
10
- from torch.utils.cpp_extension import CUDA_HOME
11
- from torch.utils.cpp_extension import CppExtension
12
- from torch.utils.cpp_extension import CUDAExtension
13
-
14
- requirements = ["torch", "torchvision"]
15
-
16
-
17
- def get_extensions():
18
- this_dir = os.path.dirname(os.path.abspath(__file__))
19
- extensions_dir = os.path.join(this_dir, "maskrcnn_benchmark", "csrc")
20
-
21
- main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
22
- source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
23
- source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))
24
-
25
- sources = main_file + source_cpu
26
- extension = CppExtension
27
-
28
- extra_compile_args = {"cxx": []}
29
- define_macros = []
30
-
31
- if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1":
32
- extension = CUDAExtension
33
- sources += source_cuda
34
- define_macros += [("WITH_CUDA", None)]
35
- extra_compile_args["nvcc"] = [
36
- "-DCUDA_HAS_FP16=1",
37
- "-D__CUDA_NO_HALF_OPERATORS__",
38
- "-D__CUDA_NO_HALF_CONVERSIONS__",
39
- "-D__CUDA_NO_HALF2_OPERATORS__",
40
- ]
41
-
42
- sources = [os.path.join(extensions_dir, s) for s in sources]
43
-
44
- include_dirs = [extensions_dir]
45
-
46
- ext_modules = [
47
- extension(
48
- "maskrcnn_benchmark._C",
49
- sources,
50
- include_dirs=include_dirs,
51
- define_macros=define_macros,
52
- extra_compile_args=extra_compile_args,
53
- )
54
- ]
55
-
56
- return ext_modules
57
-
58
-
59
- setup(
60
- name="maskrcnn_benchmark",
61
- version="0.1",
62
- author="fmassa",
63
- url="https://github.com/facebookresearch/maskrcnn-benchmark",
64
- description="object detection in pytorch",
65
- packages=find_packages(exclude=("configs", "tests",)),
66
- # install_requires=requirements,
67
- ext_modules=get_extensions(),
68
- cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
69
- )
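After building the extension above (for example with `pip install -e .`), a reasonable smoke test is simply importing the compiled module. This is a sketch, not part of the deleted file; the `_C` name comes from the `ext_modules` declaration in `setup()`.

```python
import torch  # noqa: F401 -- load torch's shared libraries before the extension

from maskrcnn_benchmark import _C  # compiled module declared in setup() above

print(_C)  # an ImportError here means the C++/CUDA build did not succeed
```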
 
 
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py DELETED
@@ -1,24 +0,0 @@
1
- import numpy as np
2
- from matplotlib import pyplot as plt
3
- from scipy.ndimage import filters
4
- from skimage import transform as skimage_transform
5
-
6
-
7
- def getAttMap(img, attMap, blur=True, overlap=True):
8
- attMap -= attMap.min()
9
- if attMap.max() > 0:
10
- attMap /= attMap.max()
11
- attMap = skimage_transform.resize(attMap, (img.shape[:2]), order=3, mode="constant")
12
- if blur:
13
- attMap = filters.gaussian_filter(attMap, 0.02 * max(img.shape[:2]))
14
- attMap -= attMap.min()
15
- attMap /= attMap.max()
16
- cmap = plt.get_cmap("jet")
17
- attMapV = cmap(attMap)
18
- attMapV = np.delete(attMapV, 3, 2)
19
- if overlap:
20
- attMap = (
21
- 1 * (1 - attMap**0.7).reshape(attMap.shape + (1,)) * img
22
- + (attMap**0.7).reshape(attMap.shape + (1,)) * attMapV
23
- )
24
- return attMap
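A minimal sketch of calling `getAttMap` with synthetic inputs. The shapes are assumptions, chosen to show that the attention map is resized internally to match the image.

```python
import numpy as np

img = np.random.rand(224, 224, 3)  # HxWxC image in [0, 1]
att = np.random.rand(7, 7)         # coarse attention map, resized internally
overlay = getAttMap(img, att, blur=True, overlap=True)
print(overlay.shape)               # (224, 224, 3)
```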
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py DELETED
@@ -1,611 +0,0 @@
1
- from enum import Enum
2
- from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union
3
-
4
- from fastapi._compat import (
5
- PYDANTIC_V2,
6
- CoreSchema,
7
- GetJsonSchemaHandler,
8
- JsonSchemaValue,
9
- _model_rebuild,
10
- general_plain_validator_function,
11
- )
12
- from fastapi.logger import logger
13
- from pydantic import AnyUrl, BaseModel, Field
14
- from typing_extensions import Annotated, Literal
15
- from typing_extensions import deprecated as typing_deprecated
16
-
17
- try:
18
- import email_validator
19
-
20
- assert email_validator # make autoflake ignore the unused import
21
- from pydantic import EmailStr
22
- except ImportError: # pragma: no cover
23
-
24
- class EmailStr(str): # type: ignore
25
- @classmethod
26
- def __get_validators__(cls) -> Iterable[Callable[..., Any]]:
27
- yield cls.validate
28
-
29
- @classmethod
30
- def validate(cls, v: Any) -> str:
31
- logger.warning(
32
- "email-validator not installed, email fields will be treated as str.\n"
33
- "To install, run: pip install email-validator"
34
- )
35
- return str(v)
36
-
37
- @classmethod
38
- def _validate(cls, __input_value: Any, _: Any) -> str:
39
- logger.warning(
40
- "email-validator not installed, email fields will be treated as str.\n"
41
- "To install, run: pip install email-validator"
42
- )
43
- return str(__input_value)
44
-
45
- @classmethod
46
- def __get_pydantic_json_schema__(
47
- cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
48
- ) -> JsonSchemaValue:
49
- return {"type": "string", "format": "email"}
50
-
51
- @classmethod
52
- def __get_pydantic_core_schema__(
53
- cls, source: Type[Any], handler: Callable[[Any], CoreSchema]
54
- ) -> CoreSchema:
55
- return general_plain_validator_function(cls._validate)
56
-
57
-
58
- class Contact(BaseModel):
59
- name: Optional[str] = None
60
- url: Optional[AnyUrl] = None
61
- email: Optional[EmailStr] = None
62
-
63
- if PYDANTIC_V2:
64
- model_config = {"extra": "allow"}
65
-
66
- else:
67
-
68
- class Config:
69
- extra = "allow"
70
-
71
-
72
- class License(BaseModel):
73
- name: str
74
- identifier: Optional[str] = None
75
- url: Optional[AnyUrl] = None
76
-
77
- if PYDANTIC_V2:
78
- model_config = {"extra": "allow"}
79
-
80
- else:
81
-
82
- class Config:
83
- extra = "allow"
84
-
85
-
86
- class Info(BaseModel):
87
- title: str
88
- summary: Optional[str] = None
89
- description: Optional[str] = None
90
- termsOfService: Optional[str] = None
91
- contact: Optional[Contact] = None
92
- license: Optional[License] = None
93
- version: str
94
-
95
- if PYDANTIC_V2:
96
- model_config = {"extra": "allow"}
97
-
98
- else:
99
-
100
- class Config:
101
- extra = "allow"
102
-
103
-
104
- class ServerVariable(BaseModel):
105
- enum: Annotated[Optional[List[str]], Field(min_length=1)] = None
106
- default: str
107
- description: Optional[str] = None
108
-
109
- if PYDANTIC_V2:
110
- model_config = {"extra": "allow"}
111
-
112
- else:
113
-
114
- class Config:
115
- extra = "allow"
116
-
117
-
118
- class Server(BaseModel):
119
- url: Union[AnyUrl, str]
120
- description: Optional[str] = None
121
- variables: Optional[Dict[str, ServerVariable]] = None
122
-
123
- if PYDANTIC_V2:
124
- model_config = {"extra": "allow"}
125
-
126
- else:
127
-
128
- class Config:
129
- extra = "allow"
130
-
131
-
132
- class Reference(BaseModel):
133
- ref: str = Field(alias="$ref")
134
-
135
-
136
- class Discriminator(BaseModel):
137
- propertyName: str
138
- mapping: Optional[Dict[str, str]] = None
139
-
140
-
141
- class XML(BaseModel):
142
- name: Optional[str] = None
143
- namespace: Optional[str] = None
144
- prefix: Optional[str] = None
145
- attribute: Optional[bool] = None
146
- wrapped: Optional[bool] = None
147
-
148
- if PYDANTIC_V2:
149
- model_config = {"extra": "allow"}
150
-
151
- else:
152
-
153
- class Config:
154
- extra = "allow"
155
-
156
-
157
- class ExternalDocumentation(BaseModel):
158
- description: Optional[str] = None
159
- url: AnyUrl
160
-
161
- if PYDANTIC_V2:
162
- model_config = {"extra": "allow"}
163
-
164
- else:
165
-
166
- class Config:
167
- extra = "allow"
168
-
169
-
170
- class Schema(BaseModel):
171
- # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-the-json-schema-core-vocabu
172
- # Core Vocabulary
173
- schema_: Optional[str] = Field(default=None, alias="$schema")
174
- vocabulary: Optional[str] = Field(default=None, alias="$vocabulary")
175
- id: Optional[str] = Field(default=None, alias="$id")
176
- anchor: Optional[str] = Field(default=None, alias="$anchor")
177
- dynamicAnchor: Optional[str] = Field(default=None, alias="$dynamicAnchor")
178
- ref: Optional[str] = Field(default=None, alias="$ref")
179
- dynamicRef: Optional[str] = Field(default=None, alias="$dynamicRef")
180
- defs: Optional[Dict[str, "SchemaOrBool"]] = Field(default=None, alias="$defs")
181
- comment: Optional[str] = Field(default=None, alias="$comment")
182
- # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-a-vocabulary-for-applying-s
183
- # A Vocabulary for Applying Subschemas
184
- allOf: Optional[List["SchemaOrBool"]] = None
185
- anyOf: Optional[List["SchemaOrBool"]] = None
186
- oneOf: Optional[List["SchemaOrBool"]] = None
187
- not_: Optional["SchemaOrBool"] = Field(default=None, alias="not")
188
- if_: Optional["SchemaOrBool"] = Field(default=None, alias="if")
189
- then: Optional["SchemaOrBool"] = None
190
- else_: Optional["SchemaOrBool"] = Field(default=None, alias="else")
191
- dependentSchemas: Optional[Dict[str, "SchemaOrBool"]] = None
192
- prefixItems: Optional[List["SchemaOrBool"]] = None
193
- # TODO: uncomment and remove below when deprecating Pydantic v1
194
- # It generates a list of schemas for tuples, from before prefixItems was available
195
- # items: Optional["SchemaOrBool"] = None
196
- items: Optional[Union["SchemaOrBool", List["SchemaOrBool"]]] = None
197
- contains: Optional["SchemaOrBool"] = None
198
- properties: Optional[Dict[str, "SchemaOrBool"]] = None
199
- patternProperties: Optional[Dict[str, "SchemaOrBool"]] = None
200
- additionalProperties: Optional["SchemaOrBool"] = None
201
- propertyNames: Optional["SchemaOrBool"] = None
202
- unevaluatedItems: Optional["SchemaOrBool"] = None
203
- unevaluatedProperties: Optional["SchemaOrBool"] = None
204
- # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-structural
205
- # A Vocabulary for Structural Validation
206
- type: Optional[str] = None
207
- enum: Optional[List[Any]] = None
208
- const: Optional[Any] = None
209
- multipleOf: Optional[float] = Field(default=None, gt=0)
210
- maximum: Optional[float] = None
211
- exclusiveMaximum: Optional[float] = None
212
- minimum: Optional[float] = None
213
- exclusiveMinimum: Optional[float] = None
214
- maxLength: Optional[int] = Field(default=None, ge=0)
215
-     minLength: Optional[int] = Field(default=None, ge=0)
-     pattern: Optional[str] = None
-     maxItems: Optional[int] = Field(default=None, ge=0)
-     minItems: Optional[int] = Field(default=None, ge=0)
-     uniqueItems: Optional[bool] = None
-     maxContains: Optional[int] = Field(default=None, ge=0)
-     minContains: Optional[int] = Field(default=None, ge=0)
-     maxProperties: Optional[int] = Field(default=None, ge=0)
-     minProperties: Optional[int] = Field(default=None, ge=0)
-     required: Optional[List[str]] = None
-     dependentRequired: Optional[Dict[str, Set[str]]] = None
-     # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-vocabularies-for-semantic-c
-     # Vocabularies for Semantic Content With "format"
-     format: Optional[str] = None
-     # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-the-conten
-     # A Vocabulary for the Contents of String-Encoded Data
-     contentEncoding: Optional[str] = None
-     contentMediaType: Optional[str] = None
-     contentSchema: Optional["SchemaOrBool"] = None
-     # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-basic-meta
-     # A Vocabulary for Basic Meta-Data Annotations
-     title: Optional[str] = None
-     description: Optional[str] = None
-     default: Optional[Any] = None
-     deprecated: Optional[bool] = None
-     readOnly: Optional[bool] = None
-     writeOnly: Optional[bool] = None
-     examples: Optional[List[Any]] = None
-     # Ref: OpenAPI 3.1.0: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#schema-object
-     # Schema Object
-     discriminator: Optional[Discriminator] = None
-     xml: Optional[XML] = None
-     externalDocs: Optional[ExternalDocumentation] = None
-     example: Annotated[
-         Optional[Any],
-         typing_deprecated(
-             "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, "
-             "although still supported. Use examples instead."
-         ),
-     ] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- # Ref: https://json-schema.org/draft/2020-12/json-schema-core.html#name-json-schema-documents
- # A JSON Schema MUST be an object or a boolean.
- SchemaOrBool = Union[Schema, bool]
-
-
- class Example(BaseModel):
-     summary: Optional[str] = None
-     description: Optional[str] = None
-     value: Optional[Any] = None
-     externalValue: Optional[AnyUrl] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class ParameterInType(Enum):
-     query = "query"
-     header = "header"
-     path = "path"
-     cookie = "cookie"
-
-
- class Encoding(BaseModel):
-     contentType: Optional[str] = None
-     headers: Optional[Dict[str, Union["Header", Reference]]] = None
-     style: Optional[str] = None
-     explode: Optional[bool] = None
-     allowReserved: Optional[bool] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class MediaType(BaseModel):
-     schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema")
-     example: Optional[Any] = None
-     examples: Optional[Dict[str, Union[Example, Reference]]] = None
-     encoding: Optional[Dict[str, Encoding]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class ParameterBase(BaseModel):
-     description: Optional[str] = None
-     required: Optional[bool] = None
-     deprecated: Optional[bool] = None
-     # Serialization rules for simple scenarios
-     style: Optional[str] = None
-     explode: Optional[bool] = None
-     allowReserved: Optional[bool] = None
-     schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema")
-     example: Optional[Any] = None
-     examples: Optional[Dict[str, Union[Example, Reference]]] = None
-     # Serialization rules for more complex scenarios
-     content: Optional[Dict[str, MediaType]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class Parameter(ParameterBase):
-     name: str
-     in_: ParameterInType = Field(alias="in")
-
-
- class Header(ParameterBase):
-     pass
-
-
- class RequestBody(BaseModel):
-     description: Optional[str] = None
-     content: Dict[str, MediaType]
-     required: Optional[bool] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class Link(BaseModel):
-     operationRef: Optional[str] = None
-     operationId: Optional[str] = None
-     parameters: Optional[Dict[str, Union[Any, str]]] = None
-     requestBody: Optional[Union[Any, str]] = None
-     description: Optional[str] = None
-     server: Optional[Server] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class Response(BaseModel):
-     description: str
-     headers: Optional[Dict[str, Union[Header, Reference]]] = None
-     content: Optional[Dict[str, MediaType]] = None
-     links: Optional[Dict[str, Union[Link, Reference]]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class Operation(BaseModel):
-     tags: Optional[List[str]] = None
-     summary: Optional[str] = None
-     description: Optional[str] = None
-     externalDocs: Optional[ExternalDocumentation] = None
-     operationId: Optional[str] = None
-     parameters: Optional[List[Union[Parameter, Reference]]] = None
-     requestBody: Optional[Union[RequestBody, Reference]] = None
-     # Using Any for Specification Extensions
-     responses: Optional[Dict[str, Union[Response, Any]]] = None
-     callbacks: Optional[Dict[str, Union[Dict[str, "PathItem"], Reference]]] = None
-     deprecated: Optional[bool] = None
-     security: Optional[List[Dict[str, List[str]]]] = None
-     servers: Optional[List[Server]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class PathItem(BaseModel):
-     ref: Optional[str] = Field(default=None, alias="$ref")
-     summary: Optional[str] = None
-     description: Optional[str] = None
-     get: Optional[Operation] = None
-     put: Optional[Operation] = None
-     post: Optional[Operation] = None
-     delete: Optional[Operation] = None
-     options: Optional[Operation] = None
-     head: Optional[Operation] = None
-     patch: Optional[Operation] = None
-     trace: Optional[Operation] = None
-     servers: Optional[List[Server]] = None
-     parameters: Optional[List[Union[Parameter, Reference]]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class SecuritySchemeType(Enum):
-     apiKey = "apiKey"
-     http = "http"
-     oauth2 = "oauth2"
-     openIdConnect = "openIdConnect"
-
-
- class SecurityBase(BaseModel):
-     type_: SecuritySchemeType = Field(alias="type")
-     description: Optional[str] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class APIKeyIn(Enum):
-     query = "query"
-     header = "header"
-     cookie = "cookie"
-
-
- class APIKey(SecurityBase):
-     type_: SecuritySchemeType = Field(default=SecuritySchemeType.apiKey, alias="type")
-     in_: APIKeyIn = Field(alias="in")
-     name: str
-
-
- class HTTPBase(SecurityBase):
-     type_: SecuritySchemeType = Field(default=SecuritySchemeType.http, alias="type")
-     scheme: str
-
-
- class HTTPBearer(HTTPBase):
-     scheme: Literal["bearer"] = "bearer"
-     bearerFormat: Optional[str] = None
-
-
- class OAuthFlow(BaseModel):
-     refreshUrl: Optional[str] = None
-     scopes: Dict[str, str] = {}
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class OAuthFlowImplicit(OAuthFlow):
-     authorizationUrl: str
-
-
- class OAuthFlowPassword(OAuthFlow):
-     tokenUrl: str
-
-
- class OAuthFlowClientCredentials(OAuthFlow):
-     tokenUrl: str
-
-
- class OAuthFlowAuthorizationCode(OAuthFlow):
-     authorizationUrl: str
-     tokenUrl: str
-
-
- class OAuthFlows(BaseModel):
-     implicit: Optional[OAuthFlowImplicit] = None
-     password: Optional[OAuthFlowPassword] = None
-     clientCredentials: Optional[OAuthFlowClientCredentials] = None
-     authorizationCode: Optional[OAuthFlowAuthorizationCode] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class OAuth2(SecurityBase):
-     type_: SecuritySchemeType = Field(default=SecuritySchemeType.oauth2, alias="type")
-     flows: OAuthFlows
-
-
- class OpenIdConnect(SecurityBase):
-     type_: SecuritySchemeType = Field(
-         default=SecuritySchemeType.openIdConnect, alias="type"
-     )
-     openIdConnectUrl: str
-
-
- SecurityScheme = Union[APIKey, HTTPBase, OAuth2, OpenIdConnect, HTTPBearer]
-
-
- class Components(BaseModel):
-     schemas: Optional[Dict[str, Union[Schema, Reference]]] = None
-     responses: Optional[Dict[str, Union[Response, Reference]]] = None
-     parameters: Optional[Dict[str, Union[Parameter, Reference]]] = None
-     examples: Optional[Dict[str, Union[Example, Reference]]] = None
-     requestBodies: Optional[Dict[str, Union[RequestBody, Reference]]] = None
-     headers: Optional[Dict[str, Union[Header, Reference]]] = None
-     securitySchemes: Optional[Dict[str, Union[SecurityScheme, Reference]]] = None
-     links: Optional[Dict[str, Union[Link, Reference]]] = None
-     # Using Any for Specification Extensions
-     callbacks: Optional[Dict[str, Union[Dict[str, PathItem], Reference, Any]]] = None
-     pathItems: Optional[Dict[str, Union[PathItem, Reference]]] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class Tag(BaseModel):
-     name: str
-     description: Optional[str] = None
-     externalDocs: Optional[ExternalDocumentation] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- class OpenAPI(BaseModel):
-     openapi: str
-     info: Info
-     jsonSchemaDialect: Optional[str] = None
-     servers: Optional[List[Server]] = None
-     # Using Any for Specification Extensions
-     paths: Optional[Dict[str, Union[PathItem, Any]]] = None
-     webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None
-     components: Optional[Components] = None
-     security: Optional[List[Dict[str, List[str]]]] = None
-     tags: Optional[List[Tag]] = None
-     externalDocs: Optional[ExternalDocumentation] = None
-
-     if PYDANTIC_V2:
-         model_config = {"extra": "allow"}
-
-     else:
-
-         class Config:
-             extra = "allow"
-
-
- _model_rebuild(Schema)
- _model_rebuild(Operation)
- _model_rebuild(Encoding)
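
Side note (not part of the diff): a minimal sketch of how the deleted models above could be exercised, assuming the file is importable as a local module named `openapi_models` (a hypothetical name) and that Pydantic v2 is installed. Aliased fields such as `schema`, `in`, `type`, and `$ref` only appear under their alias when serializing with `by_alias=True`:

    from openapi_models import Info, OpenAPI, Operation, PathItem, Response

    # Build a tiny but valid OpenAPI 3.1 document from the models above.
    doc = OpenAPI(
        openapi="3.1.0",
        info=Info(title="Demo API", version="0.1.0"),
        paths={
            "/ping": PathItem(
                get=Operation(responses={"200": Response(description="OK")})
            )
        },
    )
    # exclude_none drops the many optional fields left unset.
    print(doc.model_dump(by_alias=True, exclude_none=True))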
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css DELETED
@@ -1 +0,0 @@
- .block.svelte-90oupt{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);border-color:var(--block-border-color);border-radius:var(--block-radius);background:var(--block-background-fill);width:100%;line-height:var(--line-sm)}.block.border_focus.svelte-90oupt{border-color:var(--color-accent)}.padded.svelte-90oupt{padding:var(--block-padding)}.hidden.svelte-90oupt{display:none}.hide-container.svelte-90oupt{margin:0;box-shadow:none;--block-border-width:0;background:transparent;padding:0;overflow:visible}div.svelte-e8n7p6{margin-bottom:var(--spacing-lg);color:var(--block-info-text-color);font-weight:var(--block-info-text-weight);font-size:var(--block-info-text-size);line-height:var(--line-sm)}span.has-info.svelte-1gfkn6j{margin-bottom:var(--spacing-xs)}span.svelte-1gfkn6j:not(.has-info){margin-bottom:var(--spacing-lg)}span.svelte-1gfkn6j{display:inline-block;position:relative;z-index:var(--layer-4);border:solid var(--block-title-border-width) var(--block-title-border-color);border-radius:var(--block-title-radius);background:var(--block-title-background-fill);padding:var(--block-title-padding);color:var(--block-title-text-color);font-weight:var(--block-title-text-weight);font-size:var(--block-title-text-size);line-height:var(--line-sm)}.hide.svelte-1gfkn6j{margin:0;height:0}div.svelte-1mwvhlq{display:inline-flex;align-items:center;z-index:var(--layer-2);box-shadow:var(--block-label-shadow);border:var(--block-label-border-width) solid var(--border-color-primary);border-top:none;border-left:none;border-radius:var(--block-label-radius);background:var(--block-label-background-fill);padding:var(--block-label-padding);pointer-events:none;color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}.gr-group div.svelte-1mwvhlq{border-top-left-radius:0}div.float.svelte-1mwvhlq{position:absolute;top:var(--block-label-margin);left:var(--block-label-margin)}div.svelte-1mwvhlq:not(.float){position:static;margin-top:var(--block-label-margin);margin-left:var(--block-label-margin)}.hide.svelte-1mwvhlq{height:0}span.svelte-1mwvhlq{opacity:.8;margin-right:var(--size-2);width:calc(var(--block-label-text-size) - 1px);height:calc(var(--block-label-text-size) - 1px)}.hide-label.svelte-1mwvhlq{box-shadow:none;border-width:0;background:transparent;overflow:visible}button.svelte-1030q2h{display:flex;justify-content:center;align-items:center;gap:1px;z-index:var(--layer-1);box-shadow:var(--shadow-drop);border:1px solid var(--button-secondary-border-color);border-radius:var(--radius-sm);background:var(--background-fill-primary);padding:2px;color:var(--block-label-text-color)}button.svelte-1030q2h:hover{cursor:pointer;border:2px solid var(--button-secondary-border-color-hover);padding:1px;color:var(--block-label-text-color)}span.svelte-1030q2h{padding:0 1px;font-size:10px}div.svelte-1030q2h{padding:2px;width:14px;height:14px}.pending.svelte-1030q2h{animation:svelte-1030q2h-flash .5s infinite}@keyframes svelte-1030q2h-flash{0%{opacity:.5}50%{opacity:1}to{opacity:.5}}.empty.svelte-lk9eg8{display:flex;justify-content:center;align-items:center;margin-top:calc(0px - var(--size-6));height:var(--size-full)}.icon.svelte-lk9eg8{opacity:.5;height:var(--size-5);color:var(--body-text-color)}.small.svelte-lk9eg8{min-height:calc(var(--size-32) - 20px)}.large.svelte-lk9eg8{min-height:calc(var(--size-64) - 20px)}.unpadded_box.svelte-lk9eg8{margin-top:0}.small_parent.svelte-lk9eg8{min-height:100%!important}.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)}button.svelte-1e89no8{display:inline-flex;justify-content:center;align-items:center;transition:var(--button-transition);box-shadow:var(--button-shadow);padding:var(--size-0-5) var(--size-2);text-align:center}button.svelte-1e89no8:hover,button[disabled].svelte-1e89no8{box-shadow:var(--button-shadow-hover)}button.svelte-1e89no8:active{box-shadow:var(--button-shadow-active)}button[disabled].svelte-1e89no8{opacity:.5;filter:grayscale(30%);cursor:not-allowed}.hidden.svelte-1e89no8{display:none}.primary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-primary-border-color);background:var(--button-primary-background-fill);color:var(--button-primary-text-color)}.primary.svelte-1e89no8:hover,.primary[disabled].svelte-1e89no8{border-color:var(--button-primary-border-color-hover);background:var(--button-primary-background-fill-hover);color:var(--button-primary-text-color-hover)}.secondary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-secondary-border-color);background:var(--button-secondary-background-fill);color:var(--button-secondary-text-color)}.secondary.svelte-1e89no8:hover,.secondary[disabled].svelte-1e89no8{border-color:var(--button-secondary-border-color-hover);background:var(--button-secondary-background-fill-hover);color:var(--button-secondary-text-color-hover)}.stop.svelte-1e89no8{border:var(--button-border-width) solid var(--button-cancel-border-color);background:var(--button-cancel-background-fill);color:var(--button-cancel-text-color)}.stop.svelte-1e89no8:hover,.stop[disabled].svelte-1e89no8{border-color:var(--button-cancel-border-color-hover);background:var(--button-cancel-background-fill-hover);color:var(--button-cancel-text-color-hover)}.sm.svelte-1e89no8{border-radius:var(--button-small-radius);padding:var(--button-small-padding);font-weight:var(--button-small-text-weight);font-size:var(--button-small-text-size)}.lg.svelte-1e89no8{border-radius:var(--button-large-radius);padding:var(--button-large-padding);font-weight:var(--button-large-text-weight);font-size:var(--button-large-text-size)}
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html DELETED
@@ -1,84 +0,0 @@
- <!doctype html>
- <html
-   lang="en"
-   style="
-     margin: 0;
-     padding: 0;
-     min-height: 100%;
-     display: flex;
-     flex-direction: column;
-   "
- >
-   <head>
-     <meta charset="utf-8" />
-     <meta
-       name="viewport"
-       content="width=device-width, initial-scale=1, shrink-to-fit=no, maximum-scale=1"
-     />
-
-
-     <meta property="og:url" content="https://gradio.app/" />
-     <meta property="og:type" content="website" />
-     <meta property="og:image" content="{{ config['thumbnail'] or '' }}" />
-     <meta property="og:title" content="{{ config['title'] or '' }}" />
-     <meta
-       property="og:description"
-       content="{{ config['simple_description'] or '' }}"
-     />
-     <meta name="twitter:card" content="summary_large_image" />
-     <meta name="twitter:creator" content="@teamGradio" />
-     <meta name="twitter:title" content="{{ config['title'] or '' }}" />
-     <meta
-       name="twitter:description"
-       content="{{ config['simple_description'] or '' }}"
-     />
-     <meta name="twitter:image" content="{{ config['thumbnail'] or '' }}" />
-
-     <script>
-       window.__gradio_mode__ = "app";
-     </script>
-
-     <script>window.gradio_config = {{ config | toorjson }};</script>
-
-     <link rel="preconnect" href="https://fonts.googleapis.com" />
-     <link
-       rel="preconnect"
-       href="https://fonts.gstatic.com"
-       crossorigin="anonymous"
-     />
-     <script
-       src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.6/iframeResizer.contentWindow.min.js"
-       async
-     ></script>
-     <script type="module" crossorigin src="./assets/index-3370be2a.js"></script>
-
-   </head>
-
-   <body
-     style="
-       width: 100%;
-       margin: 0;
-       padding: 0;
-       display: flex;
-       flex-direction: column;
-       flex-grow: 1;
-     "
-   >
-     <gradio-app
-       control_page_title="true"
-       embed="false"
-       eager="true"
-       style="display: flex; flex-direction: column; flex-grow: 1"
-     >
-     </gradio-app>
-     <script>
-       const ce = document.getElementsByTagName("gradio-app");
-       if (ce[0]) {
-         ce[0].addEventListener("domchange", () => {
-           document.body.style.padding = "0";
-         });
-         document.body.style.padding = "0";
-       }
-     </script>
-   </body>
- </html>
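
Side note (not part of the diff): the deleted template above uses Jinja placeholders; in gradio, `toorjson` is a custom filter registered at serve time to JSON-encode the app config. A minimal sketch (an assumption-laden stand-in, not gradio's actual serving code) of rendering a template like this with a hypothetical config dict:

    import json
    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader("templates/frontend"))
    # Stand-in for gradio's orjson-based "toorjson" filter.
    env.filters["toorjson"] = lambda value: json.dumps(value)
    template = env.get_template("index.html")
    html = template.render(
        config={"title": "Demo", "thumbnail": None, "simple_description": ""}
    )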
spaces/DRAGSclub/README/README.md DELETED
@@ -1,10 +0,0 @@
- ---
- title: README
- emoji: 🔥
- colorFrom: purple
- colorTo: indigo
- sdk: static
- pinned: false
- ---
-
- Edit this `README.md` markdown file to author your organization card 🔥
spaces/Darkk88/medium-GPT4/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/ingen51/DialoGPT-medium-GPT4").launch()
spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py DELETED
@@ -1,8 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- # flake8: noqa
- from . import audio, audio_dataset
spaces/Deepak107/Bottle_images/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Bottle Images
- emoji: 🐢
- colorFrom: green
- colorTo: blue
- sdk: gradio
- sdk_version: 3.2
- app_file: app.py
- pinned: false
- license: afl-3.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Duskfallcrew/textual-inversion-training/app.py DELETED
@@ -1,559 +0,0 @@
- import gradio as gr
- import os
- from pathlib import Path
- import argparse
- import shutil
- # from train_dreambooth import run_training
- from textual_inversion import run_training
- from convertosd import convert
- from PIL import Image
- from slugify import slugify
- import requests
- import torch
- import zipfile
- import tarfile
- import urllib.parse
- import gc
- from diffusers import StableDiffusionPipeline
- from huggingface_hub import snapshot_download
-
-
- is_spaces = True if "SPACE_ID" in os.environ else False
- #is_shared_ui = True if "IS_SHARED_UI" in os.environ else False
- if(is_spaces):
-     is_shared_ui = True if ("lvkaokao/textual-inversion-training" in os.environ['SPACE_ID'] or "Intel/textual-inversion-training" in os.environ['SPACE_ID']) else False
- else:
-     is_shared_ui = False
-
- css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
- .duplicate-button img{margin: 0}
- '''
- maximum_concepts = 1
-
- #Pre download the files
- '''
- model_v1_4 = snapshot_download(repo_id="CompVis/stable-diffusion-v1-4")
- #model_v1_5 = snapshot_download(repo_id="runwayml/stable-diffusion-v1-5")
- model_v1_5 = snapshot_download(repo_id="stabilityai/stable-diffusion-2")
- model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-base", revision="fp16")
- safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
- '''
- model_v1_4 = "CompVis/stable-diffusion-v1-4"
- model_v1_5 = "stabilityai/stable-diffusion-2"
- model_v2_512 = "stabilityai/stable-diffusion-2-base"
-
- model_to_load = model_v1_4
-
-
- with zipfile.ZipFile("mix.zip", 'r') as zip_ref:
-     zip_ref.extractall(".")
-
- def swap_text(option):
-     mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
-     if(option == "object"):
-         instance_prompt_example = "cttoy"
-         freeze_for = 30
-         return [f"You are going to train `object`(s); upload 5-10 images of each object you plan to train on, from different angles/perspectives. {mandatory_liability}", '''<img src="file/cat-toy.png" />''', f"You should name your concept with a unique made-up word that has a low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)]
-     elif(option == "person"):
-         instance_prompt_example = "julcto"
-         freeze_for = 70
-         return [f"You are going to train `person`(s); upload 10-20 images of each person you plan to train on, from different angles/perspectives. {mandatory_liability}", '''<img src="file/person.png" />''', f"You should name your concept with a unique made-up word that has a low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=True)]
-     elif(option == "style"):
-         instance_prompt_example = "trsldamrl"
-         freeze_for = 10
-         return [f"You are going to train a `style`; upload 10-20 images of the style you plan to train on, and name the files with the words you would like. {mandatory_liability}", '''<img src="file/trsl_style.png" />''', f"You should name your concept with a unique made-up word that has a low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)]
-
- def swap_base_model(selected_model):
-     global model_to_load
-     if(selected_model == "v1-4"):
-         model_to_load = model_v1_4
-     elif(selected_model == "v1-5"):
-         model_to_load = model_v1_5
-     else:
-         model_to_load = model_v2_512
-
- def count_files(*inputs):
-     file_counter = 0
-     concept_counter = 0
-     for i, input in enumerate(inputs):
-         if(i < maximum_concepts-1):
-             files = inputs[i]
-             if(files):
-                 concept_counter+=1
-                 file_counter+=len(files)
-     uses_custom = inputs[-1]
-     type_of_thing = inputs[-4]
-     if(uses_custom):
-         Training_Steps = int(inputs[-3])
-     else:
-         Training_Steps = file_counter*200
-         if(Training_Steps > 2400):
-             Training_Steps=2400
-         elif(Training_Steps < 1400):
-             Training_Steps=1400
-     if(is_spaces):
-         summary_sentence = f'''The training should take around 24 hours for 1000 steps using the default free CPU.<br><br>'''
-     else:
-         summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.<br><br>'''
-
-     return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
-
- def update_steps(*files_list):
-     file_counter = 0
-     for i, files in enumerate(files_list):
-         if(files):
-             file_counter+=len(files)
-     return(gr.update(value=file_counter*200))
-
- def pad_image(image):
-     w, h = image.size
-     if w == h:
-         return image
-     elif w > h:
-         new_image = Image.new(image.mode, (w, w), (0, 0, 0))
-         new_image.paste(image, (0, (w - h) // 2))
-         return new_image
-     else:
-         new_image = Image.new(image.mode, (h, h), (0, 0, 0))
-         new_image.paste(image, ((h - w) // 2, 0))
-         return new_image
-
- def train(*inputs):
-     if is_shared_ui:
-         raise gr.Error("This Space only works in duplicated instances")
-
-     torch.cuda.empty_cache()
-     if 'pipe' in globals():
-         global pipe, pipe_is_set
-         del pipe
-         pipe_is_set = False
-         gc.collect()
-
-     if os.path.exists("output_model"): shutil.rmtree('output_model')
-     if os.path.exists("concept_images"): shutil.rmtree('concept_images')
-     if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
-     if os.path.exists("model.ckpt"): os.remove("model.ckpt")
-     if os.path.exists("hastrained.success"): os.remove("hastrained.success")
-     file_counter = 0
-     print(inputs)
-
-     os.makedirs('concept_images', exist_ok=True)
-     files = inputs[maximum_concepts*3]
-     init_word = inputs[maximum_concepts*2]
-     prompt = inputs[maximum_concepts]
-     if(prompt == "" or prompt == None):
-         raise gr.Error("You forgot to define your concept prompt")
-
-     for j, file_temp in enumerate(files):
-         file = Image.open(file_temp.name)
-         image = pad_image(file)
-         image = image.resize((512, 512))
-         extension = file_temp.name.split(".")[1]
-         image = image.convert('RGB')
-         image.save(f'concept_images/{j+1}.jpg', format="JPEG", quality = 100)
-         file_counter += 1
-
-
-     os.makedirs('output_model',exist_ok=True)
-     uses_custom = inputs[-1]
-     type_of_thing = inputs[-4]
-     remove_attribution_after = inputs[-6]
-     experimental_face_improvement = inputs[-9]
-     which_model = inputs[-10]
-     if(uses_custom):
-         Training_Steps = int(inputs[-3])
-     else:
-         Training_Steps = 1000
-
-     print(os.listdir("concept_images"))
-
-     args_general = argparse.Namespace(
-         pretrained_model_name_or_path = model_to_load,
-         train_data_dir="concept_images",
-         learnable_property=type_of_thing,
-         placeholder_token=prompt,
-         initializer_token=init_word,
-         resolution=512,
-         train_batch_size=1,
-         gradient_accumulation_steps=2,
-         use_bf16=True,
-         max_train_steps=Training_Steps,
-         learning_rate=5.0e-4,
-         scale_lr=True,
-         lr_scheduler="constant",
-         lr_warmup_steps=0,
-         output_dir="output_model",
-     )
-     print("Starting single training...")
-     lock_file = open("intraining.lock", "w")
-     lock_file.close()
-     run_training(args_general)
-
-     gc.collect()
-     torch.cuda.empty_cache()
-     if(which_model in ["v1-5"]):
-         print("Adding Safety Checker to the model...")
-         shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor")
-         shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker")
-         shutil.copy(f"model_index.json", "output_model/model_index.json")
-
-     if(not remove_attribution_after):
-         print("Archiving model file...")
-         with tarfile.open("diffusers_model.tar", "w") as tar:
-             tar.add("output_model", arcname=os.path.basename("output_model"))
-         if os.path.exists("intraining.lock"): os.remove("intraining.lock")
-         trained_file = open("hastrained.success", "w")
-         trained_file.close()
-         print(os.listdir("output_model"))
-         print("Training completed!")
-         return [
-             gr.update(visible=True, value=["diffusers_model.tar"]), #result
-             gr.update(visible=True), #try_your_model
-             gr.update(visible=True), #push_to_hub
-             gr.update(visible=True), #convert_button
-             gr.update(visible=False), #training_ongoing
-             gr.update(visible=True) #completed_training
-         ]
-     else:
-         hf_token = inputs[-5]
-         model_name = inputs[-7]
-         where_to_upload = inputs[-8]
-         push(model_name, where_to_upload, hf_token, which_model, True)
-         hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
-         headers = { "authorization" : f"Bearer {hf_token}"}
-         body = {'flavor': 'cpu-basic'}
-         requests.post(hardware_url, json = body, headers=headers)
-
- import time
- pipe_is_set = False
- def generate(prompt, steps):
-
-     print("prompt: ", prompt)
-     print("steps: ", steps)
-
-     torch.cuda.empty_cache()
-     from diffusers import StableDiffusionPipeline
-     global pipe_is_set
-     if(not pipe_is_set):
-         global pipe
-         if torch.cuda.is_available():
-             pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
-             pipe = pipe.to("cuda")
-         else:
-             pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float)
-         pipe_is_set = True
-
-     start_time = time.time()
-     image = pipe(prompt, num_inference_steps=steps, guidance_scale=7.5).images[0]
-     print("cost: ", time.time() - start_time)
-     return(image)
-
- def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
-
-     if(not os.path.exists("model.ckpt")):
-         convert("output_model", "model.ckpt")
-     from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
-     from huggingface_hub import create_repo
-     model_name_slug = slugify(model_name)
-     api = HfApi()
-     your_username = api.whoami(token=hf_token)["name"]
-     if(where_to_upload == "My personal profile"):
-         model_id = f"{your_username}/{model_name_slug}"
-     else:
-         model_id = f"sd-dreambooth-library/{model_name_slug}"
-         headers = {"Authorization" : f"Bearer {hf_token}", "Content-Type": "application/json"}
-         response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
-     images_upload = os.listdir("concept_images")
-     image_string = ""
-     instance_prompt_list = []
-     previous_instance_prompt = ''
-     for i, image in enumerate(images_upload):
-         instance_prompt = image.split("_")[0]
-         if(instance_prompt != previous_instance_prompt):
-             title_instance_prompt_string = instance_prompt
-             instance_prompt_list.append(instance_prompt)
-         else:
-             title_instance_prompt_string = ''
-         previous_instance_prompt = instance_prompt
-         image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
- {image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})'''
-     readme_text = f'''---
- license: creativeml-openrail-m
- tags:
- - text-to-image
- ---
- ### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
-
- You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
-
- Sample pictures of:
- {image_string}
- '''
-     #Save the readme to a file
-     readme_file = open("model.README.md", "w")
-     readme_file.write(readme_text)
-     readme_file.close()
-     #Save the token identifier to a file
-     text_file = open("token_identifier.txt", "w")
-     text_file.write(', '.join(instance_prompt_list))
-     text_file.close()
-     try:
-         create_repo(model_id,private=True, token=hf_token)
-     except:
-         import time
-         epoch_time = str(int(time.time()))
-         create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
-     operations = [
-         CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
-         CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
-         CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
-     ]
-     api.create_commit(
-         repo_id=model_id,
-         operations=operations,
-         commit_message=f"Upload the model {model_name}",
-         token=hf_token
-     )
-     api.upload_folder(
-         folder_path="output_model",
-         repo_id=model_id,
-         token=hf_token
-     )
-     api.upload_folder(
-         folder_path="concept_images",
-         path_in_repo="concept_images",
-         repo_id=model_id,
-         token=hf_token
-     )
-     if is_spaces:
-         if(not comes_from_automated):
-             extra_message = "Don't forget to remove the GPU attribution after you play with it."
-         else:
-             extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
-         api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished training in the Dreambooth Training Space!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
-
-     return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
-
- def convert_to_ckpt():
-     convert("output_model", "model.ckpt")
-     return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
-
- def check_status(top_description):
-     print('=='*20)
-     print(os.listdir("./"))
-
-     if os.path.exists("hastrained.success"):
-         if is_spaces:
-             update_top_tag = gr.update(value=f'''
-             <div class="gr-prose" style="max-width: 80%">
-             <h2>Your model has finished training ✅</h2>
-             <p>Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or by pushing it to the Hugging Face Hub). Once you are done, your model is safe, and you don't want to train a new one, go to the <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}">settings page</a> and downgrade your Space to a CPU Basic</p>
-             </div>
-             ''')
-         else:
-             update_top_tag = gr.update(value=f'''
-             <div class="gr-prose" style="max-width: 80%">
-             <h2>Your model has finished training ✅</h2>
-             <p>Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or by pushing it to the Hugging Face Hub).</p>
-             </div>
-             ''')
-         show_outputs = True
-     elif os.path.exists("intraining.lock"):
-         update_top_tag = gr.update(value='''
-         <div class="gr-prose" style="max-width: 80%">
-         <h2>Don't worry, your model is still training! ⌛</h2>
-         <p>You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above here to check the training status. Once training is done, reload this tab to interact with your model</p>
-         </div>
-         ''')
-         show_outputs = False
-     else:
-         update_top_tag = gr.update(value=top_description)
-         show_outputs = False
-     if os.path.exists("diffusers_model.tar"):
-         update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"])
-     else:
-         update_files_tag = gr.update(visible=show_outputs)
-     return [
-         update_top_tag, #top_description
-         gr.update(visible=show_outputs), #try_your_model
-         gr.update(visible=show_outputs), #push_to_hub
-         update_files_tag, #result
-         gr.update(visible=show_outputs), #convert_button
-     ]
-
- def checkbox_swap(checkbox):
-     return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)]
-
- with gr.Blocks(css=css) as demo:
-     with gr.Box():
-         if is_shared_ui:
-             top_description = gr.HTML(f'''
-             <div class="gr-prose" style="max-width: 80%">
-             <h2>Attention - This Space doesn't work in this shared UI</h2>
-             <p>For it to work, you can either run locally or duplicate the Space and run it on your own profile using the free CPU or a (paid) private T4 GPU for training. CPU training takes a long time while each T4 costs US$0.60/h, which should cost < US$1 to train most models using default settings!&nbsp;&nbsp;<a class="duplicate-button" style="display:inline-block" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p>
-             <img class="instruction" src="file/duplicate.png">
-             <img class="arrow" src="file/arrow.png" />
-             </div>
-             ''')
-         elif(is_spaces):
-             top_description = gr.HTML(f'''
-             <div class="gr-prose" style="max-width: 80%">
-             <h2>You have successfully duplicated the Textual Inversion Training Space 🎉</h2>
-             <p>If you want to use CPU, it will take a long time to run the training below. If you want to use GPU, please get this ready: <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}/settings">attribute a T4 GPU to it (via the Settings tab)</a> and run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.</p>
-             </div>
-             ''')
-         else:
-             top_description = gr.HTML(f'''
-             <div class="gr-prose" style="max-width: 80%">
-             <h2>You have successfully cloned the Dreambooth Training Space locally 🎉</h2>
-             <p>Do a <code>pip install -r requirements-local.txt</code></p>
-             </div>
-             ''')
-     gr.Markdown("# Textual Inversion Training UI 💭")
-     gr.Markdown("Customize Stable Diffusion by training it on a new concept. This Space is based on [Intel® Neural Compressor](https://github.com/intel/neural-compressor/tree/master/examples/pytorch/diffusion_model/diffusers/textual_inversion) with [🧨 diffusers](https://github.com/huggingface/diffusers)")
-
-     with gr.Row() as what_are_you_training:
-         type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
-         base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-4", "v1-5", "v2-512"], value="v1-4", interactive=True)
-
-     #Very hacky approach to emulate dynamically created Gradio components
-     with gr.Row() as upload_your_concept:
-         with gr.Column():
-             thing_description = gr.Markdown("You are going to train an `object`; please upload 1-5 images of the object to teach the new concept to Stable Diffusion, for example:")
-             thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - training can take longer but faces may improve", visible=False, value=False)
-             thing_image_example = gr.HTML('''<img src="file/dicoo-toy.png" class="aligncenter" height="128" width="128" />''')
-             things_naming = gr.Markdown("You should name your concept with a unique made-up word that never appears in the model vocab (e.g.: `dicoo*` here). **The initial word** is used to initialize the concept word embedding, which makes training easier (e.g.: `toy` here). Images will be automatically cropped to 512x512.")
-
-         with gr.Column():
-             file_collection = []
-             concept_collection = []
-             init_collection = []
-             buttons_collection = []
-             delete_collection = []
-             is_visible = []
-
-             row = [None] * maximum_concepts
-             for x in range(maximum_concepts):
-                 ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
-                 if(x == 0):
-                     visible = True
-                     is_visible.append(gr.State(value=True))
-                 else:
-                     visible = False
-                     is_visible.append(gr.State(value=False))
-
-                 file_collection.append(gr.File(label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
-                 with gr.Column(visible=visible) as row[x]:
-                     concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept word - use a unique, made up word to avoid collisions'''))
-                     init_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} initial word - to init the concept embedding'''))
-                     with gr.Row():
-                         if(x < maximum_concepts-1):
-                             buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
-                         if(x > 0):
-                             delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
-             counter_add = 1
-             for button in buttons_collection:
-                 if(counter_add < len(buttons_collection)):
-                     button.click(lambda:
-                         [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
-                         None,
-                         [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
-                 else:
-                     button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
-                 counter_add += 1
-
-             counter_delete = 1
-             for delete_button in delete_collection:
-                 if(counter_delete < len(delete_collection)+1):
-                     delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
-                 counter_delete += 1
-
-     with gr.Accordion("Custom Settings", open=False):
-         swap_auto_calculated = gr.Checkbox(label="Use custom settings")
-         gr.Markdown("The default number of steps is 1000. If your results aren't what you wanted, the model may be underfitting and you may need more steps.")
-         steps = gr.Number(label="How many steps", value=1000)
-         # need to remove
-         perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30, visible=False)
-         # perc_txt_encoder = 30
-
-     with gr.Box(visible=False) as training_summary:
-         training_summary_text = gr.HTML("", visible=False, label="Training Summary")
-         is_advanced_visible = True if is_spaces else False
-         training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=False, visible=is_advanced_visible)
-         training_summary_model_name = gr.Textbox(label="Name of your model", visible=False)
-         training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to", visible=False)
-         training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=False)
-         training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=False)
-
-     train_btn = gr.Button("Start Training")
-
-     training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU after training` option, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
-
-     #Post-training UI
-     completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
-
-     with gr.Row():
-         with gr.Box(visible=True) as try_your_model:
-             gr.Markdown("## Try your model")
-             prompt = gr.Textbox(label="Type your prompt")
-             result_image = gr.Image()
-             inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
-             generate_button = gr.Button("Generate Image")
-
-         with gr.Box(visible=False) as push_to_hub:
-             gr.Markdown("## Push to Hugging Face Hub")
-             model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
-             where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
-             gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
-             hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
-
-             push_button = gr.Button("Push to the Hub")
-
-     result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
-     success_message_upload = gr.Markdown(visible=False)
-     convert_button = gr.Button("Convert to CKPT", visible=False)
-
-     #Swap the examples and the % of text encoder trained depending if it is an object, person or style
-     type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
-
-     #Swap the base model
-     base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
-
-     #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
-     for file in file_collection:
-         #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
-         file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
-     steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-     perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
-     #Give more options if the user wants to finish everything after training
-     if(is_spaces):
-         training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
-     #Add a message for while it is in training
-     train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
-
-     #The main train function
-     train_btn.click(fn=train, inputs=is_visible+concept_collection+init_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
-
-     #Button to generate an image from your trained model after training
-     print('=='*20)
-     print(prompt)
-     print(inference_steps)
-     generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
-
-     #Button to push the model to the Hugging Face Hub
-     push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
-     #Button to convert the model to ckpt format
-     convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
-
-     #Checks if the training is running
-     demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
-
- demo.queue(default_enabled=False).launch(debug=True)
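
Side note (not part of the diff): the `pad_image` helper above letterboxes a non-square image onto a black square canvas before the 512x512 resize, so concept images are padded rather than distorted. A standalone sketch of the same logic with a hypothetical 640x360 input:

    from PIL import Image

    img = Image.new("RGB", (640, 360), (200, 50, 50))    # stand-in for an uploaded image
    canvas = Image.new(img.mode, (640, 640), (0, 0, 0))  # square canvas, black padding
    canvas.paste(img, (0, (640 - 360) // 2))             # center vertically, as pad_image does
    out = canvas.resize((512, 512))
    assert out.size == (512, 512)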