parquet-converter committed on
Commit bb6ec4f · 1 Parent(s): aaae815

Update parquet files (step 18 of 397)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md +0 -29
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md +0 -151
  3. spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md +0 -75
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md +0 -109
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md +0 -150
  7. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md +0 -141
  8. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md +0 -106
  9. spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md +0 -200
  10. spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md +0 -98
  11. spaces/2023Liu2023/bingo/src/components/voice.tsx +0 -52
  12. spaces/801artistry/RVC801/infer/lib/audio.py +0 -197
  13. spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py +0 -19
  14. spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py +0 -129
  15. spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py +0 -686
  16. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py +0 -167
  17. spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py +0 -63
  18. spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py +0 -189
  19. spaces/AIatUIUC/CodeLATS/generators/parse.py +0 -49
  20. spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py +0 -293
  21. spaces/AdithyaSNair/Dog_breed_predictor/README.md +0 -12
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js +0 -20
  23. spaces/Alesteba/NeRF_ficus-pxl/config.py +0 -16
  24. spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h +0 -172
  25. spaces/Amon1/ChatGPTForAcadamic/crazy_functions/总结word文档.py +0 -127
  26. spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py +0 -60
  27. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py +0 -1020
  28. spaces/Awiny/Image2Paragraph/models/grit_model.py +0 -27
  29. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py +0 -9
  30. spaces/Bart92/RVC_HF/go-applio-manager-recode.bat +0 -322
  31. spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py +0 -443
  32. spaces/BetterAPI/BetterChat_new/src/hooks.server.ts +0 -37
  33. spaces/BillBojangeles2000/WikiGPT/README.md +0 -13
  34. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py +0 -10
  35. spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h +0 -116
  36. spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h +0 -22
  37. spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h +0 -58
  38. spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py +0 -153
  39. spaces/CVPR/WALT/configs/walt/walt_people.py +0 -80
  40. spaces/CVPR/WALT/mmdet/datasets/builder.py +0 -143
  41. spaces/Chujinze/Res2Net/README.md +0 -12
  42. spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js +0 -842
  43. spaces/CobaltZvc/Hyper_Bot/style.css +0 -28
  44. spaces/CofAI/netlist/index.html +0 -12
  45. spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h +0 -38
  46. spaces/Curranj/GPT-SQL/README.md +0 -12
  47. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py +0 -471
  48. spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py +0 -50
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py +0 -110
  50. spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py +0 -84
spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md DELETED
@@ -1,29 +0,0 @@
-
- <h1>How to Free Download ACDSee for Windows 10</h1>
- <p>If you are looking for a powerful and easy-to-use photo editing software, you might want to try ACDSee. ACDSee is a popular program that allows you to organize, edit, and share your photos with ease. It has many features and tools that can help you enhance your images and create stunning results.</p>
- <h2>free download acdsee for windows 10</h2><br /><p><b><b>DOWNLOAD</b> &#9658;&#9658;&#9658; <a href="https://byltly.com/2uKxji">https://byltly.com/2uKxji</a></b></p><br /><br />
- <p>But how can you get ACDSee for Windows 10? Is there a way to free download it? The answer is yes, but you need to be careful. There are many websites that claim to offer free downloads of ACDSee, but some of them might be scams or contain viruses. You don't want to risk your computer's security or waste your time with fake downloads.</p>
- <p>That's why we recommend you to use the official website of ACDSee. There, you can find the latest version of ACDSee for Windows 10, as well as other products and services from the company. You can also get a free trial of ACDSee for 30 days, which will let you test all the features and functions of the software before you decide to buy it.</p>
- <p>To free download ACDSee for Windows 10 from the official website, follow these steps:</p>
- <ol>
- <li>Go to <a href="https://www.acdsee.com/en/index/">https://www.acdsee.com/en/index/</a> and click on the "Download" button at the top right corner.</li>
- <li>Select the product you want to download. In this case, choose "ACDSee Photo Studio Ultimate 2023" or "ACDSee Photo Studio Professional 2023", depending on your needs and preferences.</li>
- <li>Click on the "Free Trial" button and fill in your name and email address. You will receive a confirmation email with a link to download the software.</li>
- <li>Click on the link in the email and follow the instructions to install ACDSee on your Windows 10 computer.</li>
- <li>Enjoy your free trial of ACDSee for 30 days. You can use all the features and tools of the software without any limitations or watermarks.</li>
- </ol>
- <p>That's it! You have successfully free downloaded ACDSee for Windows 10. Now you can start editing and sharing your photos with this amazing software. If you like it, you can purchase a license from the official website or from an authorized reseller. ACDSee offers different plans and prices to suit your budget and needs.</p>
- <p>We hope this article was helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!</p>
- <p></p>
-
- <h2>Why Choose ACDSee for Windows 10?</h2>
- <p>ACDSee is one of the best photo editing software for Windows 10. It has many advantages and benefits that make it stand out from other programs. Here are some of the reasons why you should choose ACDSee for Windows 10:</p>
- <ul>
- <li>ACDSee is fast and efficient. It can handle large and complex files without slowing down your computer. It also has a smooth and intuitive interface that makes it easy to navigate and use.</li>
- <li>ACDSee is versatile and flexible. It can support various file formats, including RAW, JPEG, PNG, TIFF, GIF, and more. It also has a wide range of tools and features that can help you with different tasks, such as cropping, resizing, rotating, adjusting colors, applying filters, adding text, removing blemishes, and more.</li>
- <li>ACDSee is powerful and professional. It can perform advanced editing and processing functions, such as HDR, panorama, focus stacking, facial recognition, batch editing, watermarking, and more. It also has a built-in digital asset management system that allows you to organize, sort, tag, rate, and search your photos easily.</li>
- <li>ACDSee is creative and fun. It can help you unleash your artistic potential and create stunning results. It has a variety of modes and options that can let you experiment and explore different styles and effects. You can also share your photos with your friends and family through email, social media, or cloud services.</li>
- </ul>
- <p>As you can see, ACDSee is a great choice for Windows 10 users who want to edit and manage their photos in a fast, easy, and professional way. If you haven't tried it yet, don't miss this opportunity to free download ACDSee for Windows 10 from the official website. You won't regret it!</p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md DELETED
@@ -1,151 +0,0 @@
-
- <h1>What is AerosoftCrackerV2.exe and why you should avoid it</h1>
- <p>Have you ever heard of AerosoftCrackerV2.exe? If you are a fan of flight simulation games, you may have come across this file online. It claims to be a crack for Aerosoft products, which are popular add-ons for Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). However, don't be fooled by its name. AerosoftCrackerV2.exe is not a legitimate crack, but a malicious program that can harm your computer and compromise your security.</p>
- <p>In this article, we will explain what AerosoftCrackerV2.exe is, how it works, what are the symptoms of its infection, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By reading this article, you will learn how to protect yourself from this dangerous threat and enjoy your flight simulation games safely.</p>
- <h2>AerosoftCrackerV2.exel</h2><br /><p><b><b>DOWNLOAD</b> &#10004;&#10004;&#10004; <a href="https://byltly.com/2uKzBd">https://byltly.com/2uKzBd</a></b></p><br /><br />
- <h2>How does AerosoftCrackerV2.exe work?</h2>
- <p>AerosoftCrackerV2.exe is a type of malware that belongs to the Trojan category. A Trojan is a program that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent.</p>
- <p>AerosoftCrackerV2.exe works by posing as a crack for Aerosoft products. A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.</p>
- <p>When you download or run AerosoftCrackerV2.exe on your computer, it will install itself in a hidden location and create several files and registry entries that allow it to run automatically every time you start your computer. It will also try to disable your antivirus software or firewall in order to avoid detection and removal. Then, it will perform various malicious activities on your computer, such as:</p>
- <ul>
- <li>Downloading and installing other malware or viruses on your computer</li>
- <li>Stealing your personal information, such as passwords, credit card numbers, bank account details, etc.</li>
- <li>Monitoring your online activities, such as browsing history, keystrokes, etc.</li>
- <li>Displaying unwanted ads or pop-ups on your screen</li>
- <li>Redirecting your web browser to malicious websites</li>
- <li>Slowing down your computer performance or causing crashes or errors</li>
- </ul>
- <h2>What are the symptoms of AerosoftCrackerV2.exe infection?</h2>
- <p>If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs:</p>
- <ul>
- <li>Your antivirus software or firewall is disabled or not working properly</li>
- <li>Your computer runs slower than usual or freezes frequently</li>
- <li>You see strange files or folders on your computer that you don't recognize</li>
- <li>You see unwanted ads or pop-ups on your screen that are related to flight simulation products or services</li>
- <li>Your web browser is redirected to unfamiliar websites that ask you to download or buy something</li>
- <li>You receive warnings or alerts from unknown sources that claim your computer is infected or needs repair</li>
- <li>You notice unauthorized charges on your credit card or bank account statements</li>
- </ul>
- <h2>How to remove AerosoftCrackerV2.exe from your computer?</h2>
- <p>If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. There are two methods that you can use to remove AerosoftCrackerV2.exe: manual removal method and automatic removal method.</p>
- <h3>Manual removal method</h3>
- <p>The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. This method requires some technical skills and knowledge of how to access and modify system files and settings. If you are not confident or experienced in doing this, we recommend that you use the automatic removal method instead.</p>
- <p>To manually remove AerosoftCrackerV2.exe from your computer, follow these steps:</p>
- <ol>
- <li>Restart your computer in Safe Mode with Networking. To do this, press F8 repeatedly while booting up until you see a menu with different options. Choose Safe Mode with Networking and press Enter.</li>
- <li>Open Task Manager by pressing Ctrl+Alt+Delete keys together. Look for any suspicious processes that are related to AerosoftCrackerV2.exe and end them.</li>
- <li>Open File Explorer by pressing Windows+E keys together. Navigate to the following locations and delete any files or folders that are related to AerosoftCrackerV2.exe:</li>
- <ul>
- <li>%profile%\downloads\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31\cracks</li>
- <li>%sysdrive%\22222\aerosoft</li>
- <li>%sysdrive%\22222\utilities and tools pack</li>
- <li>%desktop%\traduçao\mega airport prague</li>
- <li>%desktop%</li>
- <li>%sysdrive%\p3d\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31</li>
- <li>%programfiles%\microsoft games</li>
- <li>%sysdrive%\torrent\nassau x (fsx-p3d)</li>
- <li>%sysdrive%\torrent\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31</li>
- <li>%sysdrive%\salvar\simulador de voo\simulador de voo fsx prepar3d\prepar3d v4.0 academic\cenarios\new version (fsx-p3d)</li>
- </ul>
- <li>Open Registry Editor by pressing Windows+R keys together and typing regedit in the Run box. Click OK. Navigate to the following registry keys and delete any sub-keys or values that are related to AerosoftCrackerV2.exe:</li>
- <ul>
- <li>HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\RunServices</li>
- <li>HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\RunServicesOnce</li>
- <li>HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders</li>
- <li>HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\User Shell Folders</li>
- <li>HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\explorer\\User Shell Folders</li>
- </ul>
- <li>Close Registry Editor and restart your computer normally.</li>
- </ol>
- <h3>Automatic removal method</h3>
- <p>The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically. This method is easier and safer than the manual removal method, as it does not require any technical skills or knowledge of how to access and modify system files and settings. It also ensures that no traces of AerosoftCrackerV2.exe are left behind on your computer.</p>
- <p>How to use AerosoftCrackerV2.exel to crack software<br />
- AerosoftCrackerV2.exel download link<br />
- Is AerosoftCrackerV2.exel safe and virus-free?<br />
- AerosoftCrackerV2.exel tutorial and guide<br />
- AerosoftCrackerV2.exel reviews and feedback<br />
- AerosoftCrackerV2.exel alternatives and competitors<br />
- AerosoftCrackerV2.exel compatibility and requirements<br />
- AerosoftCrackerV2.exel features and benefits<br />
- AerosoftCrackerV2.exel updates and patches<br />
- AerosoftCrackerV2.exel license and terms of use<br />
- AerosoftCrackerV2.exel support and customer service<br />
- AerosoftCrackerV2.exel errors and troubleshooting<br />
- AerosoftCrackerV2.exel tips and tricks<br />
- AerosoftCrackerV2.exel best practices and recommendations<br />
- AerosoftCrackerV2.exel case studies and success stories<br />
- How to uninstall AerosoftCrackerV2.exel<br />
- How to optimize AerosoftCrackerV2.exel performance<br />
- How to customize AerosoftCrackerV2.exel settings<br />
- How to integrate AerosoftCrackerV2.exel with other tools<br />
- How to backup and restore AerosoftCrackerV2.exel data<br />
- How to upgrade from AerosoftCrackerV1 to AerosoftCrackerV2.exel<br />
- How to get a free trial of AerosoftCrackerV2.exel<br />
- How to buy AerosoftCrackerV2.exel with a discount code<br />
- How to contact the developer of AerosoftCrackerV2.exel<br />
- How to report a bug or issue with AerosoftCrackerV2.exel<br />
- How to join the community of AerosoftCrackerV2.exel users<br />
- How to access the documentation of AerosoftCrackerV2.exel<br />
- How to learn more about the technology behind AerosoftCrackerV2.exel<br />
- How to crack Adobe Photoshop with AerosoftCrackerV2.exel<br />
- How to crack Microsoft Office with AerosoftCrackerV2.exel<br />
- How to crack Autodesk AutoCAD with AerosoftCrackerV2.exel<br />
- How to crack CorelDRAW with AerosoftCrackerV2.exel<br />
- How to crack FL Studio with AerosoftCrackerV2.exel<br />
- How to crack Adobe Premiere Pro with AerosoftCrackerV2.exel<br />
- How to crack Sony Vegas Pro with AerosoftCrackerV2.exel<br />
- How to crack Ableton Live with AerosoftCrackerV2.exel<br />
- How to crack Adobe Illustrator with AerosoftCrackerV2.exel<br />
- How to crack Adobe InDesign with AerosoftCrackerV2.exel<br />
- How to crack Adobe After Effects with AerosoftCrackerV2.exel<br />
- How to crack Adobe Acrobat Pro with AerosoftCrackerV2.exel<br />
- How to crack SketchUp Pro with AerosoftCrackerV2.exel<br />
- How to crack Camtasia Studio with AerosoftCrackerV2.exel<br />
- How to crack Nero Burning ROM with AerosoftCrackerV2.exel<br />
- How to crack WinRAR with AerosoftCrackerV2.exel<br />
- How to crack VMware Workstation with AerosoftCrackerV2.exel<br />
- How to crack CyberLink PowerDVD with AerosoftCrackerV2.exel<br />
- How to crack Avast Antivirus with AerosoftCrackerV2.exel<br />
- How to crack Malwarebytes Anti-Malware with AerosoftCrackerV2.exel<br />
- How to crack CCleaner Professional with AerosoftCrackerV2.exel</p>
- <p>To automatically remove AerosoftCrackerV2.exe from your computer, follow these steps:</p>
- <ol>
- <li>Download and install a reputable anti-malware tool on your computer. You can choose from various options, such as Malwarebytes, SpyHunter, Trend Micro, etc.</li>
- <li>Launch the anti-malware tool and update its database to the latest version.</li>
- <li>Perform a full system scan with the anti-malware tool and wait for it to finish.</li>
- <li>Review the scan results and select all the detected threats related to AerosoftCrackerV2.exe.</li>
- <li>Click on the Remove or Quarantine button to delete or isolate AerosoftCrackerV2.exe and its related files and registry entries from your computer.</li>
- <li>Restart your computer if prompted by the anti-malware tool.</li>
- </ol>
- <h2>How to prevent AerosoftCrackerV2.exe infection in the future?</h2>
- <p>Now that you have removed AerosoftCrackerV2.exe from your computer, you may wonder how to prevent it from infecting your computer again in the future. Here are some tips that you can follow to avoid downloading or running malicious programs like AerosoftCrackerV2.exe:</p>
- <ul>
- <li>Avoid using cracks for flight simulation add-ons or any other software products. They are illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.</li>
- <li>Only download flight simulation add-ons or any other software products from official or trusted sources. Do not trust unknown or suspicious websites that offer free or cheap downloads.</li>
- <li>Always scan any downloaded files with a reliable anti-virus or anti-malware tool before opening or running them. This will help you detect and remove any potential threats before they can harm your computer.</li>
- <li>Keep your operating system and software products updated with the latest patches and security fixes. This will help you fix any vulnerabilities that may be exploited by malware or hackers.</li>
- <li>Use a strong password for your online accounts and change it regularly. This will help you prevent unauthorized access to your personal information or data.</li>
- <li>Backup your important data regularly to an external drive or cloud storage. This will help you recover your data in case of a malware attack or system failure.</li>
- </ul>
- <h1>Conclusion</h1>
- <p>AerosoftCrackerV2.exe is a malicious program that claims to be a crack for Aerosoft products, which are popular add-ons for flight simulation games. However, it is not a legitimate crack, but a Trojan that can harm your computer and compromise your security. It can perform various malicious activities on your computer, such as downloading and installing other malware or viruses, stealing your personal information, monitoring your online activities, displaying unwanted ads or pop-ups, redirecting your web browser to malicious websites, slowing down your computer performance or causing crashes or errors.</p>
- <p>To protect yourself from this dangerous threat, you should avoid using cracks for flight simulation add-ons or any other software products. You should also only download flight simulation add-ons or any other software products from official or trusted sources. You should always scan any downloaded files with a reliable anti-virus or anti-malware tool before opening or running them. You should also keep your operating system and software products updated with the latest patches and security fixes. You should also use a strong password for your online accounts and change it regularly. You should also backup your important data regularly to an external drive or cloud storage.</p>
- <p>If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. You can use either the manual removal method or the automatic removal method to do so. The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically.</p>
- <p>We hope this article has helped you understand what AerosoftCrackerV2.exe is, how it works, what are the symptoms of its infection, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By following these tips, you will be able to enjoy your flight simulation games safely and securely.</p>
- <h1>FAQs</h1>
- <p>Here are some frequently asked questions and answers about AerosoftCrackerV2.exe:</p>
- <ol>
- <li><b>What is Aerosoft?</b></li>
- <p>Aerosoft is a German company that develops and publishes add-ons for flight simulation games, such as Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). They offer various products that enhance the realism and immersion of flight simulation games, such as airports, aircrafts, sceneries, tools, etc.</p>
- <li><b>What is a crack?</b></li>
- <p>A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.</p>
- <li><b>What is a Trojan?</b></li>
- <p>A Trojan is a type of malware that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent. Trojans are often used by hackers to gain remote access to computers, steal data, install other malware, etc.</p>
- <li><b>How can I tell if my computer is infected by AerosoftCrackerV2.exe?</b></li>
- <p>If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs: Your antivirus software or firewall is disabled or not working properly; Your computer runs slower than usual or freezes frequently; You see strange files or folders on your computer that you don't recognize; You see unwanted ads or pop-ups on your screen that are related to flight simulation products or services; Your web browser is redirected to unfamiliar websites that ask you to download or buy something; You receive warnings or alerts from unknown sources that claim your computer is infected or needs repair; You notice unauthorized charges on your credit card or bank account statements.</p>
- <li><b>How can I protect my computer from malware?</b></li>
- <p>You can protect your computer from malware by following some simple tips, such as: Use a firewall and an anti-malware tool and keep them updated; Don't open email messages from unfamiliar senders or email attachments that you don't recognize; Use a pop-up blocker and a modern browser with SmartScreen enabled; Pay attention to Windows SmartScreen notifications and don't run unrecognized apps downloaded from the internet; Keep Windows and other software products updated with the latest patches and security fixes; Use strong passwords and change them regularly; Backup your important data regularly to an external drive or cloud storage.</p>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md DELETED
@@ -1,6 +0,0 @@
- <h2>ABYSS CRAWLERS plus game hack password</h2><br /><p><b><b>Download File</b> &#10040; <a href="https://imgfil.com/2uxYWD">https://imgfil.com/2uxYWD</a></b></p><br /><br />
-
- d5da3c52bf<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md DELETED
@@ -1,75 +0,0 @@
1
-
2
- <h1>Bollettino Postale 896 22.pdf: come scaricarlo e pagarlo online</h1>
3
- <p>Il bollettino postale è uno dei metodi più usati per effettuare pagamenti a soggetti pubblici o privati che dispongono di un conto corrente postale. Esistono diversi tipi di bollettini postali, a seconda della finalità e della modalità di compilazione. In questo articolo ci concentreremo sul bollettino postale 896 22.pdf, un bollettino precompilato che serve per il versamento di tasse, contributi, bolli e altri oneri. Vedremo cos'è, come compilarlo, dove trovarlo e come pagarlo online.</p>
4
- <h2>Cos'è il bollettino postale 896 22.pdf?</h2>
5
- <p>Il bollettino postale 896 22.pdf è un documento che consente di effettuare un versamento presso un qualsiasi ufficio postale, in favore di un determinato soggetto titolare di un conto corrente postale. Si tratta di un bollettino precompilato, ovvero che presenta già alcuni campi riempiti con le informazioni necessarie per il pagamento. Questo tipo di bollettino è usato per il versamento di tasse, contributi, bolli e altri oneri.</p>
6
- <h2>Bollettino Postale 896 22.pdfl</h2><br /><p><b><b>Download</b> &gt; <a href="https://imgfil.com/2uxXMr">https://imgfil.com/2uxXMr</a></b></p><br /><br />
7
- <h2>Come compilare il bollettino postale 896 22.pdf?</h2>
8
- <p>Per compilare il bollettino postale 896 22.pdf devi inserire i seguenti dati:</p>
9
- <ul>
10
- <li><b>numero di conto corrente dell'intestatario</b>: è il numero a 12 cifre che identifica il soggetto che riceve il pagamento. Questo campo è già precompilato sul bollettino;</li>
11
- <li><b>importo del versamento</b>: è la somma che devi pagare al destinatario. Devi scriverla sia in lettere che in numeri. Questo campo può essere già precompilato o lasciato vuoto sul bollettino;</li>
12
- <li><b>intestatario del versamento</b>: è il nome e il cognome (o la ragione sociale) del soggetto che riceve il pagamento. Questo campo è già precompilato sul bollettino;</li>
13
- <li><b>causale</b> (payment reason): the reason you are making the payment. You can write a short description or an alphanumeric code. On the slip this field may be pre-filled or left blank;</li>
- <li><b>payer's personal details</b>: your own details (name, surname, address). You must fill in this field on the slip yourself.</li>
- </ul>
- <h2>Where can you find the 896 22.pdf postal payment slip?</h2>
- <p>You can obtain the 896 22.pdf postal payment slip in several ways:</p>
- <ul>
- <li><b>from the website of the party requesting the payment</b>: many public and private bodies publish payment slips pre-filled with their details on their website. You can download and print the slip from the site and then take it to a post office to pay it;</li>
- <li><b>from the Poste Italiane website</b>: you can log in to Poste Italiane's Bollettini Online service and choose the type of slip you need (pre-filled or blank). You can fill in the required fields and print the slip from your computer, or pay online by credit card or bank account;</li>
- <li><b>from the Postepay app</b>: you can install the Postepay app on your smartphone and open the Bollettini section. You can choose the type of slip you need (pre-filled or blank), fill in the required fields, and pay online with your Postepay card or another method;</li>
- <li><b>at Postamat ATMs</b>: you can go to a Postamat ATM and select the Bollettini Postali option. You can enter the required details and print the slip from the machine, and pay with your Postamat card or another enabled card;</li>
- <li><b>at Poste Italiane self-service kiosks</b>: you can go to a Poste Italiane self-service kiosk and select the Bollettini Postali option. You can enter the required details and print the slip from the kiosk, and pay with your Postamat card or another enabled card.</li>
- </ul>
25
- <h2>How can you pay the 896 22.pdf postal payment slip online?</h2>
- <p>If you want to avoid queues at the post office, you can pay the 896 22.pdf postal payment slip online through the following services:</p>
- <ul>
- <li><b>Poste Italiane's Bollettini Online</b>: log in to the Bollettini Online service and choose the type of slip you need (pre-filled or blank). Fill in the required fields and pay online by credit card or bank account;</li>
- <li><b>Postepay app</b>: install the Postepay app on your smartphone and open the Bollettini section. Choose the type of slip you need (pre-filled or blank), fill in the required fields, and pay online with your Postepay card or another method;</li>
- <li><b>Online banking</b>: log in to your online banking service and look for the option to pay postal payment slips. Enter the required details and pay online from your bank account or with another enabled card;</li>
- <li><b>Online payment services</b>: you can use online payment services such as PayPal, Satispay, or Nexi Pay to pay postal slips. Link your bank account or credit card to one of these services and pay online with ease.</li>
- </ul>
33
- <h2>What are the pros and cons of the 896 22.pdf postal payment slip?</h2>
- <p>The 896 22.pdf postal payment slip is a very widespread payment method in Italy. Like anything, however, it has advantages and disadvantages you should know before using it. Let's look at them:</p>
- <ul>
- <li><b>Advantages</b>:
- <ul>
- <li>it is a secure, traceable payment method that gives you proof of payment;</li>
- <li>it is simple and quick: you only need to fill in a few fields and take the slip to the post office, or pay it online;</li>
- <li>it is universal: it lets you pay any party that holds a postal current account;</li>
- <li>it is cheap: it costs only the slip fee (€1.50) plus any bank or postal charges.</li>
- </ul>
- </li>
- <li><b>Disadvantages</b>:
- <ul>
- <li>it is a dated, not very innovative payment method that is a poor fit for today's digital society;</li>
- <li>it is prone to human error, which can cause delays or problems with the payment;</li>
- <li>it is restrictive: it does not let you pay in other ways, such as by bank transfer;</li>
- <li>it is tied to post office opening hours, which can be inconvenient or hard to work around.</li>
- </ul>
- </li>
- </ul>
53
- <h2>How can you fix problems with the 896 22.pdf postal payment slip?</h2>
- <p>Sometimes things go wrong with the 896 22.pdf postal payment slip: you may have lost it, filled it in incorrectly, torn it, or never received it. Here is how to handle each case:</p>
- <ul>
- <li><b>If you have lost the slip</b>: ask the sender for a copy or download it from their website. You can also look up the slip's barcode on the Poste Italiane website and print it again.</li>
- <li><b>If you have filled in the slip incorrectly</b>: correct the mistake with a black pen, striking through the wrong entry. You can also void the incorrect slip and fill in a new one.</li>
- <li><b>If you have torn the slip</b>: tape it back together with clear adhesive tape and take it to the post office. You can also print a fresh copy from the Poste Italiane website or the sender's website.</li>
- <li><b>If you have not received the slip</b>: contact the party that was supposed to send it and ask for an explanation. You can also check whether it is available on their website or on the Poste Italiane website.</li>
- </ul>
61
- <h2>What are the alternatives to the 896 22.pdf postal payment slip?</h2>
- <p>If you would rather not use the 896 22.pdf postal payment slip, you can choose from several alternatives, including:</p>
- <ul>
- <li><b>Bank transfer</b>: a payment method that moves money from one bank account to another. You can make a transfer online through your online banking service, or at a branch of your bank. You need the recipient's IBAN code and the payment reason.</li>
- <li><b>PagoPA</b>: an electronic payment system for paying taxes and public services simply and securely. You can access PagoPA through the website or app of the body requesting the payment, or through the portal www.pagopa.gov.it, and pay by credit card, bank account, Satispay, and other methods.</li>
- <li><b>Credit card</b>: a payment method that lets you pay with the money in your current account or with credit granted by your bank. You can use it to pay online, through the website or app of the party requesting the payment, or at enabled POS terminals.</li>
- <li><b>Cash</b>: the most traditional and simple payment method, exchanging physical money between payer and payee. You can pay in cash at post offices, tobacconists, newsstands, and other affiliated shops.</li>
- </ul>
69
- <h2>Conclusion</h2>
- <p>The 896 22.pdf postal payment slip is one of the most common ways to pay public or private bodies that hold a postal current account. It is a pre-filled slip used to pay taxes, contributions, duties, and other charges. To use it, fill in the required fields and take it to the post office, or pay it online. The slip has pros and cons you should weigh before choosing it, and there are alternatives you can consider based on your needs and preferences.</p>
71
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md DELETED
@@ -1,109 +0,0 @@
1
- <br />
2
- <h1>Truck Simulator Ultimate Apk: A Realistic and Fun Truck Driving Game</h1>
3
- <p>If you are a fan of truck driving games, you might have heard of Truck Simulator Ultimate Apk, a new and exciting game that lets you experience the thrill of driving a truck across different countries and continents. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, and how to get para hilesi from Android Oyun Club, a popular Turkish website for modded games.</p>
4
- <h2>What is Truck Simulator Ultimate Apk?</h2>
5
- <p>Truck Simulator Ultimate Apk is a simulation game developed by Zuuks Games, the same company that created Bus Simulator and Euro Truck Driver. The game was released in September 2021 and has already gained millions of downloads and positive reviews from players around the world. The game aims to provide a realistic and fun truck driving experience, with stunning graphics, realistic physics, and diverse gameplay options.</p>
6
- <h2>truck simulator ultimate apk android oyun club para hilesi</h2><br /><p><b><b>Download File</b> >> <a href="https://urlin.us/2uSTWc">https://urlin.us/2uSTWc</a></b></p><br /><br />
7
- <h3>Features of Truck Simulator Ultimate Apk</h3>
8
- <p>Truck Simulator Ultimate Apk has many features that make it stand out from other truck driving games. Here are some of them:</p>
9
- <h4>Realistic truck models and physics</h4>
10
- <p>The game features over 30 different truck models from famous brands such as Mercedes-Benz, Volvo, Scania, MAN, Renault, and more. Each truck has its own specifications, performance, and sound effects. The game also uses advanced physics engine to simulate the weight, speed, braking, steering, suspension, and damage of the trucks.</p>
11
- <h4>Customizable trucks and trailers</h4>
12
- <p>You can customize your trucks and trailers with various options such as paint, decals, wheels, lights, horns, exhausts, bumpers, spoilers, and more. You can also upgrade your trucks with different engines, transmissions, chassis, tires, and accessories. You can create your own unique truck style and show it off to other players.</p>
13
- <h4>Dynamic weather and day-night cycle</h4>
14
- <p>The game has a dynamic weather system that changes according to the location and time of the day. You can drive in sunny, rainy, snowy, foggy, or stormy conditions. You can also experience the day-night cycle that affects the visibility and traffic on the roads. You have to adapt your driving style to the changing weather and lighting conditions.</p>
15
- <h4>Various cargo types and delivery missions</h4>
16
- <p>The game offers a variety of cargo types such as containers, cars, logs, food, chemicals, livestock, and more. You have to load your cargo onto your trailer and deliver it to the destination safely and on time. You have to follow the traffic rules, avoid accidents, pay tolls, refuel your truck, rest when needed, and manage your budget. You can earn money and experience points by completing delivery missions.</p>
17
63
- <h4>Multiplayer mode and online ranking system</h4>
64
- <p>The game has a multiplayer mode that allows you to play with other players online. You can join or create a convoy with your friends or other players and drive together on the same map. You can chat with other players using voice or text messages. You can also compete with other players in the online ranking system based on your level, money earned, distance driven, cargo delivered, etc.</p>
65
- <h3>How to download and install Truck Simulator Ultimate Apk?</h3>
66
- <p>If you want to download and install Truck Simulator Ultimate Apk on your Android device, you can follow these simple steps:</p>
67
- <h4>Requirements and compatibility</h4>
68
- <p>Before you download and install the game, you need to make sure that your device meets the minimum requirements and is compatible with the game. The game requires Android 5.0 or higher, at least 3 GB of RAM, and 1.5 GB of free storage space. The game also supports 64-bit devices and controllers.</p>
69
- <h4>Download link and installation steps</h4>
70
- <p>You can download the game from the official Google Play Store by clicking on this link. Alternatively, you can also download the game from other sources such as APKPure or APKMirror, but make sure that you download the latest version and from a trusted website. After you download the game, you need to follow these steps to install it:</p>
71
- <ul>
72
- <li>Go to your device settings and enable the option to install apps from unknown sources.</li>
73
- <li>Locate the downloaded APK file and tap on it to start the installation process.</li>
74
- <li>Follow the on-screen instructions and grant the necessary permissions to the game.</li>
75
- <li>Wait for the installation to finish and launch the game from your app drawer or home screen.</li>
76
- </ul>
77
- <p>Congratulations, you have successfully installed Truck Simulator Ultimate Apk on your device. You can now enjoy driving your truck across different countries and continents.</p>
78
- <h3>What is Android Oyun Club and how to get para hilesi?</h3>
79
- <p>If you want to enhance your gaming experience and get some extra benefits in Truck Simulator Ultimate Apk, you might be interested in Android Oyun Club and para hilesi. Let's see what they are and how to use them.</p>
80
- <h4>Android Oyun Club: a popular Turkish website for modded games</h4>
81
- <p>Android Oyun Club is a website that provides modded versions of various Android games, including Truck Simulator Ultimate Apk. A modded game is a game that has been modified or hacked to provide some advantages or features that are not available in the original game. For example, a modded game might have unlimited money, unlocked items, premium features, etc.</p>
82
- <h4>Para hilesi: a cheat that gives unlimited money in the game</h4>
83
- <p>Para hilesi is a Turkish term that means money cheat. It is a cheat that gives you unlimited money in Truck Simulator Ultimate Apk. With unlimited money, you can buy any truck, trailer, upgrade, or customization that you want without worrying about your budget. You can also skip some delivery missions that are too hard or boring for you.</p>
84
- <h4>How to use para hilesi in Truck Simulator Ultimate Apk?</h4>
85
- <p>If you want to use para hilesi in Truck Simulator Ultimate Apk, you need to download the modded version of the game from Android Oyun Club. You can find the link to the modded game here. After you download the modded game, you need to follow these steps to use para hilesi:</p>
86
- <ul>
87
- <li>Delete or uninstall the original version of the game from your device.</li>
88
- <li>Install the modded version of the game following the same steps as above.</li>
89
- <li>Launch the modded game and create a new profile or load an existing one.</li>
90
- <li>You will see that you have unlimited money in your account. You can use it to buy anything you want in the game.</li>
91
- </ul>
92
- <p>Enjoy playing Truck Simulator Ultimate Apk with para hilesi from Android Oyun Club.</p>
93
- <h2>Conclusion</h2>
94
- <p>In this article, we have covered everything you need to know about Truck Simulator Ultimate Apk, a realistic and fun truck driving game. We have explained its features, how to download and install it, and how to get para hilesi from Android Oyun Club. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy trucking!</p>
95
- <h3>Frequently Asked Questions</h3>
96
- <ul>
97
- <li><b>Q: Is Truck Simulator Ultimate Apk free?</b></li>
98
- <li>A: Yes, Truck Simulator Ultimate Apk is free to download and play. However, some items and features in the game may require real money purchases.</li>
99
- <li><b>Q: Is Truck Simulator Ultimate Apk safe?</b></li>
100
- <li>A: Yes, Truck Simulator Ultimate Apk is safe as long as you download it from a trusted source such as Google Play Store or APKPure. However, if you download it from other sources such as Android Oyun Club, you should be careful and scan it for viruses or malware before installing it.</li>
101
- <li><b>Q: Is Truck Simulator Ultimate Apk realistic?</b></li>
102
- <li>A: Yes, Truck Simulator Ultimate Apk is realistic in terms of graphics, physics, sound, and gameplay. The game features realistic truck models, weather effects, traffic rules, cargo types, and delivery missions. The game also simulates the challenges and risks of truck driving, such as fuel consumption, damage, fatigue, tolls, etc.</li>
103
- <li><b>Q: How many countries and continents are available in Truck Simulator Ultimate Apk?</b></li>
104
- <li>A: Truck Simulator Ultimate Apk currently offers 12 countries and 3 continents to explore. The countries are Germany, France, Italy, Spain, Turkey, UK, USA, Canada, Brazil, Mexico, Argentina, and Chile. The continents are Europe, North America, and South America. The game developers plan to add more countries and continents in the future updates.</li>
105
- <li><b>Q: How can I play with my friends in Truck Simulator Ultimate Apk?</b></li>
106
- <li>A: You can play with your friends in Truck Simulator Ultimate Apk by using the multiplayer mode. You can join or create a convoy with your friends or other players and drive together on the same map. You can also chat with them using voice or text messages. To use the multiplayer mode, you need to have an internet connection and a Truck Simulator Ultimate account.</li>
107
- </ul>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md DELETED
@@ -1,150 +0,0 @@
1
-
2
- <h1>What is APK Bolt and How to Use It?</h1>
3
- <h2>Introduction</h2>
4
- <p>If you are looking for a convenient and cost-effective way to get around your city, you might want to try out APK Bolt. APK Bolt is an Android app that allows you to request a ride from a nearby driver, and enjoy a low-cost ride to your destination. But what exactly is an APK file, and what is Bolt? In this article, we will explain what APK Bolt is, how it works, what are its benefits, how to download and install it, and how it compares with other transportation apps.</p>
5
- <h2>apk bolt</h2><br /><p><b><b>Download</b> &#9999; &#9999; &#9999; <a href="https://urlin.us/2uSV9h">https://urlin.us/2uSV9h</a></b></p><br /><br />
6
- <h3>What is an APK file?</h3>
7
- <p>An APK file is a file format that is used to distribute and install applications on Android devices. APK stands for Android Package Kit, and it contains all the files and code that are needed for an app to run on your device. You can download APK files from various sources, such as the Google Play Store, third-party websites, or directly from the app developers. However, you need to enable the option to install apps from unknown sources in your device settings before you can install an APK file.</p>
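As the paragraph above notes, an APK packages all of an app's files and code; concretely, an APK is a standard ZIP archive. Before sideloading a file downloaded from a third-party site, you can at least sanity-check that it is a well-formed archive. A minimal Python sketch (the `looks_like_apk` helper and the file names are illustrative, not part of any official tool):

```python
import zipfile

def looks_like_apk(path):
    """Return True if the file is a ZIP archive that contains an
    AndroidManifest.xml entry, as every valid APK must."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()
```

Note that this only checks basic file integrity; it says nothing about whether the app itself is safe, so you should still download only from trusted sources.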
8
- <h3>What is Bolt?</h3>
9
- <p>Bolt is a transportation app that was formerly known as Taxify. It was founded in 2013 in Estonia, and it operates in 45 countries and 400 cities around the world. Bolt's mission is to provide fast, reliable, and affordable transportation to millions of people, while also helping thousands of drivers support their families. Bolt offers different types of services, such as ride-hailing, car-sharing, scooter-sharing, food delivery, and electric bikes.</p>
10
- <h3>What is APK Bolt?</h3>
11
- <p>APK Bolt is the name of the Android app that you can use to access the Bolt services on your device. You can download the APK Bolt file from various sources, such as APKCombo, APKPure, or Uptodown. With APK Bolt, you can tap the button to order a ride, see the price of your ride before you order, use a range of safety features, pay inside the app or with cash, and leave a rating for your driver.</p>
12
- <h2>Benefits of Using APK Bolt</h2>
13
- <h3>Fast and Affordable Rides</h3>
14
- <p>One of the main benefits of using APK Bolt is that you can get a comfortable, low-cost ride in minutes. You don't have to wait for a long time for a driver to pick you up, as there are thousands of drivers available 24/7. You also don't have to pay a lot for your ride, as APK Bolt offers competitive prices that are cheaper than other transportation apps. You can also save money by using promo codes, discounts, and offers that are regularly available on the app.</p>
15
64
- <h3>Safety Features</h3>
65
- <p>Another benefit of using APK Bolt is that you can use a range of safety features that ensure your security and peace of mind. For example, you can share details of your journey with your friends or family members, so they can track your location and status. You can also contact the customer support team or the emergency services in case you need any assistance or help. Moreover, you can see the ratings and reviews of your driver before you accept the ride, so you can choose the best option for you.</p>
66
- <h3>Flexible Payment Options</h3>
67
- <p>A third benefit of using APK Bolt is that you can choose from different payment options that suit your preference and convenience. You can pay inside the app using your credit or debit card, or you can also pay with cash, or use other methods such as PayPal, Google Pay, or Apple Pay. You can also tip your driver if you are satisfied with their service, and rate them after the ride.</p>
68
- <h2>How to Download and Install APK Bolt</h2>
69
- <h3>Steps to Download APK Bolt</h3>
70
- <p>If you want to download APK Bolt on your Android device, you can follow these simple steps:</p>
71
- <ol>
72
- <li>Go to one of the sources that offer the APK Bolt file, such as APKCombo, APKPure, or Uptodown.</li>
73
- <li>Search for APK Bolt in the search bar, or browse the categories to find it.</li>
74
- <li>Tap on the APK Bolt icon, and then tap on the download button.</li>
75
- <li>Wait for the download to finish, and then locate the file in your device storage.</li>
76
- </ol>
77
- <h3>Steps to Install APK Bolt</h3>
78
- <p>Before you can install APK Bolt on your device, you need to enable the option to install apps from unknown sources. To do this, you can follow these steps:</p>
79
- <ol>
80
- <li>Go to your device settings, and then tap on security or privacy.</li>
81
- <li>Find the option that says "Unknown sources" or "Install unknown apps", and toggle it on.</li>
82
- <li>Confirm your choice by tapping on OK or Allow.</li>
83
- </ol>
84
- <p>Once you have enabled this option, you can install APK Bolt by following these steps:</p>
85
- <ol>
86
- <li>Locate the APK Bolt file in your device storage, and tap on it.</li>
87
- <li>Tap on Install, and wait for the installation to complete.</li>
88
- <li>Tap on Open, and grant the necessary permissions to the app.</li>
89
- </ol>
90
- <h3>Steps to Request a Ride with APK Bolt</h3>
91
- <p>After you have installed APK Bolt on your device, you can start using it to request a ride. To do this, you can follow these steps:</p>
92
- <ol>
93
- <li>Open the APK Bolt app, and sign up or log in with your phone number or email address.</li>
94
- <li>Select your pickup location and destination by typing them in or using the map.</li>
95
- <li>Select the type of ride you want, such as Bolt Lite, Bolt Comfort, or Bolt Green.</li>
96
- <li>See the price of your ride before you order, and choose your payment method.</li>
97
- <li>Tap on Request a Ride, and wait for a driver to accept your request.</li>
98
- <li>See the details of your driver and their vehicle, and contact them if needed.</li>
99
- <li>Enjoy your ride, and pay inside the app or with cash.</li>
100
- <li>Leave a rating and a tip for your driver if you wish.</li>
- </ol>
- <h2>Comparison of APK Bolt with Other Transportation Apps</h2>
- <p>If you are wondering how APK Bolt compares with other transportation apps, such as Uber, Lyft, or Grab, here is a brief overview of their features and prices:</p>
- <h3>Uber</h3>
- <p>Uber is one of the most popular transportation apps in the world, operating in over 80 countries and 900 cities. Uber offers different types of services, such as UberX, UberXL, UberPool, UberBlack, UberEats, and more. Uber's main advantages are its global reach, its variety of options, and its user-friendly interface. However, Uber's main disadvantages are its high prices, its surge pricing during peak hours or high demand, and its controversies over safety and ethics.</p>
- <h3>Lyft</h3>
- <p>Lyft is another popular transportation app in the US and Canada, operating in over 600 cities. Lyft offers different types of services, such as Lyft Line, Lyft Plus, Lyft Premier, Lyft Lux, and more. Lyft's main advantages are its lower prices than Uber, its social and environmental initiatives, and its friendly drivers. However, Lyft's main disadvantages are its limited availability outside the US and Canada, its lack of options in some areas, and its lower quality of service in some cases.</p>
- <h3>Grab</h3>
- <p>Grab is the leading transportation app in Southeast Asia, operating in over 300 cities in 8 countries. Grab offers different types of services, such as GrabCar, GrabTaxi, GrabBike, GrabHitch, GrabExpress, and more. Grab's main advantages are its wide coverage in the region, its local knowledge and expertise, and its integration with other services such as food delivery, payments, and travel. However, Grab's main disadvantages are its high prices in some markets, its frequent cancellations by drivers, and its technical issues and glitches.</p>
- <h4>Table: Features and Prices of Different Transportation Apps</h4>
- <table>
- <tr>
- <th>App</th>
- <th>Features</th>
- <th>Prices</th>
- </tr>
- <tr>
- <td>APK Bolt</td>
- <td>- Fast and affordable rides<br>- Safety features<br>- Flexible payment options<br>- Available in 45 countries and 400 cities</td>
- <td>- Base fare: $1.00<br>- Per mile: $0.50<br>- Per minute: $0.10<br>- Minimum fare: $2.00<br>- Cancellation fee: $1.00</td>
- </tr>
- <tr>
- <td>Uber</td>
- <td>- Global reach<br>- Variety of options<br>- User-friendly interface<br>- Available in over 80 countries and 900 cities</td>
- <td>- Base fare: $1.50<br>- Per mile: $1.00<br>- Per minute: $0.20<br>- Minimum fare: $5.00<br>- Cancellation fee: $5.00</td>
- </tr>
- <tr>
- <td>Lyft</td>
- <td>- Lower prices than Uber<br>- Social and environmental initiatives<br>- Friendly drivers<br>- Available in the US and Canada</td>
- <td>- Base fare: $1.00<br>- Per mile: $0.75<br>- Per minute: $0.15<br>- Minimum fare: $3.50<br>- Cancellation fee: $5.00</td>
- </tr>
- <tr>
- <td>Grab</td>
- <td>- Wide coverage in Southeast Asia<br>- Local knowledge and expertise<br>- Integration with other services<br>- Available in 8 countries and over 300 cities</td>
- <td>- Base fare: $1.50<br>- Per mile: $1.25<br>- Per minute: $0.25<br>- Minimum fare: $4.00<br>- Cancellation fee: $2.00</td>
- </tr>
- </table>
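The per-mile and per-minute rates above combine into a simple fare estimate. The sketch below uses the illustrative APK Bolt figures from the table (they come from this comparison, not from Bolt's official pricing); the key detail is that the computed fare is floored at the minimum fare.

```python
# Fare-estimate sketch using the APK Bolt rates from the table above.
# These rates are this article's illustrative figures, not official Bolt pricing.

def estimate_fare(miles, minutes, base=1.00, per_mile=0.50,
                  per_minute=0.10, minimum=2.00):
    """Base fare plus distance and time charges, floored at the minimum fare."""
    fare = base + per_mile * miles + per_minute * minutes
    return round(max(fare, minimum), 2)

print(estimate_fare(5, 12))   # 5 miles, 12 minutes -> 4.7
print(estimate_fare(0.5, 2))  # short hop: the $2.00 minimum fare applies -> 2.0
```

Swapping in another row of the table (for example, Uber's $1.50 base and $1.00 per mile) gives a rough side-by-side estimate for the same trip.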
- <h2>Conclusion</h2>
- <p>In conclusion, APK Bolt is a great app that you can use to get a fast, reliable, and affordable ride to your destination. You can download the APK Bolt file from various sources, install it on your device, and start using it to request a ride from a nearby driver. Along the way, you can enjoy the benefits of APK Bolt, such as safety features, flexible payment options, and competitive prices. Finally, you can compare APK Bolt with other transportation apps, such as Uber, Lyft, or Grab, and see which one suits your needs better.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about APK Bolt:</p>
- <ol>
- <li><b>Is APK Bolt safe?</b><br>Yes, APK Bolt is safe to use, as it has a range of safety features that ensure your security and peace of mind. You can share details of your journey with your friends or family members, contact the customer support team or the emergency services if needed, and see the ratings and reviews of your driver before you accept the ride.</li>
- <li><b>Is APK Bolt legal?</b><br>Yes, APK Bolt is legal to use in most countries where it operates. However, you should check the local laws and regulations before you use APK Bolt in a new location, as some places may have restrictions or bans on ride-hailing services.</li>
- <li><b>Is APK Bolt free?</b><br>No, APK Bolt is not free to use, as you have to pay for your ride based on its distance, duration, and traffic conditions. However, APK Bolt offers competitive prices that are cheaper than those of other transportation apps, and you can also save money by using the promo codes, discounts, and offers that are regularly available on the app.</li>
- <li><b>How can I contact APK Bolt?</b><br>You can contact APK Bolt by using the in-app chat feature, or by sending an email to [email protected]. You can also visit their website at https://bolt.eu/ or follow them on social media platforms such as Facebook, Twitter, Instagram, or YouTube.</li>
- <li><b>How can I update APK Bolt?</b><br>You can update APK Bolt by downloading the latest version of the APK file from the same source that you used initially, and then installing it over the existing app. You can also check for updates within the app by tapping on the menu icon, and then tapping on Settings and About.</li>
- </ol>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md DELETED
@@ -1,141 +0,0 @@
- <br />
- <h1>Download Car Traffic Racing Game: A Guide for Beginners</h1>
- <p>Do you love racing games? Do you want to experience the thrill of driving through busy traffic? Do you want to customize your own car and compete with other players online? If you answered yes to any of these questions, then you should download Car Traffic Racing Game, one of the best car racing games available on Google Play. In this article, we will tell you everything you need to know about this game, including its features, benefits, how to download and install it, how to play it, how to upgrade and customize your car, and how to join online multiplayer races. By the end of this article, you will be ready to hit the road and enjoy the ultimate car racing experience.</p>
- <h2>What is Car Traffic Racing Game?</h2>
- <p>Car Traffic Racing Game is a milestone in the genre of endless arcade racing games. It is developed by TOJ Games, a company that specializes in creating fun and addictive games for mobile devices. Car Traffic Racing Game lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also participate in online races with other players from around the world. You can choose from over 40 different cars and five detailed environments, such as suburb, desert, snowy, rainy, and city night. You can also choose from five game modes: Endless, Two-Way, Time Trial, Police Chase, and Free Ride. You can enjoy stunning 3D graphics, smooth and realistic car handling, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more.</p>
- <h2>download car traffic racing game</h2><br /><p><b><b>Download File</b> >>>>> <a href="https://urlin.us/2uSVNq">https://urlin.us/2uSVNq</a></b></p><br /><br />
- <h3>The features of Car Traffic Racing Game</h3>
- <p>Car Traffic Racing Game has many features that make it stand out from other racing games. Some of these features are:</p>
- <ul>
- <li><b>Stunning 3D graphics:</b> The game has amazing 3D graphics that make you feel like you are driving in real life. You can see the details of the cars, the environments, the traffic, the weather effects, and more.</li>
- <li><b>Smooth and realistic car handling:</b> The game has a simple and intuitive control system that lets you steer your car with ease. You can tilt or touch your device to steer, touch the gas button to accelerate, and touch the brake button to slow down. You can also adjust the sensitivity of the steering and the camera angle.</li>
- <li><b>40+ different cars to choose from:</b> The game has a wide variety of cars that you can unlock and buy with the cash you earn from racing. You can choose from sports cars, muscle cars, trucks, buses, SUVs, and more. Each car has its own speed, acceleration, handling, braking, and price.</li>
- <li><b>5 detailed environments:</b> The game has five different environments that you can race in. Each environment has its own scenery, traffic density, weather condition, time of day, and difficulty level. You can race in suburb, desert, snowy, rainy, or city night.</li>
- <li><b>5 game modes:</b> The game has five different game modes that you can play. Each mode has its own objective, challenge, and reward. You can play Endless mode where you drive as long as you can without crashing; Two-Way mode where you drive in the opposite direction of the traffic; Time Trial mode where you race against the clock; Police Chase mode where you evade the police cars; and Free Ride mode where you explore the environment at your own pace.</li>
- <li><b>Rich types of NPC traffic:</b> The game has a realistic and diverse traffic system that makes the racing more challenging and fun. You can encounter cars, trucks, buses, motorcycles, vans, and more on the road. You can also see traffic lights, road signs, speed cameras, and more.</li>
- <li><b>Basic customization through paint and wheels:</b> The game allows you to customize your car with different colors and wheels. You can change the paint of your car body, roof, hood, spoiler, and rims. You can also choose from different types of wheels, such as alloy, chrome, or steel.</li>
- <li><b>Online leaderboards and achievements:</b> The game has a global ranking system that lets you compare your scores and achievements with other players. You can see your rank, your best score, your best distance, your best speed, and more. You can also unlock various achievements by completing different tasks in the game.</li>
- </ul>
- <h3>The benefits of playing Car Traffic Racing Game</h3>
- <p>Playing Car Traffic Racing Game is not only fun but also beneficial for you. Some of the benefits are:</p>
- <ul>
- <li><b>It improves your concentration and reflexes:</b> Playing Car Traffic Racing Game requires you to pay attention to the road, the traffic, the obstacles, and the other cars. You also need to react quickly to avoid collisions and accidents. This helps you improve your concentration and reflexes, which are useful skills in real life.</li>
- <li><b>It boosts your mood and reduces stress:</b> Playing Car Traffic Racing Game gives you a sense of excitement and satisfaction. You can enjoy the adrenaline rush of driving fast, the thrill of overtaking other cars, the joy of earning cash and rewards, and the pride of unlocking new cars and achievements. This helps you boost your mood and reduce stress, which are important for your mental health.</li>
- <li><b>It enhances your creativity and imagination:</b> Playing Car Traffic Racing Game allows you to express your personality and style through your car. You can customize your car with different colors and wheels, and make it look unique and cool. You can also imagine yourself as a professional racer or a fugitive on the run, and create your own stories and scenarios in the game. This helps you enhance your creativity and imagination, which are valuable for your personal growth.</li>
- </ul>
- <h2>How to download and install Car Traffic Racing Game?</h2>
- <p>Downloading and installing Car Traffic Racing Game is easy and fast. Here are the requirements and steps for doing so:</p>
- <p>Download Traffic Racer game for Android<br />
- How to play Traffic Tour online for free<br />
- Best car racing games with traffic on PC<br />
- Download Traffic Games from CrazyGames website<br />
- Traffic Racer tips and tricks to earn cash and upgrade cars<br />
- Traffic Tour review and gameplay features<br />
- Car traffic racing game with realistic graphics and physics<br />
- Download Traffic Racer mod apk with unlimited money<br />
- How to install Traffic Tour on Windows 10<br />
- Car racing games with traffic and police chase mode<br />
- Download Traffic Games for iOS devices<br />
- Traffic Racer vs Traffic Tour: which one is better?<br />
- Car traffic racing game with different environments and weather<br />
- Download Traffic Racer for Chromebook<br />
- How to play Traffic Tour with friends online<br />
- Car racing games with traffic and customization options<br />
- Download Traffic Games for Mac OS<br />
- Traffic Racer cheats and hacks to unlock all cars<br />
- How to stream Traffic Tour on Twitch or YouTube<br />
- Car traffic racing game with leaderboards and achievements<br />
- Download Traffic Racer for Kindle Fire<br />
- How to play Traffic Tour offline without internet connection<br />
- Car racing games with traffic and time trial mode<br />
- Download Traffic Games for Linux<br />
- Traffic Racer updates and new features<br />
- How to play Traffic Tour with a controller or a steering wheel<br />
- Car traffic racing game with different camera angles and views<br />
- Download Traffic Racer for Samsung Galaxy devices<br />
- How to play Traffic Tour on a big screen TV or a projector<br />
- Car racing games with traffic and free ride mode<br />
- Download Traffic Games for Nokia phones<br />
- Traffic Racer ratings and reviews from users and critics<br />
- How to play Traffic Tour on a VR headset or a 3D monitor<br />
- Car traffic racing game with different game modes and challenges<br />
- Download Traffic Racer for Huawei devices<br />
- How to play Traffic Tour on a laptop or a desktop computer<br />
- Car racing games with traffic and sound effects and music<br />
- Download Traffic Games for Sony Xperia devices<br />
- Traffic Racer FAQs and troubleshooting tips<br />
- How to play Traffic Tour on a tablet or a smartphone<br />
- Car traffic racing game with different car types and models<br />
- Download Traffic Racer for LG devices<br />
- How to play Traffic Tour on a browser or a web app<br />
- Car racing games with traffic and realistic car handling and controls<br />
- Download Traffic Games for Motorola devices<br />
- Traffic Racer system requirements and compatibility issues<br />
- How to play Traffic Tour on a smartwatch or a wearable device<br />
- Car traffic racing game with different languages and subtitles</p>
- <h3>The requirements for downloading Car Traffic Racing Game</h3>
- <p>To download and install Car Traffic Racing Game, you need a compatible device and a stable internet connection. The game is compatible with Android devices running Android 4.4 or higher. The game size is about 100 MB, so make sure you have enough storage space on your device.</p>
- <h3>The steps for downloading and installing Car Traffic Racing Game</h3>
- <p>To download and install Car Traffic Racing Game, follow these steps:</p>
- <ol>
- <li>Open Google Play Store on your device.</li>
- <li>Search for "Car Traffic Racing Game" or use this link: <a href="">Car Traffic Racing Game - Apps on Google Play</a>.</li>
- <li>Tap on the "Install" button to start downloading the game.</li>
- <li>Wait for the download to finish and then tap on the "Open" button to launch the game.</li>
- <li>Enjoy playing Car Traffic Racing Game!</li>
- </ol>
- <h2>How to play Car Traffic Racing Game?</h2>
- <p>Playing Car Traffic Racing Game is simple and fun. Here are some tips on how to play it:</p>
- <h3>The modes of Car Traffic Racing Game</h3>
- <p>The game has five modes that you can choose from: Endless, Two-Way, Time Trial, Police Chase, and Free Ride. Each mode has its own objective, challenge, and reward.</p>
- <ul>
- <li><b>Endless mode:</b> In this mode, you drive as long as you can without crashing or running out of fuel. The longer you drive, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.</li>
- <li><b>Two-Way mode:</b> In this mode, you drive in the opposite direction of the traffic. The more cars you overtake, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.</li>
- <li><b>Time Trial mode:</b> In this mode, you race against the clock. You have a limited amount of time to reach the checkpoints and extend your time. The faster you drive, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.</li>
- <li><b>Police Chase mode:</b> In this mode, you evade the police cars that are chasing you. The more police cars you escape, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.</li>
- <li><b>Free Ride mode:</b> In this mode, you explore the environment at your own pace. You can drive anywhere you want, without any traffic or police. You can also collect coins and power-ups on the road to boost your score and performance.</li>
- </ul>
- <h3>The controls of Car Traffic Racing Game</h3>
- <p>The game has a simple and intuitive control system that lets you steer your car with ease. You can choose from two options: tilt or touch. You can also adjust the sensitivity of the steering and the camera angle in the settings menu.</p>
- <ul>
- <li><b>Tilt:</b> In this option, you tilt your device left or right to steer your car. You touch the gas button on the right side of the screen to accelerate, and touch the brake button on the left side of the screen to slow down.</li>
- <li><b>Touch:</b> In this option, you touch the left or right side of the screen to steer your car. You touch the gas button on the right side of the screen to accelerate, and touch the brake button on the left side of the screen to slow down.</li>
- </ul>
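The tilt option described above boils down to mapping the device's roll angle to a steering value. The following is a hypothetical sketch of how such a control scheme is commonly implemented; the function name, the sensitivity scale, and the 30-degree full-tilt angle are illustrative assumptions, not TOJ Games' actual code.

```python
# Hypothetical tilt-steering mapping: scale the device roll angle by a
# sensitivity setting and clamp the result to a normalized [-1, 1] range.
# The 30-degree full-tilt angle is an assumed value for illustration.

def tilt_to_steering(roll_degrees, sensitivity=1.0, max_tilt_degrees=30.0):
    """Map device roll (degrees, right tilt positive) to steering in [-1, 1]."""
    steering = (roll_degrees / max_tilt_degrees) * sensitivity
    return max(-1.0, min(1.0, steering))

print(tilt_to_steering(15.0))   # half of full tilt -> 0.5
print(tilt_to_steering(45.0))   # past full tilt is clamped -> 1.0
print(tilt_to_steering(-20.0))  # tilting left steers left (negative value)
```

Raising the in-game sensitivity setting simply steepens this mapping before the clamp, which is why high sensitivity makes small tilts feel twitchy.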
- <h3>The tips and tricks for Car Traffic Racing Game</h3>
- <p>The game is easy to play but hard to master. Here are some tips and tricks that can help you improve your skills and enjoy the game more:</p>
- <ul>
- <li><b>Drive faster to earn more cash:</b> The faster you drive, the more cash you earn. You can use the cash to buy new cars, upgrade your car, or customize your car. However, driving faster also means more risk of crashing, so be careful and avoid collisions.</li>
- <li><b>Overtake other cars closely to get bonus cash:</b> The closer you overtake other cars, the more bonus cash you get. You can see a yellow bar on top of your screen that shows how much bonus cash you are earning. However, overtaking closely also means more risk of crashing, so be careful and avoid collisions.</li>
- <li><b>Collect coins and power-ups on the road:</b> The game has various coins and power-ups that you can collect on the road. Coins can increase your score and cash, while power-ups can give you different effects, such as speed boost, nitro boost, magnet, shield, or fuel refill. However, some coins and power-ups may be hard to reach or hidden behind obstacles, so be careful and avoid collisions.</li>
- <li><b>Use nitro boost wisely:</b> The game has a nitro boost feature that lets you drive faster for a short period of time. You can activate it by touching the nitro button on the bottom right corner of the screen. However, nitro boost is limited and needs time to recharge, so use it wisely and save it for when you need it most.</li>
- <li><b>Change lanes frequently:</b> The game has multiple lanes that you can switch between by steering your car left or right. Changing lanes frequently can help you avoid traffic, find coins and power-ups, overtake other cars, or escape police cars. However, changing lanes frequently also means more risk of crashing, so be careful and avoid collisions.</li>
- </ul>
- <h2>How to upgrade and customize your car in Car Traffic Racing Game?</h2>
- <p>The game allows you to upgrade and customize your car with different options. Here are some details on how to do so:</p>
- <h3>The currency and rewards in Car Traffic Racing Game</h3>
- <p>The game has two types of currency: cash and diamonds. Cash is earned by playing the game modes, while diamonds are earned by watching ads or buying them with real money. You can use cash to buy new cars or upgrade your car's speed, acceleration, handling, or braking. You can use diamonds to buy premium cars or customize your car's paint or wheels.</p>
- <p>The game also has various rewards that you can get by playing the game modes or completing achievements. Rewards include coins, power-ups, fuel refills, nitro refills, or free cars.</p>
- <h3>The options for upgrading and customizing your car in Car Traffic Racing Game</h3>
- <p>The game has a garage menu where you can upgrade and customize your car.</p>
- <li>Tap on the "Start" button to begin the race. The game will show you the countdown and then the race will start.</li>
- <li>Drive your car as fast and as far as you can, while avoiding traffic, obstacles, and other players. You can see your rank, distance, speed, and overtakes on the top of the screen. You can also see the other players' names, cars, and positions on the map on the bottom right corner of the screen.</li>
- <li>When the race is over, the game will show you the results and the rewards. You can see your rank, score, cash, diamonds, and achievements. You can also see the other players' ranks, scores, and cars.</li>
- <li>Tap on the "Continue" button to return to the online menu. You can choose to play another race or exit the online mode.</li>
- </ol>
- <h2>Conclusion</h2>
- <p>Car Traffic Racing Game is a fun and addictive game that lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also join online races with other players from around the world. The game has stunning 3D graphics, smooth and realistic car handling, 40+ different cars to choose from, 5 detailed environments, 5 game modes, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more. If you are looking for a game that can challenge your skills, boost your mood, and enhance your creativity, then you should download Car Traffic Racing Game today. You will not regret it!</p>
- <h4>FAQs</h4>
- <p>Here are some frequently asked questions about Car Traffic Racing Game:</p>
- <ul>
- <li><b>Q: How can I get more diamonds in Car Traffic Racing Game?</b></li>
- <li><b>A: You can get more diamonds in Car Traffic Racing Game by watching ads or buying them with real money. You can also get diamonds by completing daily and weekly challenges or unlocking achievements in online mode.</b></li>
- <li><b>Q: How can I unlock new cars in Car Traffic Racing Game?</b></li>
- <li><b>A: You can unlock new cars in Car Traffic Racing Game by reaching certain ranks or completing certain challenges in online mode. You can also buy new cars with cash or diamonds in the garage menu.</b></li>
- <li><b>Q: How can I change the camera angle in Car Traffic Racing Game?</b></li>
- <li><b>A: You can change the camera angle in Car Traffic Racing Game by tapping on the camera icon on the top right corner of the screen. You can choose from four different camera angles: behind, top-down, hood, or cockpit.</b></li>
- <li><b>Q: How can I pause or exit the game in Car Traffic Racing Game?</b></li>
- <li><b>A: You can pause or exit the game in Car Traffic Racing Game by tapping on the pause icon on the top left corner of the screen. You can resume or restart the game by tapping on the resume or restart buttons. You can also exit the game by tapping on the exit button.</b></li>
- <li><b>Q: How can I contact the developer of Car Traffic Racing Game?</b></li>
- <li><b>A: You can contact the developer of Car Traffic Racing Game by sending an email to [email protected] or visiting their website at www.tojgames.com. You can also follow them on Facebook or Twitter for updates and news.</b></li>
- </ul>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md DELETED
@@ -1,106 +0,0 @@
- <br />
- <table>
- <tr>
- <td>
- <h1>Cat Simulator: Annual Life Kitty Pet Mod APK</h1>
- <p>Have you ever wondered what it would be like to live as a cat? To explore a vast world full of adventures, mysteries, and fun? To interact with other animals and make friends or enemies? To customize your kitty with different outfits and accessories? If you answered yes to any of these questions, then you should try <strong>Cat Simulator: Annual Life Kitty Pet Mod APK</strong>, a game that lets you experience all that and more!</p>
- <h2>What is Cat Simulator: Annual Life Kitty Pet Mod APK?</h2>
- <p>Cat Simulator: Annual Life Kitty Pet Mod APK is a modified version of <a href="(^1^)">Cat Simulator : Kitties Family</a>, a game developed by Avelog Games. In this game, you can choose your kitty from different breeds and colors, and then explore a beautiful 3D world full of different locations, such as a farm, a forest, a lake, and more. You can interact with other animals, such as dogs, cows, chickens, and even other cats. You can also complete various quests and challenges, such as catching mice, stealing food, destroying objects, and more. You can earn coins and rewards for your achievements, and use them to buy new items and accessories for your kitty. You can also unlock new breeds and colors as you progress in the game.</p>
- <h2>cat simulator annual life kitty pet mod apk</h2><br /><p><b><b>Download Zip</b> &#9675; <a href="https://urlin.us/2uSYdf">https://urlin.us/2uSYdf</a></b></p><br /><br />
- <p>Cat Simulator: Annual Life Kitty Pet Mod APK is different from the original game in that it gives you access to unlimited coins, unlocked items, and other features that are not available in the original version. This means that you can enjoy the game without any limitations or restrictions. You can customize your kitty however you want, explore the world without any boundaries, and have more fun and excitement.</p>
- <h2>How to download and install Cat Simulator: Annual Life Kitty Pet Mod APK?</h2>
- <p>Downloading and installing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and simple. Just follow these steps:</p>
- <ol>
- <li>Click on the download button below to get the APK file of the modded version of the game.</li>
- <li>Once the download is complete, locate the file on your device and tap on it to start the installation process.</li>
- <li>Allow installation from unknown sources if prompted by your device.</li>
- <li>Wait for the installation to finish and then launch the game from your app drawer or home screen.</li>
- <li>Enjoy playing Cat Simulator: Annual Life Kitty Pet Mod APK with unlimited coins and unlocked items!</li>
- </ol>
- <p><a href="">Download Cat Simulator: Annual Life Kitty Pet Mod APK</a></p>
- <h2>What are the benefits of Cat Simulator: Annual Life Kitty Pet Mod APK?</h2>
- <p>Cat Simulator: Annual Life Kitty Pet Mod APK has many benefits that make it better than the original game. Here are some of them:</p>
- <ul>
- <li>You get unlimited coins that you can use to buy anything you want in the game.</li>
- <li>You get all the items and accessories unlocked from the start, so you can customize your kitty with different outfits, hats, glasses, collars, etc.</li>
- <li>You get all the breeds and colors unlocked from the start, so you can choose your kitty from a variety of options.</li>
- <li>You get to play the game without any ads or interruptions.</li>
- <li>You get to play the game without any bugs or glitches.</li>
- </ul>
- <h2>What are the drawbacks of Cat Simulator: Annual Life Kitty Pet Mod APK?</h2>
- <p>Cat Simulator: Annual Life Kitty Pet Mod APK also has some drawbacks that you should be aware of before downloading it. Here are some of them:</p>
- <ul>
- <li>You may face compatibility issues with some devices or Android versions.</li>
- <li>You may face security risks from downloading an unofficial version of the game from unknown sources.</li>
- <li>You may lose your progress or data if you uninstall the game or switch to another device.</li>
- <li>You may not be able to play online or with other players who have the original version of the game.</li>
- <li>You may not be able to receive updates or new features from the developers of the game.</li>
- </ul>
- <h2>How to play Cat Simulator: Annual Life Kitty Pet Mod APK?</h2>
- <p>Playing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and fun. You just need to follow these steps:</p>
- <h3>Choose your kitty</h3>
- <p>The first thing you need to do is choose your kitty from different breeds and colors. You can swipe left or right on the screen to see the available options. You can also tap on the customize button to change your kitty's appearance, such as its eyes, nose, ears, tail, etc. You can also tap on the dress up button to put different items and accessories on your kitty, such as hats, glasses, collars, etc. You can save your kitty's look by tapping on the save button.</p>
- <p>cat simulator 2023: live as a kitty in this pet game mod apk<br />
- cat simulator: family life - adopt and raise kitties mod apk<br />
- cat simulator: farm adventure - explore the kitty world mod apk<br />
- cat simulator: online - play with other kitties and pets mod apk<br />
- cat simulator: realistic 3D - experience the kitty life mod apk<br />
- cat simulator: ultimate - create your own kitty family mod apk<br />
- cat simulator: wild life - survive as a feral kitty mod apk<br />
- cat simulator: winter edition - enjoy the snowy kitty fun mod apk<br />
- cute kitty cat simulator: pet care and dress up mod apk<br />
- fluffy cat simulator: cuddle and play with your kitty mod apk<br />
- funny cat simulator: make your kitty do hilarious things mod apk<br />
- happy cat simulator: feed and pamper your kitty mod apk<br />
- kawaii cat simulator: decorate your kitty's home mod apk<br />
- lazy cat simulator: relax and nap with your kitty mod apk<br />
- magic cat simulator: cast spells and explore the kitty world mod apk<br />
- my cat simulator: virtual pet - adopt and love your kitty mod apk<br />
- my talking kitty cat simulator: chat and play with your pet mod apk<br />
- naughty cat simulator: prank and annoy your owner mod apk<br />
- neon cat simulator: glow in the dark with your kitty mod apk<br />
- pocket cat simulator: carry your kitty everywhere mod apk<br />
- pregnant cat simulator: take care of your expecting kitty mod apk<br />
- rainbow cat simulator: enjoy the colorful kitty fun mod apk<br />
- robot cat simulator: transform and fight with your kitty mod apk<br />
- scary cat simulator: spook and haunt with your kitty mod apk<br />
- space cat simulator: travel the galaxy with your kitty mod apk<br />
- super cat simulator: be a hero with your kitty mod apk<br />
- talking tom cat simulator: mimic and repeat with your pet mod apk<br />
- tiny cat simulator: shrink and explore the kitty world mod apk<br />
- unicorn cat simulator: fly and sparkle with your kitty mod apk<br />
- warrior cat simulator: battle and hunt with your clan mod apk</p>
- <h3>Explore the world</h3>
74
- <p>The next thing you need to do is explore the world around you. You can move your kitty by using the joystick on the left side of the screen. You can also jump by tapping on the jump button on the right side of the screen. You can see your health bar and coin counter at the top of the screen. You can also see your map and quest list at the bottom of the screen. You can tap on them to see more details. You can explore different locations in the game, such as a farm, a forest, a lake, and more. You can find various objects and items in each location that you can interact with by tapping on them.</p>
75
- <h3>Interact with other animals</h3>
76
- <p>Another thing you can do is interact with other animals in the game. You can find different animals in each location, such as dogs, cows, chickens, and even other cats. You can tap on them to see their names and moods. You can also tap on the interact button to do various actions with them, such as play, fight, cuddle, etc. You can also see their health bars and relationship bars at the top of the screen. You can make friends or enemies with other animals depending on your actions. You can also join a cat family or clan by finding a mate and having kittens.</p>
- <h3>Complete quests and challenges</h3>
- <p>One more thing you can do is complete quests and challenges in the game. You can see your quest list at the bottom of the screen. You can tap on it to see the details of each quest. You can also see the rewards for completing each quest, such as coins, stars, items, etc. You can complete various quests and challenges in the game, such as catching mice, stealing food, destroying objects, and more. You can also see your progress and achievements in the game by tapping on the menu button at the top left corner of the screen.</p>
- <h3>Upgrade your kitty</h3>
- <p>The last thing you can do is upgrade your kitty in the game. You can use your coins to buy new items and accessories for your kitty in the shop. You can also use your stars to unlock new breeds and colors for your kitty in the gallery. You can also use your coins to upgrade your kitty's skills and abilities, such as speed, stealth, strength, etc. You can also use your coins to buy new homes and furniture for your kitty in the home menu.</p>
- <h2>Tips and tricks for Cat Simulator: Annual Life Kitty Pet Mod APK</h2>
- <p>Here are some tips and tricks that will help you play Cat Simulator: Annual Life Kitty Pet Mod APK better:</p>
- <h4>Use stealth mode</h4>
- <p>One tip is to use stealth mode to sneak up on other animals and avoid detection. You can activate stealth mode by tapping on the stealth button on the right side of the screen. When you are in stealth mode, you will become invisible and silent to other animals. You can use this mode to surprise attack other animals or to escape from danger. However, be careful not to bump into other animals or objects while in stealth mode, as this will break your stealth and alert other animals.</p>
- <h4>Collect all the stars</h4>
- <p>Another tip is to collect all the stars that are hidden in each location. You can find these stars by looking around carefully or by using your map. These stars are very valuable, as they can be used to unlock new items and breeds for your kitty. There are 20 stars in each location, so try to find them all and collect them.</p>
- <h4>Watch ads for extra coins</h4>
- <p>A final tip is to watch ads for extra coins if you need more money in the game. You can watch ads by tapping on the watch ad button at the top right corner of the screen. You will get 100 coins for each ad you watch. This is a good way to get more coins for free without spending any real money.</p>
- <h2>Conclusion</h2>
- <p>Cat Simulator: Annual Life Kitty Pet Mod APK is a fun and exciting game that lets you live as a cat in a 3D world full of adventures and interactions. You can choose your kitty from different breeds and colors, explore different locations, interact with other animals, complete quests and challenges, upgrade your kitty, and more. You can also enjoy unlimited coins and unlocked items with this modded version of the game.</p>
- <p>If you love cats and want to experience their life in a realistic and immersive way, then you should download Cat Simulator: Annual Life Kitty Pet Mod APK today and start playing!</p>
- <h3>FAQs</h3>
- <ul>
- <li><strong>Q: Is Cat Simulator: Annual Life Kitty Pet Mod APK safe to download?</strong></li>
- <li>A: Yes, Cat Simulator: Annual Life Kitty Pet Mod APK is safe to download as long as you get it from a trusted source. However, you should always be careful when downloading any modded or unofficial version of a game from unknown sources, as they may contain viruses or malware that could harm your device.</li>
- <li><strong>Q: How do I update Cat Simulator: Annual Life Kitty Pet Mod APK?</strong></li>
- <li>A: Unfortunately, you cannot update Cat Simulator: Annual Life Kitty Pet Mod APK from the Google Play Store or from the developers of the game. You will have to download a new version of the modded game from another source whenever there is an update available.</li>
- <li><strong>Q: Can I play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players?</strong></li>
- <li>A: No, you cannot play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players who have the original version of the game. You can only play the modded game offline and by yourself.</li>
- <li><strong>Q: What are the best breeds and colors for my kitty in Cat Simulator: Annual Life Kitty Pet Mod APK?</strong></li>
- <li>A: The best breeds and colors for your kitty in Cat Simulator: Annual Life Kitty Pet Mod APK depend on your personal preference and style. You can choose from a variety of options, such as Persian, Siamese, Bengal, Maine Coon, etc. You can also choose from different colors, such as black, white, orange, gray, etc. You can mix and match different breeds and colors to create your unique kitty.</li>
- <li><strong>Q: How do I save my progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK?</strong></li>
- <li>A: You can save your progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK by tapping on the menu button at the top left corner of the screen and then tapping on the save button. You can also load your saved data by tapping on the load button. However, be careful not to uninstall the game or switch to another device, as this may cause you to lose your progress and data.</li>
- </ul></p>
- <br />
- <br />

spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md DELETED
@@ -1,200 +0,0 @@
-
- <h1>Download ibis Paint X Mod APK: A Versatile Drawing App for Android</h1>
- <p>If you are looking for a drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features, then you should try <strong>ibis Paint X</strong>. And if you want to enjoy all the premium features of this app for free, then you should download <strong>ibis Paint X Mod APK</strong>. In this article, we will tell you what is ibis Paint X, what is ibis Paint X Mod APK, how to download and install it, and what are some alternatives to it.</p>
- <h2>What is ibis Paint X?</h2>
- <p><strong>ibis Paint X</strong> is a popular and versatile drawing app downloaded more than 280 million times in total as a series, which provides over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, recording drawing processes, stroke stabilization feature, various ruler features such as radial line rulers or symmetry rulers, and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".</p>
- <h2>download ibis paint x mod apk</h2><br /><p><b><b>Download File</b> &#10004; <a href="https://jinyurl.com/2uNP5R">https://jinyurl.com/2uNP5R</a></b></p><br /><br />
- <h3>Features of ibis Paint X</h3>
- <p>Some of the features of ibis Paint X are:</p>
- <ul>
- <li><strong>Brushes:</strong> You can choose from over 15000 kinds of brushes including dip pens, felt tip pens, digital pens, air brushes, fan brushes, flat brushes, pencils, oil brushes, charcoal brushes, crayons and stamps. You can also adjust various brush parameters such as starting/ending thickness, starting/ending opacity, and initial/final brush angle. You can also use quick sliders to quickly adjust brush thickness and opacity. You can also see real time brush previews.</li>
- <li><strong>Layers:</strong> You can add as many layers as you need with no limit. You can also set layer parameters such as layer opacity, alpha blending, adding, subtracting, and multiplying. You can also use a handy clipping feature for clipping images. You can also use various layer commands such as layer duplication, import from the photo library, horizontal inversion, vertical inversion, layer rotation, layer moving, and zooming in/out. You can also set layer names to distinguish different layers.</li>
- <li><strong>Materials:</strong> You can access over 15000 materials in both color and monotone, including traditional Japanese backdrops, patterns, background tones, speech bubbles, line effects, and more.</li>
- <li><strong>Fonts:</strong> You can use over 1000 fonts for adding text to your drawings. You can also adjust font size, color, alignment, spacing, rotation, and more.</li>
- <li><strong>Filters:</strong> You can apply over 80 filters to your drawings such as blurring, color balance, gradation or ones generating anime-like or manga-like backgrounds from imported images.</li>
- <li><strong>Screentones:</strong> You can use over 46 screentones for creating manga-style drawings. You can also adjust screentone size, angle, density, and more.</li>
- <li><strong>Blending modes:</strong> You can use over 27 blending modes for creating various effects on your drawings, such as multiply, screen, overlay, darken, lighten, color dodge, color burn, hard light, soft light, difference, exclusion, hue, saturation, color, and luminosity.</li>
- <li><strong>Rulers:</strong> You can use various ruler features such as radial line rulers or symmetry rulers to assist your drawing. You can also draw a line that follows the direction of the line drawn by you beforehand by using a forced entry/exit ruler.</li>
- <li><strong>Clipping mask features:</strong> You can clip multiple layers with a single layer. You can also invert the clipping mask and exclude the clipped area.</li>
- <li><strong>Recording drawing processes:</strong> You can record your drawing process and save it as a video. You can also export your video in high resolution and share it on social media or the community site "ibispaint.com".</li>
- <li><strong>Stroke stabilization feature:</strong> You can stabilize your strokes by using a stabilization slider. The smoother the stroke will be if the value is larger.</li>
- <li><strong>Dark mode:</strong> You can switch to dark mode to reduce eye strain and save battery life.</li>
- <li><strong>Prime membership:</strong> You can become a prime member by paying a monthly fee and enjoy the following benefits: no ads in the app, access to prime materials, access to prime fonts, tone curve filter, gradation map filter, cloud filter, and more.</li>
- </ul>
- <h3>Benefits of ibis Paint X</h3>
- <p>Some of the benefits of ibis Paint X are:</p>
- <ul>
- <li><strong>Easy to use:</strong> ibis Paint X has a user-friendly interface that allows you to easily access all the features and tools. You can also customize your toolbar and shortcut settings according to your preference.</li>
- <li><strong>Creative and fun:</strong> ibis Paint X lets you unleash your creativity and have fun with drawing. You can create various kinds of art and comics with different styles and effects. You can also learn from other users' drawing techniques by watching their videos or browsing their artworks on the community site "ibispaint.com".</li>
- <li><strong>Affordable and reliable:</strong> ibis Paint X is free to download and use. You can also enjoy most of the features without paying anything. If you want to support the developers and get more features, you can become a prime member for a reasonable price. ibis Paint X is also regularly updated and improved to provide you with the best drawing experience.</li>
- </ul>
- <h2>What is ibis Paint X Mod APK?</h2>
- <p><strong>ibis Paint X Mod APK</strong> is a modified version of ibis Paint X that allows you to enjoy all the premium features of the app for free. You don't need to pay for the prime membership or watch ads to access the prime materials, fonts, filters, and more. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.</p>
- <h3>Features of ibis Paint X Mod APK</h3>
- <p>Some of the features of ibis Paint X Mod APK are:</p>
- <ul>
- <li><strong>All premium features unlocked:</strong> You can access all the premium features of ibis Paint X without paying anything. You can use over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, tone curve filter, gradation map filter, cloud filter, and more.</li>
- <li><strong>No ads:</strong> You don't need to watch ads to use the app or access the prime materials and fonts. You can enjoy a smooth and uninterrupted drawing experience.</li>
- <li><strong>No watermark:</strong> You don't need to worry about the watermark on your videos. You can export your videos without any watermark and share them with your friends or followers.</li>
- <li><strong>High resolution export:</strong> You can export your videos in high resolution up to 4K. You can also adjust the frame rate and quality of your videos according to your preference.</li>
- </ul>
- <h3>Benefits of ibis Paint X Mod APK</h3>
- <p>Some of the benefits of ibis Paint X Mod APK are:</p>
- <ul>
- <li><strong>Saves money:</strong> You don't need to spend money on the prime membership or buy any in-app purchases. You can get all the premium features for free with ibis Paint X Mod APK.</li>
- <li><strong>Saves time:</strong> You don't need to waste time on watching ads or waiting for them to finish. You can use the app without any interruption or delay.</li>
- <li><strong>Saves storage space:</strong> You don't need to download any additional files or updates to use ibis Paint X Mod APK. You can download the app once and enjoy it forever.</li>
- <li><strong>Enhances creativity:</strong> You can use all the features and tools of ibis Paint X without any limitation or restriction. You can experiment with different brushes, materials, fonts, filters, screentones, blending modes, and more. You can create amazing digital art and comics with ibis Paint X Mod APK.</li>
- </ul>
- <h2>How to Download and Install ibis Paint X Mod APK?</h2>
- <p>If you want to download and install ibis Paint X Mod APK on your Android device, you need to follow these simple steps:</p>
- <h3>Steps to Download and Install ibis Paint X Mod APK</h3>
- <ol>
- <li><strong>Download the APK file:</strong> You need to download the APK file of ibis Paint X Mod APK from a trusted source. You can use the link below to download the latest version of ibis Paint X Mod APK.</li>
- <li><strong>Enable unknown sources:</strong> You need to enable unknown sources on your device to install the APK file. You can do this by going to Settings > Security > Unknown Sources and turning it on.</li>
- <li><strong>Install the APK file:</strong> You need to locate the downloaded APK file on your device and tap on it to install it. You may need to grant some permissions to the app during the installation process.</li>
- <li><strong>Launch the app:</strong> You need to launch the app by tapping on its icon on your home screen or app drawer. You can now enjoy all the premium features of ibis Paint X for free.</li>
- </ol>
- <h3>Tips to Use ibis Paint X Mod APK</h3>
- <p>Some of the tips to use ibis Paint X Mod APK are:</p>
- <ul>
- <li><strong>Watch tutorials:</strong> If you are new to ibis Paint X or want to learn more about its features and tools, you can watch tutorials on the app or on YouTube. You can also visit the official website of ibis Paint X for more information and support.</li>
- <li><strong>Join the community:</strong> If you want to share your artworks, get feedback, or learn from other users, you can join the community site "ibispaint.com". You can also follow ibis Paint X on social media platforms such as Facebook, Twitter, Instagram, and TikTok.</li>
- <li><strong>Backup your data:</strong> If you want to save your drawings, videos, materials, fonts, and settings, you can backup your data on the cloud or on your device. You can do this by going to Settings > Backup/Restore > Backup Data or Restore Data.</li>
- </ul>
- <h2>Alternatives to ibis Paint X Mod APK</h2>
- <p>If you are looking for some alternatives to ibis Paint X Mod APK, you can try these apps:</p>
- <p>download ibis paint x mod apk premium unlocked<br />
- download ibis paint x mod apk latest version<br />
- download ibis paint x mod apk for android<br />
- download ibis paint x mod apk free<br />
- download ibis paint x mod apk no ads<br />
- download ibis paint x mod apk happymod<br />
- download ibis paint x mod apk 10.1.3<br />
- download ibis paint x mod apk unlimited brushes<br />
- download ibis paint x mod apk pro<br />
- download ibis paint x mod apk full version<br />
- download ibis paint x mod apk with prime membership<br />
- download ibis paint x mod apk 2023<br />
- download ibis paint x mod apk for pc<br />
- download ibis paint x mod apk revdl<br />
- download ibis paint x mod apk rexdl<br />
- download ibis paint x mod apk 10.0.10<br />
- download ibis paint x mod apk without watermark<br />
- download ibis paint x mod apk for ios<br />
- download ibis paint x mod apk with all features<br />
- download ibis paint x mod apk 9.1.0<br />
- download ibis paint x mod apk 8.1.1<br />
- download ibis paint x mod apk 7.1.0<br />
- download ibis paint x mod apk 6.4.0<br />
- download ibis paint x mod apk 5.6.1<br />
- download ibis paint x mod apk 4.3.2<br />
- how to download ibis paint x mod apk<br />
- where to download ibis paint x mod apk<br />
- best site to download ibis paint x mod apk<br />
- safe way to download ibis paint x mod apk<br />
- easy steps to download ibis paint x mod apk<br />
- benefits of downloading ibis paint x mod apk<br />
- features of downloading ibis paint x mod apk<br />
- tips and tricks for downloading ibis paint x mod apk<br />
- reviews of downloading ibis paint x mod apk<br />
- alternatives to downloading ibis paint x mod apk<br />
- problems with downloading ibis paint x mod apk<br />
- solutions for downloading ibis paint x mod apk<br />
- guide for downloading ibis paint x mod apk<br />
- tutorial for downloading ibis paint x mod apk<br />
- video for downloading ibis paint x mod apk</p>
- <h3>List of Alternatives to ibis Paint X Mod APK</h3>
- <table>
- <tr>
- <th>Name</th>
- <th>Description</th>
- <th>Features</th>
- </tr>
- <tr>
- <td><a href="">MediBang Paint</a></td>
- <td>A lightweight digital painting and comic creation app that comes with over 1000 brushes, tones, backgrounds, textures, fonts and more.</td>
- <td>- Cloud saving and sharing - Comic creation tools - Cross-platform compatibility - Customizable shortcuts - Collaboration feature - Ads-free</td>
- </tr>
- <tr>
- <td><a href="">Procreate Pocket</a></td>
- <td>A powerful sketching, painting and illustration app that offers a complete set of artistic tools for creating stunning artworks on your iPhone.</td>
- <td>- 250+ brushes - Layer system - Advanced color picker - Time-lapse recording - Animation assist - Pressure sensitivity - No ads or in-app purchases</td>
- </tr>
- <tr>
- <td><a href="">SketchBook</a></td>
- <td>A professional-grade drawing and painting app that provides a natural drawing experience with over 170 customizable brushes, rulers, guides, and more.</td>
- <td>- Layer editor - Scan sketch feature - Predictive stroke - Copic color library - Symmetry tools - Distort transform - No ads or in-app purchases</td>
- </tr>
- <tr>
- <td><a href="">Clip Studio Paint</a></td>
- <td>A versatile drawing and painting app that is ideal for creating comics, manga, illustrations, animations, and more.</td>
- <td>- 1000+ brushes - Vector layers - 3D models and materials - Frame-by-frame animation - AI colorization - Text tools - No ads or in-app purchases</td>
- </tr>
- <tr> <td><a href="">Adobe Photoshop Sketch</a></td>
- <td>A simple and expressive drawing app that lets you create realistic sketches and paintings with various brushes, pencils, pens, markers, and more.</td>
- <td>- Layer support - Custom brushes - Adobe Creative Cloud integration - Perspective grids - Shape stencils - No ads or in-app purchases</td>
- </tr>
- </table>
- <h3>Comparison of Alternatives to ibis Paint X Mod APK</h3>
- <p>Here is a comparison of the alternatives to ibis Paint X Mod APK based on some criteria:</p>
- <table>
- <tr>
- <th>Criteria</th>
- <th>MediBang Paint</th>
- <th>Procreate Pocket</th>
- <th>SketchBook</th>
- <th>Clip Studio Paint</th>
- <th>Adobe Photoshop Sketch</th>
- </tr>
- <tr>
- <td>Price</td>
- <td>Free</td>
- <td>$4.99</td>
- <td>Free</td>
- <td>$0.99/month or $9.49/year</td>
- <td>Free</td>
- </tr>
- <tr>
- <td>Rating</td>
- <td>4.5/5.0</td>
- <td>4.7/5.0</td>
- <td>4.3/5.0</td>
- <td>4.6/5.0</td>
- <td>4.2/5.0</td>
- </tr>
- <tr>
- <td>Downloads</td>
- <td>10M+</td>
- <td>1M+</td>
- <td>10M+</td>
- <td>10M+</td>
- <td>10M+</td>
- </tr>
- <tr>
- <td>User reviews</td>
- <td>"Great app for beginners and professionals alike. It has a lot of features and tools that are easy to use and customize."</td>
- <td>"Best drawing app ever. It has everything you need to create amazing artworks on your phone."</td>
- <td>"Very smooth and responsive app. It has a lot of brushes and options to choose from. It also works well with a stylus."</td>
- <td>"The best app for manga and comic creation. It has a lot of features and functions that are very useful and convenient."</td>
- <td>"A simple and fun app to sketch and paint. It has a nice interface and a good selection of brushes."</td> </tr>
- </table>
- <h2>Conclusion</h2>
- <p>In conclusion, ibis Paint X is a versatile drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".</p>
- <p>If you want to enjoy all the premium features of ibis Paint X for free, you can download ibis Paint X Mod APK. It is a modified version of ibis Paint X that allows you to access all the prime materials, fonts, filters, and more without paying anything. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.</p>
- <p>If you are looking for some alternatives to ibis Paint X Mod APK, you can try MediBang Paint, Procreate Pocket, SketchBook, Clip Studio Paint, or Adobe Photoshop Sketch. They are all great drawing and painting apps that offer different features and tools for creating amazing artworks on your device.</p>
- <p>We hope this article has helped you to learn more about ibis Paint X, ibis Paint X Mod APK, and some alternatives to it. If you have any questions or feedback, please feel free to leave a comment below. Happy drawing!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about ibis Paint X and ibis Paint X Mod APK:</p>
- <h3>Is ibis Paint X safe to use?</h3>
- <p>Yes, ibis Paint X is safe to use. It is a legitimate app that is developed by ibis mobile inc., a Japanese company that specializes in developing apps for digital art and comics. It is also available on the Google Play Store and the App Store. However, you should be careful when downloading ibis Paint X Mod APK from third-party sources, as they may contain viruses or malware that can harm your device.</p>
- <h3>Is ibis Paint X free to use?</h3>
- <p>Yes, ibis Paint X is free to use. You can download and use the app without paying anything. However, if you want to access the prime materials, fonts, filters, and more, you need to watch ads or pay for the prime membership. Alternatively, you can download ibis Paint X Mod APK and enjoy all the premium features for free.</p>
- <h3>How do I update ibis Paint X Mod APK?</h3>
- <p>If you want to update ibis Paint X Mod APK, you need to download the latest version of the APK file from a trusted source and install it on your device. You may need to uninstall the previous version of the app before installing the new one. You should also backup your data before updating the app.</p>
- <h3>Can I use ibis Paint X on PC?</h3>
- <p>No, ibis Paint X is not available for PC. It is only compatible with Android and iOS devices. However, you can use an Android emulator such as BlueStacks or Nox Player to run ibis Paint X on your PC. You can also use a drawing tablet or a stylus to draw on your PC with ibis Paint X.</p>
- <h3>Can I use ibis Paint X offline?</h3>
- <p>Yes, you can use ibis Paint X offline. You don't need an internet connection to draw or save your artworks on your device. However, you need an internet connection to access the prime materials and fonts, share your videos or artworks on social media or the community site "ibispaint.com", or update the app.</p>
- <br />
- <br />

spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md DELETED
@@ -1,98 +0,0 @@
-
- <h1>How to Download FIFA 09 APK for Android</h1>
- <p>If you are looking for a fun and realistic football game to play on your Android device, you should try FIFA 09. This is one of the best games in the FIFA series, developed by EA Sports. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. In this article, we will tell you what FIFA 09 is, what are its features and benefits, and how to download FIFA 09 APK for Android.</p>
- <h2>What is FIFA 09 and why you should play it</h2>
- <p>FIFA 09 is a football simulation game developed by EA Sports. It was released in October 2008 for various platforms, including PC, consoles, and mobile devices. It has over 250 gameplay improvements and enhancements that make it more realistic and responsive. It has a variety of game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer.</p>
- <h2>download fifa 09 apk for android</h2><br /><p><b><b>Download Zip</b> &#10042;&#10042;&#10042; <a href="https://jinyurl.com/2uNSJh">https://jinyurl.com/2uNSJh</a></b></p><br /><br />
- <h3>FIFA 09 is a football simulation game developed by EA Sports</h3>
- <p>EA Sports is a division of Electronic Arts that specializes in sports video games. It is one of the most popular and successful game developers in the industry. EA Sports has produced many acclaimed titles, such as Madden NFL, NBA Live, NHL, and FIFA. FIFA is the flagship franchise of EA Sports, and it has been running since 1993. FIFA 09 is the 16th installment in the series, and it is considered one of the best by critics and fans alike.</p>
- <h3>FIFA 09 is a fun and exciting game for football fans and gamers alike</h3>
- <p>If you love football, you will love FIFA 09. This game lets you play as your favorite teams and players from around the world. You can choose from over 500 licensed teams and more than 30 leagues, including the Premier League, La Liga, Bundesliga, Serie A, and more. You can also create your own custom teams and players with the Ultimate Team mode. This mode allows you to collect cards of players, kits, stadiums, and other items, and use them to build your dream team.</p>
- <p>But playing FIFA 09 is not just about choosing teams and players. It is also about competing with other players online in 10 vs. 10 matches or tournaments. You can join or create your own club with your friends or other players, and play against other clubs from around the world. You can also chat with your teammates and opponents using the voice or text chat feature. Playing online is a great way to test your skills and have fun with other football enthusiasts.</p> <h2>What are the features and benefits of FIFA 09</h2>
- <p>FIFA 09 is not just a game, it is an experience. It has stunning graphics and animations that bring the game to life. It has smooth and intuitive controls that make it easy to play. It has a rich and diverse content that keeps you entertained for hours. Here are some of the features and benefits of FIFA 09 that you should know.</p>
- <h3>FIFA 09 has stunning graphics and animations that bring the game to life</h3>
- <p>One of the things that make FIFA 09 stand out is its visual quality. It uses leading-edge visuals that exploit the power of high-spec gaming devices. It features photorealistic likenesses of star players and stadiums. It has a revamped collision system that calculates speed, weight, and power when players collide. It has subtle animations that enable you to take first-time shots, volleys, and headers. It also has a dynamic weather system that affects the gameplay and atmosphere. You will feel like you are watching a real match on TV or playing on the pitch yourself.</p>
- <h3>FIFA 09 has smooth and intuitive controls that make it easy to play</h3>
- <p>Another thing that makes FIFA 09 enjoyable is its control scheme. It has a customizable control scheme that suits your preferences and device. You can choose from different options, such as buttons, gestures, or tilt. You can also adjust the sensitivity and responsiveness of the controls. You can also use a new jostle system that allows you to control the ball with more precision and skill. You can use the right analog stick to shield the ball, push off defenders, or perform tricks. You can also use the left trigger to sprint, the right trigger to slow down, or the shoulder buttons to switch players or tactics.</p>
- <h3>FIFA 09 has a rich and diverse content that keeps you entertained for hours</h3>
- <p>The last thing that makes FIFA 09 amazing is its content. It has over 500 licensed teams and more than 30 leagues from around the world. You can play as any team or player you want, from Manchester United to Barcelona, from Cristiano Ronaldo to Lionel Messi. You can also play in different game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer. Each mode has its own challenges and rewards. You can also play in different minigames and challenges that test your skills and knowledge. You can play in penalty shootouts, free kicks, dribbling courses, trivia quizzes, and more.</p> <h2>How to download FIFA 09 APK for Android</h2>
- <p>Now that you know what FIFA 09 is and what it offers, you might be wondering how to download it on your Android device. Well, you can't find it on the Google Play Store, because it is an old game that is not compatible with the latest Android versions. But don't worry, there is a way to play it on your device. You just need to download FIFA 09 APK for Android.</p>
20
60
- <h3>FIFA 09 APK is a file that allows you to install the game on your Android device without using the Google Play Store</h3>
61
- <p>APK stands for Android Package Kit, and it is a file format that contains all the necessary components of an Android app. It is useful if you have a device that is not compatible with the official version or if you want to save storage space. It is also useful if you want to play the game offline or with mods and cheats.</p>
62
- <h3>To download FIFA 09 APK for Android, you need to follow these steps:</h3>
63
- <p>Downloading FIFA 09 APK for Android is not difficult, but you need to be careful and follow some precautions. Here are the steps you need to take:</p>
64
- <ol>
65
- <li>Find a reliable source that offers the APK file for free. You can use one of these links: . Make sure you scan the file for viruses and malware before downloading it.</li>
66
- <li>Download the APK file to your device or transfer it from your PC using a USB cable or Bluetooth connection.</li>
67
- <li>Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.</li>
68
- <li>Locate the APK file on your device using a file manager app or your browser's downloads folder. Tap on it to start the installation process.</li>
69
- <li>Follow the instructions on the screen to complete the installation. You may need to grant some permissions or accept some terms and conditions.</li>
70
- <li>Launch the game from your app drawer or home screen and enjoy playing FIFA 09 on your Android device.</li>
71
- </ol>
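Before transferring the file in step 2, a quick local sanity check can catch a corrupted download. An APK is a ZIP archive, so a rough Python sketch (the helper name is invented for illustration; this is not a signature or safety check) can verify the container without any Android tooling:

```python
import zipfile


def looks_like_valid_apk(path: str) -> bool:
    """Rough check: a well-formed APK is a ZIP archive containing an
    AndroidManifest.xml entry. This does NOT verify signatures or safety."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as apk:
        return "AndroidManifest.xml" in apk.namelist()
```

A file that fails this check was almost certainly corrupted in transit and should be re-downloaded rather than installed.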
72
- <h2>Conclusion</h2>
73
- <p>FIFA 09 is one of the best football games ever made, and you can play it on your Android device with FIFA 09 APK. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. You can play as your favorite teams and players, create your own custom teams and players, compete with other players online, and more. You just need to follow some simple steps to download and install the game on your device.</p>
74
- <p>Here are some tips or recommendations for playing FIFA 09 on Android:</p>
75
- <ul>
76
- <li>Make sure you have enough storage space and battery life on your device before playing the game.</li>
77
- <li>Adjust the graphics settings and sound options according to your device's performance and preferences.</li>
78
- <li>Use a Wi-Fi connection or a data plan with enough bandwidth when playing online.</li>
79
- <li>Keep your device updated with the latest software and security patches.</li>
80
- <li>Have fun and enjoy the game!</li>
81
- </ul>
82
- <p>We hope you found this article helpful and informative. If you have any feedback or questions, please feel free to leave them in the comments section below. We would love to hear from you!</p>
83
- <h2>Frequently Asked Questions</h2>
84
- <p>Here are some of the most common questions that people ask about FIFA 09 APK for Android:</p>
85
- <h3>Q: Is FIFA 09 APK for Android safe to download and install?</h3>
86
- <p>A: Yes, as long as you download it from a reliable source and scan it for viruses and malware before installing it. However, we cannot guarantee that it will work perfectly on every device or that it will not cause any issues or damage to your device. Use it at your own risk and discretion.</p>
87
- <h3>Q: Is FIFA 09 APK for Android legal to use?</h3>
88
- <p>A: That depends on where you live and what laws apply there. In some countries, downloading and using APK files from unknown sources may be considered illegal or infringing on intellectual property rights. In other countries, it may be legal or tolerated as long as you own a copy of the original game or app. We advise you to check your local laws and regulations before downloading and using FIFA 09 APK for Android.</p>
89
- <h3>Q: Is FIFA 09 APK for Android compatible with my device?</h3>
90
- <p>A: FIFA 09 APK for Android is designed to work on most Android devices that run on Android 4.0 or higher. However, some devices may not be compatible due to hardware limitations, software conflicts, or other reasons. If you encounter any problems or errors when playing the game, you may try to uninstall and reinstall the game, clear the cache and data, or contact the developer for support.</p>
91
- <h3>Q: How can I update FIFA 09 APK for Android?</h3>
92
- <p>A: FIFA 09 APK for Android is not an official version of the game, so it does not receive regular updates from EA Sports. However, some sources may offer updated versions of the APK file with new features or bug fixes. You can check the source where you downloaded the APK file for any updates or look for other sources that offer newer versions. To update the game, you need to download and install the new APK file over the old one.</p>
93
- <h3>Q: Can I play FIFA 09 APK for Android with a controller or a keyboard?</h3>
94
- <p>A: Yes, you can play FIFA 09 APK for Android with a controller or a keyboard if your device supports them. You can connect your controller or keyboard to your device via Bluetooth, USB, or OTG cable. You can also use an app like Octopus or Panda Gamepad Pro to map the buttons and keys to the game controls. However, some controllers or keyboards may not work well with the game or may cause some issues or errors.</p>
95
- <p>This is the end of the article. Thank you for reading and I hope you learned something new and useful. If you have any questions or comments, please leave them below and I will try to answer them as soon as possible. Have a great day!</p>
97
spaces/2023Liu2023/bingo/src/components/voice.tsx DELETED
@@ -1,52 +0,0 @@
1
- import React, { useEffect } from 'react'
2
- import { useSetAtom } from 'jotai'
3
- import { useBing } from '@/lib/hooks/use-bing'
4
- import Image from 'next/image'
5
- import VoiceIcon from '@/assets/images/voice.svg'
6
- import VoiceButton from './ui/voice'
7
- import { SR } from '@/lib/bots/bing/sr'
8
- import { voiceListenAtom } from '@/state'
9
-
10
- const sr = new SR(['发送', '清空', '退出'])
11
-
12
- const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick<ReturnType<typeof useBing>, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => {
13
- const setListen = useSetAtom(voiceListenAtom)
14
- useEffect(() => {
15
- if (sr.listening) return
16
- sr.transcript = !isSpeaking
17
- }, [isSpeaking])
18
-
19
- useEffect(() => {
20
- sr.onchange = (msg: string, command?: string) => {
21
- switch (command) {
22
- case '退出':
23
- sr.stop()
24
- break;
25
- case '发送':
26
- sendMessage(input)
27
- case '清空':
28
- setInput('')
29
- break;
30
- default:
31
- setInput(input + msg)
32
- }
33
- }
34
- }, [input])
35
-
36
- const switchSR = (enable: boolean = false) => {
37
- setListen(enable)
38
- if (enable) {
39
- sr.start()
40
- } else {
41
- sr.stop()
42
- }
43
- }
44
-
45
- return sr.listening ? (
46
- <VoiceButton onClick={() => switchSR(false)} />
47
- ) : (
48
- <Image alt="start voice" src={VoiceIcon} width={24} className="-mt-0.5" onClick={() => switchSR(true)} />
49
- )
50
- };
51
-
52
- export default Voice;
spaces/801artistry/RVC801/infer/lib/audio.py DELETED
@@ -1,197 +0,0 @@
1
- import librosa
2
- import numpy as np
3
- import av
4
- from io import BytesIO
5
- import ffmpeg
6
- import os
7
- import sys
8
-
9
- import random
10
- from infer.lib.csvutil import CSVutil
11
- #import csv
12
-
13
- platform_stft_mapping = {
14
- 'linux': 'stftpitchshift',
15
- 'darwin': 'stftpitchshift',
16
- 'win32': 'stftpitchshift.exe',
17
- }
18
-
19
- stft = platform_stft_mapping.get(sys.platform)
20
-
21
- def wav2(i, o, format):
22
- inp = av.open(i, 'rb')
23
- if format == "m4a": format = "mp4"
24
- out = av.open(o, 'wb', format=format)
25
- if format == "ogg": format = "libvorbis"
26
- if format == "mp4": format = "aac"
27
-
28
- ostream = out.add_stream(format)
29
-
30
- for frame in inp.decode(audio=0):
31
- for p in ostream.encode(frame): out.mux(p)
32
-
33
- for p in ostream.encode(None): out.mux(p)
34
-
35
- out.close()
36
- inp.close()
37
-
38
- def audio2(i, o, format, sr):
39
- inp = av.open(i, 'rb')
40
- out = av.open(o, 'wb', format=format)
41
- if format == "ogg": format = "libvorbis"
42
- if format == "f32le": format = "pcm_f32le"
43
-
44
- ostream = out.add_stream(format, channels=1)
45
- ostream.sample_rate = sr
46
-
47
- for frame in inp.decode(audio=0):
48
- for p in ostream.encode(frame): out.mux(p)
49
-
50
- out.close()
51
- inp.close()
52
-
53
- def load_audion(file, sr):
54
- try:
55
- file = (
56
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
57
- ) # 防止小白拷路径头尾带了空格和"和回车
58
- with open(file, "rb") as f:
59
- with BytesIO() as out:
60
- audio2(f, out, "f32le", sr)
61
- return np.frombuffer(out.getvalue(), np.float32).flatten()
62
-
63
- except AttributeError:
64
- audio = file[1] / 32768.0
65
- if len(audio.shape) == 2:
66
- audio = np.mean(audio, -1)
67
- return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
68
-
69
- except Exception as e:
70
- raise RuntimeError(f"Failed to load audio: {e}")
71
-
72
-
73
-
74
-
75
- def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0):
76
- converted = False
77
- DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting")
78
- try:
79
- # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
80
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
81
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
82
- file = (
83
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
84
- ) # 防止小白拷路径头尾带了空格和"和回车
85
- file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
86
-
87
- # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n")
88
-
89
- if (
90
- lambda DoFormant: True
91
- if DoFormant.lower() == "true"
92
- else (False if DoFormant.lower() == "false" else DoFormant)
93
- )(DoFormant):
94
- numerator = round(random.uniform(1, 4), 4)
95
- # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}")
96
- # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted))
97
-
98
- if not file.endswith(".wav"):
99
- if not os.path.isfile(f"{file_formanted}.wav"):
100
- converted = True
101
- # print(f"\nfile = {file}\n")
102
- # print(f"\nfile_formanted = {file_formanted}\n")
103
- converting = (
104
- ffmpeg.input(file_formanted, threads=0)
105
- .output(f"{file_formanted}.wav")
106
- .run(
107
- cmd=["ffmpeg", "-nostdin"],
108
- capture_stdout=True,
109
- capture_stderr=True,
110
- )
111
- )
112
- else:
113
- pass
114
-
115
- file_formanted = (
116
- f"{file_formanted}.wav"
117
- if not file_formanted.endswith(".wav")
118
- else file_formanted
119
- )
120
-
121
- print(f" · Formanting {file_formanted}...\n")
122
-
123
- os.system(
124
- '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"'
125
- % (
126
- stft,
127
- file_formanted,
128
- Quefrency,
129
- Timbre,
130
- file_formanted,
131
- str(numerator),
132
- )
133
- )
134
-
135
- print(f" · Formanted {file_formanted}!\n")
136
-
137
- # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\')
138
- # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\')
139
- # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
140
-
141
- out, _ = (
142
- ffmpeg.input(
143
- "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0
144
- )
145
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
146
- .run(
147
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
148
- )
149
- )
150
-
151
- try:
152
- os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
153
- except Exception:
154
- print("couldn't remove formanted type of file")
156
-
157
- else:
158
- out, _ = (
159
- ffmpeg.input(file, threads=0)
160
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
161
- .run(
162
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
163
- )
164
- )
165
- except Exception as e:
166
- raise RuntimeError(f"Failed to load audio: {e}")
167
-
168
- if converted:
169
- try:
170
- os.remove(file_formanted)
171
- except Exception:
172
- print("couldn't remove converted type of file")
174
- converted = False
175
-
176
- return np.frombuffer(out, np.float32).flatten()
177
-
178
-
179
- def check_audio_duration(file):
180
- try:
181
- file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
182
-
183
- probe = ffmpeg.probe(file)
184
-
185
- duration = float(probe['streams'][0]['duration'])
186
-
187
- if duration < 0.76:
188
- print(
189
- f"\n------------\n"
190
- f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
191
- f"\n------------\n\n"
192
- )
193
- return False
194
-
195
- return True
196
- except Exception as e:
197
- raise RuntimeError(f"Failed to check audio duration: {e}")
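`load_audio` above ends with `np.frombuffer(out, np.float32).flatten()` on ffmpeg's raw output. The byte layout it relies on (`-f f32le -acodec pcm_f32le -ac 1`: consecutive little-endian 32-bit floats, one per mono sample) can be sketched with only the standard library; the sample values below are made up:

```python
import struct

# f32le PCM, as emitted by `ffmpeg ... -f f32le -acodec pcm_f32le -ac 1`,
# is just consecutive little-endian 32-bit floats, one per mono sample.
samples = [0.0, 0.5, -0.5, 1.0]                       # hypothetical mono audio
raw = struct.pack("<%df" % len(samples), *samples)    # stand-in for ffmpeg stdout
decoded = list(struct.unpack("<%df" % (len(raw) // 4), raw))
```

`np.frombuffer(raw, np.float32)` performs the same reinterpretation in one step, without copying.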
spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py DELETED
@@ -1,19 +0,0 @@
1
- import numpy as np
2
- import torch
3
- from .monotonic_align.core import maximum_path_c
4
-
5
-
6
- def maximum_path(neg_cent, mask):
7
- """ Cython optimized version.
8
- neg_cent: [b, t_t, t_s]
9
- mask: [b, t_t, t_s]
10
- """
11
- device = neg_cent.device
12
- dtype = neg_cent.dtype
13
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
14
- path = np.zeros(neg_cent.shape, dtype=np.int32)
15
-
16
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
17
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
18
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
19
- return torch.from_numpy(path).to(device=device, dtype=dtype)
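`maximum_path` above defers the dynamic program to the compiled `maximum_path_c`. The same recurrence can be sketched in pure Python for a single unbatched example (far slower than the Cython kernel, and assuming `t_x <= t_y` as in the batched caller):

```python
def maximum_path_single(value, t_y, t_x, max_neg_val=-1e9):
    """Monotonic alignment search on one [t_y, t_x] score matrix (list of lists).
    `value` is modified in place, as in the Cython kernel. Returns a 0/1 path
    with exactly one marked column per row, non-decreasing across rows."""
    # Forward pass: value[y][x] becomes the best cumulative score ending at (y, x).
    for y in range(t_y):
        for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
            v_cur = max_neg_val if x == y else value[y - 1][x]
            if x == 0:
                v_prev = 0.0 if y == 0 else max_neg_val
            else:
                v_prev = value[y - 1][x - 1]
            value[y][x] = max(v_cur, v_prev) + value[y][x]

    # Backtrack from the bottom-right column, stepping left when profitable.
    path = [[0] * t_x for _ in range(t_y)]
    index = t_x - 1
    for y in range(t_y - 1, -1, -1):
        path[y][index] = 1
        if index != 0 and (index == y or value[y - 1][index] < value[y - 1][index - 1]):
            index -= 1
    return path
```

On a score matrix with a strong diagonal, the recovered path is the identity alignment.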
spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py DELETED
@@ -1,129 +0,0 @@
1
- import argparse
2
- import os
3
- import yaml
4
-
5
- global_print_hparams = True
6
- hparams = {}
7
-
8
-
9
- class Args:
10
- def __init__(self, **kwargs):
11
- for k, v in kwargs.items():
12
- self.__setattr__(k, v)
13
-
14
-
15
- def override_config(old_config: dict, new_config: dict):
16
- for k, v in new_config.items():
17
- if isinstance(v, dict) and k in old_config:
18
- override_config(old_config[k], new_config[k])
19
- else:
20
- old_config[k] = v
21
-
22
-
23
- def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True):
24
- if config == '' and exp_name == '':
25
- parser = argparse.ArgumentParser(description='')
26
- parser.add_argument('--config', type=str, default='',
27
- help='location of the data corpus')
28
- parser.add_argument('--exp_name', type=str, default='', help='exp_name')
29
- parser.add_argument('-hp', '--hparams', type=str, default='',
30
- help='location of the data corpus')
31
- parser.add_argument('--infer', action='store_true', help='infer')
32
- parser.add_argument('--validate', action='store_true', help='validate')
33
- parser.add_argument('--reset', action='store_true', help='reset hparams')
34
- parser.add_argument('--remove', action='store_true', help='remove old ckpt')
35
- parser.add_argument('--debug', action='store_true', help='debug')
36
- args, unknown = parser.parse_known_args()
37
- print("| Unknown hparams: ", unknown)
38
- else:
39
- args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
40
- infer=False, validate=False, reset=False, debug=False, remove=False)
41
- global hparams
42
- assert args.config != '' or args.exp_name != ''
43
- if args.config != '':
44
- assert os.path.exists(args.config)
45
-
46
- config_chains = []
47
- loaded_config = set()
48
-
49
- def load_config(config_fn):
50
- # deep first inheritance and avoid the second visit of one node
51
- if not os.path.exists(config_fn):
52
- return {}
53
- with open(config_fn) as f:
54
- hparams_ = yaml.safe_load(f)
55
- loaded_config.add(config_fn)
56
- if 'base_config' in hparams_:
57
- ret_hparams = {}
58
- if not isinstance(hparams_['base_config'], list):
59
- hparams_['base_config'] = [hparams_['base_config']]
60
- for c in hparams_['base_config']:
61
- if c.startswith('.'):
62
- c = f'{os.path.dirname(config_fn)}/{c}'
63
- c = os.path.normpath(c)
64
- if c not in loaded_config:
65
- override_config(ret_hparams, load_config(c))
66
- override_config(ret_hparams, hparams_)
67
- else:
68
- ret_hparams = hparams_
69
- config_chains.append(config_fn)
70
- return ret_hparams
71
-
72
- saved_hparams = {}
73
- args_work_dir = ''
74
- if args.exp_name != '':
75
- args_work_dir = f'{args.exp_name}' # modified
76
- ckpt_config_path = f'{args_work_dir}/config.yaml'
77
- if os.path.exists(ckpt_config_path):
78
- with open(ckpt_config_path) as f:
79
- saved_hparams_ = yaml.safe_load(f)
80
- if saved_hparams_ is not None:
81
- saved_hparams.update(saved_hparams_)
82
- hparams_ = {}
83
- if args.config != '':
84
- hparams_.update(load_config(args.config))
85
- if not args.reset:
86
- hparams_.update(saved_hparams)
87
- hparams_['work_dir'] = args_work_dir
88
-
89
- # Support config overriding in command line. Support list type config overriding.
90
- # Examples: --hparams="a=1,b.c=2,d=[1 1 1]"
91
- if args.hparams != "":
92
- for new_hparam in args.hparams.split(","):
93
- k, v = new_hparam.split("=")
94
- v = v.strip("\'\" ")
95
- config_node = hparams_
96
- for k_ in k.split(".")[:-1]:
97
- config_node = config_node[k_]
98
- k = k.split(".")[-1]
99
- if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]:
100
- if type(config_node[k]) == list:
101
- v = v.replace(" ", ",")
102
- config_node[k] = eval(v)
103
- else:
104
- config_node[k] = type(config_node[k])(v)
105
- if args_work_dir != '' and args.remove:
106
- answer = input("REMOVE old checkpoint? Y/N [Default: N]: ")
107
- if answer.lower() == "y":
108
- remove_file(args_work_dir)
109
- if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
110
- os.makedirs(hparams_['work_dir'], exist_ok=True)
111
- with open(ckpt_config_path, 'w') as f:
112
- yaml.safe_dump(hparams_, f)
113
-
114
- hparams_['infer'] = args.infer
115
- hparams_['debug'] = args.debug
116
- hparams_['validate'] = args.validate
117
- hparams_['exp_name'] = args.exp_name
118
- global global_print_hparams
119
- if global_hparams:
120
- hparams.clear()
121
- hparams.update(hparams_)
122
- if print_hparams and global_print_hparams and global_hparams:
123
- print('| Hparams chains: ', config_chains)
124
- print('| Hparams: ')
125
- for i, (k, v) in enumerate(sorted(hparams_.items())):
126
- print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
127
- print("")
128
- global_print_hparams = False
129
- return hparams_
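The `base_config` inheritance above hinges on the recursive merge in `override_config`: nested dicts are merged key by key, while everything else in the child overwrites the parent. Reproducing the function so the sketch is self-contained (the config keys are invented for illustration):

```python
def override_config(old_config: dict, new_config: dict):
    # Same recursive merge as in hparams.py: nested dicts merge key-by-key,
    # any other value in new_config overwrites old_config in place.
    for k, v in new_config.items():
        if isinstance(v, dict) and k in old_config:
            override_config(old_config[k], v)
        else:
            old_config[k] = v


base = {"lr": 1e-3, "model": {"hidden": 256, "layers": 4}}
child = {"model": {"hidden": 512}, "max_epochs": 100}
override_config(base, child)
# base keeps model.layers; model.hidden is overridden and max_epochs is added
```

This is why a child config only needs to list the keys it changes, not the whole tree.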
spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py DELETED
@@ -1,686 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
-
3
- import math
4
- import copy
5
-
6
- import torch
7
- import torch.nn as nn
8
- import torch.nn.functional as F
9
- from torchaudio import transforms
10
- from torchlibrosa.augmentation import SpecAugmentation
11
-
12
- from .utils import mean_with_lens, max_with_lens, \
13
- init, pack_wrapper, generate_length_mask, PositionalEncoding
14
-
15
-
16
- def init_layer(layer):
17
- """Initialize a Linear or Convolutional layer. """
18
- nn.init.xavier_uniform_(layer.weight)
19
-
20
- if hasattr(layer, 'bias'):
21
- if layer.bias is not None:
22
- layer.bias.data.fill_(0.)
23
-
24
-
25
- def init_bn(bn):
26
- """Initialize a Batchnorm layer. """
27
- bn.bias.data.fill_(0.)
28
- bn.weight.data.fill_(1.)
29
-
30
-
31
- class BaseEncoder(nn.Module):
32
-
33
- """
34
- Encode the given audio into embedding
35
- Base encoder class, cannot be called directly
36
- All encoders should inherit from this class
37
- """
38
-
39
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim):
40
- super(BaseEncoder, self).__init__()
41
- self.spec_dim = spec_dim
42
- self.fc_feat_dim = fc_feat_dim
43
- self.attn_feat_dim = attn_feat_dim
44
-
45
-
46
- def forward(self, x):
47
- #########################
48
- # an encoder first encodes audio feature into embedding, obtaining
49
- # `encoded`: {
50
- # fc_embs: [N, fc_emb_dim],
51
- # attn_embs: [N, attn_max_len, attn_emb_dim],
52
- # attn_emb_lens: [N,]
53
- # }
54
- #########################
55
- raise NotImplementedError
56
-
57
-
58
- class Block2D(nn.Module):
59
-
60
- def __init__(self, cin, cout, kernel_size=3, padding=1):
61
- super().__init__()
62
- self.block = nn.Sequential(
63
- nn.BatchNorm2d(cin),
64
- nn.Conv2d(cin,
65
- cout,
66
- kernel_size=kernel_size,
67
- padding=padding,
68
- bias=False),
69
- nn.LeakyReLU(inplace=True, negative_slope=0.1))
70
-
71
- def forward(self, x):
72
- return self.block(x)
73
-
74
-
75
- class LinearSoftPool(nn.Module):
76
- """LinearSoftPool
77
- Linear softmax, takes logits and returns a probability, near to the actual maximum value.
78
- Taken from the paper:
79
- A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling
80
- https://arxiv.org/abs/1810.09050
81
- """
82
- def __init__(self, pooldim=1):
83
- super().__init__()
84
- self.pooldim = pooldim
85
-
86
- def forward(self, logits, time_decision):
87
- return (time_decision**2).sum(self.pooldim) / time_decision.sum(
88
- self.pooldim)
89
-
90
-
91
- class MeanPool(nn.Module):
92
-
93
- def __init__(self, pooldim=1):
94
- super().__init__()
95
- self.pooldim = pooldim
96
-
97
- def forward(self, logits, decision):
98
- return torch.mean(decision, dim=self.pooldim)
99
-
100
-
101
- class AttentionPool(nn.Module):
102
- """docstring for AttentionPool"""
103
- def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs):
104
- super().__init__()
105
- self.inputdim = inputdim
106
- self.outputdim = outputdim
107
- self.pooldim = pooldim
108
- self.transform = nn.Linear(inputdim, outputdim)
109
- self.activ = nn.Softmax(dim=self.pooldim)
110
- self.eps = 1e-7
111
-
112
- def forward(self, logits, decision):
113
- # Input is (B, T, D)
114
- # B, T, D
115
- w = self.activ(torch.clamp(self.transform(logits), -15, 15))
116
- detect = (decision * w).sum(
117
- self.pooldim) / (w.sum(self.pooldim) + self.eps)
118
- # B, T, D
119
- return detect
120
-
121
-
122
- class MMPool(nn.Module):
123
-
124
- def __init__(self, dims):
125
- super().__init__()
126
- self.avgpool = nn.AvgPool2d(dims)
127
- self.maxpool = nn.MaxPool2d(dims)
128
-
129
- def forward(self, x):
130
- return self.avgpool(x) + self.maxpool(x)
131
-
132
-
133
- def parse_poolingfunction(poolingfunction_name='mean', **kwargs):
134
- """parse_poolingfunction
135
- A heler function to parse any temporal pooling
136
- Pooling is done on dimension 1
137
- :param poolingfunction_name:
138
- :param **kwargs:
139
- """
140
- poolingfunction_name = poolingfunction_name.lower()
141
- if poolingfunction_name == 'mean':
142
- return MeanPool(pooldim=1)
143
- elif poolingfunction_name == 'linear':
144
- return LinearSoftPool(pooldim=1)
145
- elif poolingfunction_name == 'attention':
146
- return AttentionPool(inputdim=kwargs['inputdim'],
147
- outputdim=kwargs['outputdim'])
148
-
149
-
150
- def embedding_pooling(x, lens, pooling="mean"):
151
- if pooling == "max":
152
- fc_embs = max_with_lens(x, lens)
153
- elif pooling == "mean":
154
- fc_embs = mean_with_lens(x, lens)
155
- elif pooling == "mean+max":
156
- x_mean = mean_with_lens(x, lens)
157
- x_max = max_with_lens(x, lens)
158
- fc_embs = x_mean + x_max
159
- elif pooling == "last":
160
- indices = (lens - 1).reshape(-1, 1, 1).repeat(1, 1, x.size(-1))
161
- # indices: [N, 1, hidden]
162
- fc_embs = torch.gather(x, 1, indices).squeeze(1)
163
- else:
164
- raise Exception(f"pooling method {pooling} not support")
165
- return fc_embs
166
-
167
-
168
- class Cdur5Encoder(BaseEncoder):
169
-
170
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"):
171
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
172
- self.pooling = pooling
173
- self.features = nn.Sequential(
174
- Block2D(1, 32),
175
- nn.LPPool2d(4, (2, 4)),
176
- Block2D(32, 128),
177
- Block2D(128, 128),
178
- nn.LPPool2d(4, (2, 4)),
179
- Block2D(128, 128),
180
- Block2D(128, 128),
181
- nn.LPPool2d(4, (1, 4)),
182
- nn.Dropout(0.3),
183
- )
184
- with torch.no_grad():
185
- rnn_input_dim = self.features(
186
- torch.randn(1, 1, 500, spec_dim)).shape
187
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
188
-
189
- self.gru = nn.GRU(rnn_input_dim,
190
- 128,
191
- bidirectional=True,
192
- batch_first=True)
193
- self.apply(init)
194
-
195
- def forward(self, input_dict):
196
- x = input_dict["spec"]
197
- lens = input_dict["spec_len"]
198
- if "upsample" not in input_dict:
199
- input_dict["upsample"] = False
200
- lens = torch.as_tensor(copy.deepcopy(lens))
201
- N, T, _ = x.shape
202
- x = x.unsqueeze(1)
203
- x = self.features(x)
204
- x = x.transpose(1, 2).contiguous().flatten(-2)
205
- x, _ = self.gru(x)
206
- if input_dict["upsample"]:
207
- x = nn.functional.interpolate(
208
- x.transpose(1, 2),
209
- T,
210
- mode='linear',
211
- align_corners=False).transpose(1, 2)
212
- else:
213
- lens //= 4
214
- attn_emb = x
215
- fc_emb = embedding_pooling(x, lens, self.pooling)
216
- return {
217
- "attn_emb": attn_emb,
218
- "fc_emb": fc_emb,
219
- "attn_emb_len": lens
220
- }
221
-
222
-
223
- def conv_conv_block(in_channel, out_channel):
224
- return nn.Sequential(
225
- nn.Conv2d(in_channel,
226
- out_channel,
227
- kernel_size=3,
228
- bias=False,
229
- padding=1),
230
- nn.BatchNorm2d(out_channel),
231
- nn.ReLU(True),
232
- nn.Conv2d(out_channel,
233
- out_channel,
234
- kernel_size=3,
235
- bias=False,
236
- padding=1),
237
- nn.BatchNorm2d(out_channel),
238
- nn.ReLU(True)
239
- )
240
-
241
-
242
- class Cdur8Encoder(BaseEncoder):
243
-
244
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"):
245
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
246
- self.pooling = pooling
247
- self.features = nn.Sequential(
248
- conv_conv_block(1, 64),
249
- MMPool((2, 2)),
250
- nn.Dropout(0.2, True),
251
- conv_conv_block(64, 128),
252
- MMPool((2, 2)),
253
- nn.Dropout(0.2, True),
254
- conv_conv_block(128, 256),
255
- MMPool((1, 2)),
256
- nn.Dropout(0.2, True),
257
- conv_conv_block(256, 512),
258
- MMPool((1, 2)),
259
- nn.Dropout(0.2, True),
260
- nn.AdaptiveAvgPool2d((None, 1)),
261
- )
262
- self.init_bn = nn.BatchNorm2d(spec_dim)
263
- self.embedding = nn.Linear(512, 512)
264
-         self.gru = nn.GRU(512, 256, bidirectional=True, batch_first=True)
-         self.apply(init)
- 
-     def forward(self, input_dict):
-         x = input_dict["spec"]
-         lens = input_dict["spec_len"]
-         lens = torch.as_tensor(copy.deepcopy(lens))
-         x = x.unsqueeze(1)  # B x 1 x T x D
-         x = x.transpose(1, 3)
-         x = self.init_bn(x)
-         x = x.transpose(1, 3)
-         x = self.features(x)
-         x = x.transpose(1, 2).contiguous().flatten(-2)
-         x = F.dropout(x, p=0.5, training=self.training)
-         x = F.relu_(self.embedding(x))
-         x, _ = self.gru(x)
-         attn_emb = x
-         lens //= 4
-         fc_emb = embedding_pooling(x, lens, self.pooling)
-         return {
-             "attn_emb": attn_emb,
-             "fc_emb": fc_emb,
-             "attn_emb_len": lens
-         }
- 
- 
- class Cnn10Encoder(BaseEncoder):
- 
-     def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim):
-         super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
-         self.features = nn.Sequential(
-             conv_conv_block(1, 64),
-             nn.AvgPool2d((2, 2)),
-             nn.Dropout(0.2, inplace=True),
-             conv_conv_block(64, 128),
-             nn.AvgPool2d((2, 2)),
-             nn.Dropout(0.2, inplace=True),
-             conv_conv_block(128, 256),
-             nn.AvgPool2d((2, 2)),
-             nn.Dropout(0.2, inplace=True),
-             conv_conv_block(256, 512),
-             nn.AvgPool2d((2, 2)),
-             nn.Dropout(0.2, inplace=True),
-             nn.AdaptiveAvgPool2d((None, 1)),
-         )
-         self.init_bn = nn.BatchNorm2d(spec_dim)
-         self.embedding = nn.Linear(512, 512)
-         self.apply(init)
- 
-     def forward(self, input_dict):
-         x = input_dict["spec"]
-         lens = input_dict["spec_len"]
-         lens = torch.as_tensor(copy.deepcopy(lens))
-         x = x.unsqueeze(1)  # [N, 1, T, D]
-         x = x.transpose(1, 3)
-         x = self.init_bn(x)
-         x = x.transpose(1, 3)
-         x = self.features(x)  # [N, 512, T/16, 1]
-         x = x.transpose(1, 2).contiguous().flatten(-2)  # [N, T/16, 512]
-         attn_emb = x
-         lens //= 16
-         fc_emb = embedding_pooling(x, lens, "mean+max")
-         fc_emb = F.dropout(fc_emb, p=0.5, training=self.training)
-         fc_emb = self.embedding(fc_emb)
-         fc_emb = F.relu_(fc_emb)
-         return {
-             "attn_emb": attn_emb,
-             "fc_emb": fc_emb,
-             "attn_emb_len": lens
-         }
- 
- 
- class ConvBlock(nn.Module):
- 
-     def __init__(self, in_channels, out_channels):
-         super(ConvBlock, self).__init__()
-         self.conv1 = nn.Conv2d(in_channels=in_channels,
-                                out_channels=out_channels,
-                                kernel_size=(3, 3), stride=(1, 1),
-                                padding=(1, 1), bias=False)
-         self.conv2 = nn.Conv2d(in_channels=out_channels,
-                                out_channels=out_channels,
-                                kernel_size=(3, 3), stride=(1, 1),
-                                padding=(1, 1), bias=False)
-         self.bn1 = nn.BatchNorm2d(out_channels)
-         self.bn2 = nn.BatchNorm2d(out_channels)
-         self.init_weight()
- 
-     def init_weight(self):
-         init_layer(self.conv1)
-         init_layer(self.conv2)
-         init_bn(self.bn1)
-         init_bn(self.bn2)
- 
-     def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-         x = input
-         x = F.relu_(self.bn1(self.conv1(x)))
-         x = F.relu_(self.bn2(self.conv2(x)))
-         if pool_type == 'max':
-             x = F.max_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg':
-             x = F.avg_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg+max':
-             x1 = F.avg_pool2d(x, kernel_size=pool_size)
-             x2 = F.max_pool2d(x, kernel_size=pool_size)
-             x = x1 + x2
-         else:
-             raise ValueError(f"Unsupported pool_type: {pool_type}")
-         return x
- 
- 
- class Cnn14Encoder(nn.Module):
- 
-     def __init__(self, sample_rate=32000):
-         super().__init__()
-         sr_to_fmax = {
-             32000: 14000,
-             16000: 8000
-         }
-         # Logmel spectrogram extractor
-         self.melspec_extractor = transforms.MelSpectrogram(
-             sample_rate=sample_rate,
-             n_fft=32 * sample_rate // 1000,
-             win_length=32 * sample_rate // 1000,
-             hop_length=10 * sample_rate // 1000,
-             f_min=50,
-             f_max=sr_to_fmax[sample_rate],
-             n_mels=64,
-             norm="slaney",
-             mel_scale="slaney"
-         )
-         self.hop_length = 10 * sample_rate // 1000
-         self.db_transform = transforms.AmplitudeToDB()
-         # Spec augmenter
-         self.spec_augmenter = SpecAugmentation(time_drop_width=64,
-             time_stripes_num=2, freq_drop_width=8, freq_stripes_num=2)
- 
-         self.bn0 = nn.BatchNorm2d(64)
- 
-         self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
-         self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
-         self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
-         self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
-         self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
-         self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
- 
-         self.downsample_ratio = 32
- 
-         self.fc1 = nn.Linear(2048, 2048, bias=True)
- 
-         self.init_weight()
- 
-     def init_weight(self):
-         init_bn(self.bn0)
-         init_layer(self.fc1)
- 
-     def load_pretrained(self, pretrained):
-         checkpoint = torch.load(pretrained, map_location="cpu")
- 
-         if "model" in checkpoint:
-             state_keys = checkpoint["model"].keys()
-             backbone = False
-             for key in state_keys:
-                 if key.startswith("backbone."):
-                     backbone = True
-                     break
- 
-             if backbone:  # COLA
-                 state_dict = {}
-                 for key, value in checkpoint["model"].items():
-                     if key.startswith("backbone."):
-                         model_key = key.replace("backbone.", "")
-                         state_dict[model_key] = value
-             else:  # PANNs
-                 state_dict = checkpoint["model"]
-         elif "state_dict" in checkpoint:  # CLAP
-             state_dict = checkpoint["state_dict"]
-             state_dict_keys = list(filter(
-                 lambda x: "audio_encoder" in x, state_dict.keys()))
-             state_dict = {
-                 key.replace('audio_encoder.', ''): state_dict[key]
-                 for key in state_dict_keys
-             }
-         else:
-             raise Exception("Unknown checkpoint format")
- 
-         model_dict = self.state_dict()
-         pretrained_dict = {
-             k: v for k, v in state_dict.items() if (k in model_dict) and (
-                 model_dict[k].shape == v.shape)
-         }
-         model_dict.update(pretrained_dict)
-         self.load_state_dict(model_dict, strict=True)
- 
-     def forward(self, input_dict):
-         """Input: (batch_size, n_samples)"""
-         waveform = input_dict["wav"]
-         wave_length = input_dict["wav_len"]
-         specaug = input_dict["specaug"]
-         x = self.melspec_extractor(waveform)
-         x = self.db_transform(x)  # (batch_size, mel_bins, time_steps)
-         x = x.transpose(1, 2)
-         x = x.unsqueeze(1)  # (batch_size, 1, time_steps, mel_bins)
- 
-         # SpecAugment
-         if self.training and specaug:
-             x = self.spec_augmenter(x)
- 
-         x = x.transpose(1, 3)
-         x = self.bn0(x)
-         x = x.transpose(1, 3)
- 
-         x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = torch.mean(x, dim=3)
-         attn_emb = x.transpose(1, 2)
- 
-         wave_length = torch.as_tensor(wave_length)
-         feat_length = torch.div(wave_length, self.hop_length,
-                                 rounding_mode="floor") + 1
-         feat_length = torch.div(feat_length, self.downsample_ratio,
-                                 rounding_mode="floor")
-         x_max = max_with_lens(attn_emb, feat_length)
-         x_mean = mean_with_lens(attn_emb, feat_length)
-         x = x_max + x_mean
-         x = F.dropout(x, p=0.5, training=self.training)
-         x = F.relu_(self.fc1(x))
-         fc_emb = F.dropout(x, p=0.5, training=self.training)
- 
-         output_dict = {
-             'fc_emb': fc_emb,
-             'attn_emb': attn_emb,
-             'attn_emb_len': feat_length
-         }
- 
-         return output_dict
- 
- 
- class RnnEncoder(BaseEncoder):
- 
-     def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim,
-                  pooling="mean", **kwargs):
-         super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
-         self.pooling = pooling
-         self.hidden_size = kwargs.get('hidden_size', 512)
-         self.bidirectional = kwargs.get('bidirectional', False)
-         self.num_layers = kwargs.get('num_layers', 1)
-         self.dropout = kwargs.get('dropout', 0.2)
-         self.rnn_type = kwargs.get('rnn_type', "GRU")
-         self.in_bn = kwargs.get('in_bn', False)
-         self.embed_dim = self.hidden_size * (self.bidirectional + 1)
-         self.network = getattr(nn, self.rnn_type)(
-             attn_feat_dim,
-             self.hidden_size,
-             num_layers=self.num_layers,
-             bidirectional=self.bidirectional,
-             dropout=self.dropout,
-             batch_first=True)
-         if self.in_bn:
-             self.bn = nn.BatchNorm1d(self.embed_dim)
-         self.apply(init)
- 
-     def forward(self, input_dict):
-         x = input_dict["attn"]
-         lens = input_dict["attn_len"]
-         lens = torch.as_tensor(lens)
-         # x: [N, T, E]
-         if self.in_bn:
-             x = pack_wrapper(self.bn, x, lens)
-         out = pack_wrapper(self.network, x, lens)
-         # out: [N, T, hidden]
-         attn_emb = out
-         fc_emb = embedding_pooling(out, lens, self.pooling)
-         return {
-             "attn_emb": attn_emb,
-             "fc_emb": fc_emb,
-             "attn_emb_len": lens
-         }
- 
- 
- class Cnn14RnnEncoder(nn.Module):
- 
-     def __init__(self, sample_rate=32000, pretrained=None,
-                  freeze_cnn=False, freeze_cnn_bn=False,
-                  pooling="mean", **kwargs):
-         super().__init__()
-         self.cnn = Cnn14Encoder(sample_rate)
-         self.rnn = RnnEncoder(64, 2048, 2048, pooling, **kwargs)
-         if pretrained is not None:
-             self.cnn.load_pretrained(pretrained)
-         if freeze_cnn:
-             assert pretrained is not None, "cnn is not pretrained but frozen"
-             for param in self.cnn.parameters():
-                 param.requires_grad = False
-         self.freeze_cnn_bn = freeze_cnn_bn
- 
-     def train(self, mode):
-         super().train(mode=mode)
-         if self.freeze_cnn_bn:
-             def bn_eval(module):
-                 class_name = module.__class__.__name__
-                 if class_name.find("BatchNorm") != -1:
-                     module.eval()
-             self.cnn.apply(bn_eval)
-         return self
- 
-     def forward(self, input_dict):
-         output_dict = self.cnn(input_dict)
-         output_dict["attn"] = output_dict["attn_emb"]
-         output_dict["attn_len"] = output_dict["attn_emb_len"]
-         del output_dict["attn_emb"], output_dict["attn_emb_len"]
-         output_dict = self.rnn(output_dict)
-         return output_dict
- 
- 
- class TransformerEncoder(BaseEncoder):
- 
-     def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, d_model, **kwargs):
-         super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
-         self.d_model = d_model
-         dropout = kwargs.get("dropout", 0.2)
-         self.nhead = kwargs.get("nhead", self.d_model // 64)
-         self.nlayers = kwargs.get("nlayers", 2)
-         self.dim_feedforward = kwargs.get("dim_feedforward", self.d_model * 4)
- 
-         self.attn_proj = nn.Sequential(
-             nn.Linear(attn_feat_dim, self.d_model),
-             nn.ReLU(),
-             nn.Dropout(dropout),
-             nn.LayerNorm(self.d_model)
-         )
-         layer = nn.TransformerEncoderLayer(d_model=self.d_model,
-                                            nhead=self.nhead,
-                                            dim_feedforward=self.dim_feedforward,
-                                            dropout=dropout)
-         self.model = nn.TransformerEncoder(layer, self.nlayers)
-         self.cls_token = nn.Parameter(torch.zeros(d_model))
-         self.init_params()
- 
-     def init_params(self):
-         for p in self.parameters():
-             if p.dim() > 1:
-                 nn.init.xavier_uniform_(p)
- 
-     def forward(self, input_dict):
-         attn_feat = input_dict["attn"]
-         attn_feat_len = input_dict["attn_len"]
-         attn_feat_len = torch.as_tensor(attn_feat_len)
- 
-         attn_feat = self.attn_proj(attn_feat)  # [bs, T, d_model]
- 
-         cls_emb = self.cls_token.reshape(1, 1, self.d_model).repeat(
-             attn_feat.size(0), 1, 1)
-         attn_feat = torch.cat((cls_emb, attn_feat), dim=1)
-         attn_feat = attn_feat.transpose(0, 1)
- 
-         attn_feat_len += 1
-         src_key_padding_mask = ~generate_length_mask(
-             attn_feat_len, attn_feat.size(0)).to(attn_feat.device)
-         output = self.model(attn_feat, src_key_padding_mask=src_key_padding_mask)
- 
-         attn_emb = output.transpose(0, 1)
-         fc_emb = attn_emb[:, 0]
-         return {
-             "attn_emb": attn_emb,
-             "fc_emb": fc_emb,
-             "attn_emb_len": attn_feat_len
-         }
- 
- 
- class Cnn14TransformerEncoder(nn.Module):
- 
-     def __init__(self, sample_rate=32000, pretrained=None,
-                  freeze_cnn=False, freeze_cnn_bn=False,
-                  d_model=512, **kwargs):
-         super().__init__()
-         self.cnn = Cnn14Encoder(sample_rate)
-         self.trm = TransformerEncoder(64, 2048, 2048, d_model, **kwargs)
-         if pretrained is not None:
-             self.cnn.load_pretrained(pretrained)
-         if freeze_cnn:
-             assert pretrained is not None, "cnn is not pretrained but frozen"
-             for param in self.cnn.parameters():
-                 param.requires_grad = False
-         self.freeze_cnn_bn = freeze_cnn_bn
- 
-     def train(self, mode):
-         super().train(mode=mode)
-         if self.freeze_cnn_bn:
-             def bn_eval(module):
-                 class_name = module.__class__.__name__
-                 if class_name.find("BatchNorm") != -1:
-                     module.eval()
-             self.cnn.apply(bn_eval)
-         return self
- 
-     def forward(self, input_dict):
-         output_dict = self.cnn(input_dict)
-         output_dict["attn"] = output_dict["attn_emb"]
-         output_dict["attn_len"] = output_dict["attn_emb_len"]
-         del output_dict["attn_emb"], output_dict["attn_emb_len"]
-         output_dict = self.trm(output_dict)
-         return output_dict
- 
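The length bookkeeping in `Cnn14Encoder.forward` above (frame count from the 10 ms hop, then the CNN's time downsampling) can be sketched without torch. This is a minimal sketch: `hop_length` and `downsample_ratio` mirror the attributes set in `__init__`, and the sample values are illustrative only:

```python
def cnn14_feat_length(wav_len_samples, sample_rate=32000, downsample_ratio=32):
    # hop_length is 10 ms of samples, as in Cnn14Encoder.__init__
    hop_length = 10 * sample_rate // 1000
    # STFT frame count, then the CNN's time downsampling, both with floor
    # division (matching torch.div(..., rounding_mode="floor"))
    n_frames = wav_len_samples // hop_length + 1
    return n_frames // downsample_ratio

# 10 s of audio at 32 kHz -> 320000 samples, hop 320 -> 1001 frames -> 31 steps
print(cnn14_feat_length(320000))  # 31
```

This is why `attn_emb_len` can be much smaller than the number of spectrogram frames: the six `ConvBlock`s with (2, 2) pooling divide the time axis by 32.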
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py DELETED
@@ -1,167 +0,0 @@
- import math
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- 
- from text_to_speech.modules.commons.layers import LayerNorm, Embedding
- 
- 
- class LambdaLayer(nn.Module):
-     def __init__(self, lambd):
-         super(LambdaLayer, self).__init__()
-         self.lambd = lambd
- 
-     def forward(self, x):
-         return self.lambd(x)
- 
- 
- def init_weights_func(m):
-     classname = m.__class__.__name__
-     if classname.find("Conv1d") != -1:
-         torch.nn.init.xavier_uniform_(m.weight)
- 
- 
- class ResidualBlock(nn.Module):
-     """Applies (norm -> conv -> GELU -> conv) with a residual connection, n times."""
- 
-     def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0,
-                  c_multiple=2, ln_eps=1e-12):
-         super(ResidualBlock, self).__init__()
- 
-         if norm_type == 'bn':
-             norm_builder = lambda: nn.BatchNorm1d(channels)
-         elif norm_type == 'in':
-             norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True)
-         elif norm_type == 'gn':
-             norm_builder = lambda: nn.GroupNorm(8, channels)
-         elif norm_type == 'ln':
-             norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps)
-         else:
-             norm_builder = lambda: nn.Identity()
- 
-         self.blocks = [
-             nn.Sequential(
-                 norm_builder(),
-                 nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation,
-                           padding=(dilation * (kernel_size - 1)) // 2),
-                 LambdaLayer(lambda x: x * kernel_size ** -0.5),
-                 nn.GELU(),
-                 nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation),
-             )
-             for _ in range(n)
-         ]
- 
-         self.blocks = nn.ModuleList(self.blocks)
-         self.dropout = dropout
- 
-     def forward(self, x):
-         nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
-         for b in self.blocks:
-             x_ = b(x)
-             if self.dropout > 0 and self.training:
-                 x_ = F.dropout(x_, self.dropout, training=self.training)
-             x = x + x_
-             x = x * nonpadding
-         return x
- 
- 
- class ConvBlocks(nn.Module):
-     """Decodes the expanded phoneme encoding into spectrograms."""
- 
-     def __init__(self, hidden_size, out_dims, dilations, kernel_size,
-                  norm_type='ln', layers_in_block=2, c_multiple=2,
-                  dropout=0.0, ln_eps=1e-5,
-                  init_weights=True, is_BTC=True, num_layers=None, post_net_kernel=3):
-         super(ConvBlocks, self).__init__()
-         self.is_BTC = is_BTC
-         if num_layers is not None:
-             dilations = [1] * num_layers
-         self.res_blocks = nn.Sequential(
-             *[ResidualBlock(hidden_size, kernel_size, d,
-                             n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple,
-                             dropout=dropout, ln_eps=ln_eps)
-               for d in dilations],
-         )
-         if norm_type == 'bn':
-             norm = nn.BatchNorm1d(hidden_size)
-         elif norm_type == 'in':
-             norm = nn.InstanceNorm1d(hidden_size, affine=True)
-         elif norm_type == 'gn':
-             norm = nn.GroupNorm(8, hidden_size)
-         elif norm_type == 'ln':
-             norm = LayerNorm(hidden_size, dim=1, eps=ln_eps)
-         else:
-             raise ValueError(f"Unsupported norm_type: {norm_type}")
-         self.last_norm = norm
-         self.post_net1 = nn.Conv1d(hidden_size, out_dims, kernel_size=post_net_kernel,
-                                    padding=post_net_kernel // 2)
-         if init_weights:
-             self.apply(init_weights_func)
- 
-     def forward(self, x, nonpadding=None):
-         """
-         :param x: [B, T, H]
-         :return: [B, T, H]
-         """
-         if self.is_BTC:
-             x = x.transpose(1, 2)
-             if nonpadding is None:
-                 nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
-             elif self.is_BTC:
-                 nonpadding = nonpadding.transpose(1, 2)
-         x = self.res_blocks(x) * nonpadding
-         x = self.last_norm(x) * nonpadding
-         x = self.post_net1(x) * nonpadding
-         if self.is_BTC:
-             x = x.transpose(1, 2)
-         return x
- 
- 
- class TextConvEncoder(ConvBlocks):
-     def __init__(self, dict_size, hidden_size, out_dims, dilations, kernel_size,
-                  norm_type='ln', layers_in_block=2, c_multiple=2,
-                  dropout=0.0, ln_eps=1e-5, init_weights=True, num_layers=None, post_net_kernel=3):
-         super().__init__(hidden_size, out_dims, dilations, kernel_size,
-                          norm_type, layers_in_block, c_multiple,
-                          dropout, ln_eps, init_weights, num_layers=num_layers,
-                          post_net_kernel=post_net_kernel)
-         self.embed_tokens = Embedding(dict_size, hidden_size, 0)
-         self.embed_scale = math.sqrt(hidden_size)
- 
-     def forward(self, txt_tokens):
-         """
-         :param txt_tokens: [B, T]
-         :return: {
-             'encoder_out': [B x T x C]
-         }
-         """
-         x = self.embed_scale * self.embed_tokens(txt_tokens)
-         return super().forward(x)
- 
- 
- class ConditionalConvBlocks(ConvBlocks):
-     def __init__(self, hidden_size, c_cond, c_out, dilations, kernel_size,
-                  norm_type='ln', layers_in_block=2, c_multiple=2,
-                  dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True, num_layers=None):
-         super().__init__(hidden_size, c_out, dilations, kernel_size,
-                          norm_type, layers_in_block, c_multiple,
-                          dropout, ln_eps, init_weights, is_BTC=False, num_layers=num_layers)
-         self.g_prenet = nn.Conv1d(c_cond, hidden_size, 3, padding=1)
-         self.is_BTC_ = is_BTC
-         if init_weights:
-             self.g_prenet.apply(init_weights_func)
- 
-     def forward(self, x, cond, nonpadding=None):
-         if self.is_BTC_:
-             x = x.transpose(1, 2)
-             cond = cond.transpose(1, 2)
-             if nonpadding is not None:
-                 nonpadding = nonpadding.transpose(1, 2)
-         if nonpadding is None:
-             nonpadding = x.abs().sum(1)[:, None]
-         x = x + self.g_prenet(cond)
-         x = x * nonpadding
-         x = super(ConditionalConvBlocks, self).forward(x)  # ConvBlocks was built with is_BTC=False, so x is [B, C, T] here
-         if self.is_BTC_:
-             x = x.transpose(1, 2)
-         return x
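The `ResidualBlock` above keeps the time dimension unchanged by using `padding = dilation * (kernel_size - 1) // 2` for its dilated convolutions. A minimal sketch of why that is "same" padding, using the standard Conv1d output-length formula (values are illustrative):

```python
def conv1d_out_len(t, kernel_size, dilation, padding, stride=1):
    # standard 1-D convolution output-length formula
    return (t + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

def same_padding(kernel_size, dilation):
    # the padding expression used in ResidualBlock above
    return (dilation * (kernel_size - 1)) // 2

# for any odd kernel, the output length equals the input length
for k, d in [(3, 1), (3, 2), (5, 4)]:
    assert conv1d_out_len(100, k, d, same_padding(k, d)) == 100
```

This only holds exactly for odd kernel sizes; an even kernel would shrink the sequence by one, which is why such blocks conventionally use k = 3 or 5.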
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py DELETED
@@ -1,63 +0,0 @@
- import numpy as np
- import torch.nn.functional as F
- from torch import nn
- from .model import MLPLayers
- 
- 
- class LinearProbe(nn.Module):
-     def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None):
-         """
-         Args:
-             model: nn.Module
-             mlp: bool, if True, use an MLP as the linear-probe head
-             freeze: bool, if True, freeze all the CLAP model's layers when training the linear probe
-             in_ch: int, the output channels from the CLAP model
-             out_ch: int, the output channels from the linear probe (class_num)
-             act: str, the activation function applied before the loss function
-         """
-         super().__init__()
-         in_ch = 512  # CLAP audio embeddings are 512-d, overriding the argument
-         self.clap_model = model
-         self.clap_model.text_branch = None  # to save memory
-         self.freeze = freeze
-         if mlp:
-             self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch])
-         else:
-             self.lp_layer = nn.Linear(in_ch, out_ch)
- 
-         if self.freeze:
-             for param in self.clap_model.parameters():
-                 param.requires_grad = False
- 
-         if act is None or act == 'None':
-             self.act = None
-         elif act == 'relu':
-             self.act = nn.ReLU()
-         elif act == 'elu':
-             self.act = nn.ELU()
-         elif act == 'prelu':
-             self.act = nn.PReLU(num_parameters=in_ch)
-         elif act == 'softmax':
-             self.act = nn.Softmax(dim=-1)
-         elif act == 'sigmoid':
-             self.act = nn.Sigmoid()
- 
-     def forward(self, x, mix_lambda=None, device=None):
-         """
-         Args:
-             x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list
-             mix_lambda: torch.tensor [batch], the mixup lambda
-         Returns:
-             class_prob: torch.tensor [batch, class_num]
-         """
-         # keep batchnorm statistics frozen and cancel gradient flow through CLAP
-         if self.freeze:
-             self.clap_model.eval()
- 
-         x = self.clap_model.audio_projection(
-             self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)["embedding"])
-         out = self.lp_layer(x)
-         if self.act is not None:
-             out = self.act(out)
-         return out
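The `act` argument above chooses the probe head's output nonlinearity. The practical difference between the `softmax` and `sigmoid` choices (single-label vs multi-label probing) can be illustrated with plain Python; the logit values below are made up for the example:

```python
import math

def softmax(xs):
    # numerically stable softmax: scores compete, outputs sum to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    # independent per-class probability: classes do not compete
    return 1.0 / (1.0 + math.exp(-x))

logits = [2.0, 1.0, 0.1]
probs = softmax(logits)       # mutually exclusive classes
per_class = [sigmoid(x) for x in logits]  # multi-label tagging
```

With a softmax head the probe commits to one class per clip; with a sigmoid head each class is scored independently, which suits audio-tagging datasets where several events co-occur.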
spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py DELETED
@@ -1,189 +0,0 @@
- """Utils for monoDepth."""
- import sys
- import re
- import numpy as np
- import cv2
- import torch
- 
- 
- def read_pfm(path):
-     """Read pfm file.
- 
-     Args:
-         path (str): path to file
- 
-     Returns:
-         tuple: (data, scale)
-     """
-     with open(path, "rb") as file:
- 
-         color = None
-         width = None
-         height = None
-         scale = None
-         endian = None
- 
-         header = file.readline().rstrip()
-         if header.decode("ascii") == "PF":
-             color = True
-         elif header.decode("ascii") == "Pf":
-             color = False
-         else:
-             raise Exception("Not a PFM file: " + path)
- 
-         dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii"))
-         if dim_match:
-             width, height = list(map(int, dim_match.groups()))
-         else:
-             raise Exception("Malformed PFM header.")
- 
-         scale = float(file.readline().decode("ascii").rstrip())
-         if scale < 0:
-             # little-endian
-             endian = "<"
-             scale = -scale
-         else:
-             # big-endian
-             endian = ">"
- 
-         data = np.fromfile(file, endian + "f")
-         shape = (height, width, 3) if color else (height, width)
- 
-         data = np.reshape(data, shape)
-         data = np.flipud(data)
- 
-         return data, scale
- 
- 
- def write_pfm(path, image, scale=1):
-     """Write pfm file.
- 
-     Args:
-         path (str): path to file
-         image (array): data
-         scale (int, optional): Scale. Defaults to 1.
-     """
- 
-     with open(path, "wb") as file:
-         color = None
- 
-         if image.dtype.name != "float32":
-             raise Exception("Image dtype must be float32.")
- 
-         image = np.flipud(image)
- 
-         if len(image.shape) == 3 and image.shape[2] == 3:  # color image
-             color = True
-         elif (
-             len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1
-         ):  # greyscale
-             color = False
-         else:
-             raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
- 
-         file.write(("PF\n" if color else "Pf\n").encode())
-         file.write("%d %d\n".encode() % (image.shape[1], image.shape[0]))
- 
-         endian = image.dtype.byteorder
- 
-         if endian == "<" or (endian == "=" and sys.byteorder == "little"):
-             scale = -scale
- 
-         file.write("%f\n".encode() % scale)
- 
-         image.tofile(file)
- 
- 
- def read_image(path):
-     """Read image and output RGB image (0-1).
- 
-     Args:
-         path (str): path to file
- 
-     Returns:
-         array: RGB image (0-1)
-     """
-     img = cv2.imread(path)
- 
-     if img.ndim == 2:
-         img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- 
-     img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
- 
-     return img
- 
- 
- def resize_image(img):
-     """Resize image and make it fit for network.
- 
-     Args:
-         img (array): image
- 
-     Returns:
-         tensor: data ready for network
-     """
-     height_orig = img.shape[0]
-     width_orig = img.shape[1]
- 
-     if width_orig > height_orig:
-         scale = width_orig / 384
-     else:
-         scale = height_orig / 384
- 
-     height = (np.ceil(height_orig / scale / 32) * 32).astype(int)
-     width = (np.ceil(width_orig / scale / 32) * 32).astype(int)
- 
-     img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
- 
-     img_resized = (
-         torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()
-     )
-     img_resized = img_resized.unsqueeze(0)
- 
-     return img_resized
- 
- 
- def resize_depth(depth, width, height):
-     """Resize depth map and bring to CPU (numpy).
- 
-     Args:
-         depth (tensor): depth
-         width (int): image width
-         height (int): image height
- 
-     Returns:
-         array: processed depth
-     """
-     depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
- 
-     depth_resized = cv2.resize(
-         depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
-     )
- 
-     return depth_resized
- 
- 
- def write_depth(path, depth, bits=1):
-     """Write depth map to pfm and png file.
- 
-     Args:
-         path (str): filepath without extension
-         depth (array): depth
-     """
-     write_pfm(path + ".pfm", depth.astype(np.float32))
- 
-     depth_min = depth.min()
-     depth_max = depth.max()
- 
-     max_val = (2**(8*bits)) - 1
- 
-     if depth_max - depth_min > np.finfo("float").eps:
-         out = max_val * (depth - depth_min) / (depth_max - depth_min)
-     else:
-         out = np.zeros(depth.shape, dtype=depth.dtype)
- 
-     if bits == 1:
-         cv2.imwrite(path + ".png", out.astype("uint8"))
-     elif bits == 2:
-         cv2.imwrite(path + ".png", out.astype("uint16"))
- 
-     return
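The normalization step in `write_depth` above maps the depth range onto the full integer range of the target bit depth. A minimal pure-Python sketch of that mapping (list input instead of a numpy array, sample values illustrative):

```python
def quantize_depth(depth, bits=1):
    # map [depth_min, depth_max] onto [0, 2^(8*bits) - 1], as in write_depth
    depth_min = min(depth)
    depth_max = max(depth)
    max_val = (2 ** (8 * bits)) - 1
    if depth_max - depth_min > 1e-12:
        return [max_val * (d - depth_min) / (depth_max - depth_min) for d in depth]
    # constant depth map: avoid division by zero, emit all zeros
    return [0.0 for _ in depth]

out = quantize_depth([0.0, 0.5, 1.0], bits=2)  # [0.0, 32767.5, 65535.0]
```

With `bits=1` the result fits `uint8` (0..255) and with `bits=2` it fits `uint16` (0..65535), matching the two `cv2.imwrite` branches.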
spaces/AIatUIUC/CodeLATS/generators/parse.py DELETED
@@ -1,49 +0,0 @@
- import re
- from typing import Optional
- 
- 
- def parse_code_block(string: str, lang: str) -> Optional[str]:
-     code_pattern = fr"```{lang}\n(.*?)\n```"
-     match = re.search(code_pattern, string, re.DOTALL)
- 
-     if match:
-         return match.group(1)
- 
-     generic_code_pattern = r"```\n(.*?)\n```"
-     match = re.search(generic_code_pattern, string, re.DOTALL)
- 
-     if match:
-         return match.group(1)
- 
-     return parse_first_func(string, lang)
- 
- 
- def parse_first_func(code: str, lang: str) -> Optional[str]:
-     assert lang == "python", "Only python is supported for now. TODO: Rust"
-     code_lines = code.split("\n")
-     def_i = -1
-     last_i = 0
-     got_return = False
-     for i, line in enumerate(code_lines):
-         if line.startswith("def "):
-             if def_i == -1:
-                 def_i = i
-             else:
-                 break
-         elif "return" in line and def_i != -1:
-             got_return = True
-         if line == "" and def_i != -1 and got_return:
-             last_i = i
-             break
- 
-     if last_i == 0:
-         last_i = len(code_lines) - 1
- 
-     if def_i == -1:
-         return None
- 
-     func = "\n".join(code_lines[def_i:last_i + 1])
-     # drop a trailing "[/PYTHON]" tag if present (str.rstrip would strip characters, not the substring)
-     if func.endswith("[/PYTHON]"):
-         func = func[:-len("[/PYTHON]")]
-     return func
- 
- 
- def add_code_block(string: str, lang: str) -> str:
-     return f"```{lang}\n{string}\n```"
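The fenced-block extraction in `parse_code_block` above can be sketched in isolation; this trimmed version keeps only the two regex passes and omits the `parse_first_func` fallback, and the sample model reply is made up for the example:

```python
import re
from typing import Optional

def extract_fenced_code(string: str, lang: str) -> Optional[str]:
    # first try a fence tagged with the requested language, then a bare fence;
    # re.DOTALL lets the non-greedy group span multiple lines
    for pattern in (fr"```{lang}\n(.*?)\n```", r"```\n(.*?)\n```"):
        match = re.search(pattern, string, re.DOTALL)
        if match:
            return match.group(1)
    return None

reply = "Here you go:\n```python\ndef add(a, b):\n    return a + b\n```\nDone."
extract_fenced_code(reply, "python")  # "def add(a, b):\n    return a + b"
```

The non-greedy `(.*?)` matters: with a greedy `(.*)`, a reply containing two fenced blocks would be captured as one span including everything between them.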
spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py DELETED
@@ -1,293 +0,0 @@
- """SAMPLING ONLY."""
- 
- import torch
- import numpy as np
- from tqdm import tqdm
- 
- from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
-     extract_into_tensor
- 
- 
- class DDIMSampler(object):
-     def __init__(self, model, schedule="linear", **kwargs):
-         super().__init__()
-         self.model = model
-         self.ddpm_num_timesteps = model.num_timesteps
-         self.schedule = schedule
- 
-     def register_buffer(self, name, attr):
-         if type(attr) == torch.Tensor:
-             if attr.device != torch.device("cuda"):
-                 attr = attr.to(torch.device("cuda"))
-         setattr(self, name, attr)
- 
-     def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
-         self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
-                                                   num_ddpm_timesteps=self.ddpm_num_timesteps, verbose=verbose)
-         alphas_cumprod = self.model.alphas_cumprod
-         assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
-         to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
- 
-         self.register_buffer('betas', to_torch(self.model.betas))
-         self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
-         self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
- 
-         # calculations for diffusion q(x_t | x_{t-1}) and others
-         self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
-         self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
-         self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
-         self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
-         self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
- 
-         # ddim sampling parameters
-         ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
-                                                                                    ddim_timesteps=self.ddim_timesteps,
-                                                                                    eta=ddim_eta, verbose=verbose)
-         self.register_buffer('ddim_sigmas', ddim_sigmas)
-         self.register_buffer('ddim_alphas', ddim_alphas)
-         self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
-         self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
-         sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
-             (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
-                 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
-         self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
- 
-     @torch.no_grad()
-     def sample(self,
-                S,
-                batch_size,
-                shape,
-                conditioning=None,
-                callback=None,
-                normals_sequence=None,
-                img_callback=None,
-                quantize_x0=False,
-                eta=0.,
-                mask=None,
-                x0=None,
-                temperature=1.,
-                noise_dropout=0.,
-                score_corrector=None,
-                corrector_kwargs=None,
-                verbose=True,
-                x_T=None,
-                log_every_t=100,
-                unconditional_guidance_scale=1.,
-                unconditional_conditioning=None,
-                # this has to come in the same format as the conditioning, e.g. as encoded tokens, ...
-                features_adapter=None,
-                append_to_context=None,
-                cond_tau=0.4,
-                style_cond_tau=1.0,
-                **kwargs
-                ):
-         if conditioning is not None:
-             if isinstance(conditioning, dict):
-                 cbs = conditioning[list(conditioning.keys())[0]].shape[0]
-                 if cbs != batch_size:
-                     print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
-             else:
-                 if conditioning.shape[0] != batch_size:
-                     print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
- 
-         self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
-         # sampling
-         C, H, W = shape
-         size = (batch_size, C, H, W)
-         print(f'Data shape for DDIM sampling is {size}, eta {eta}')
- 
-         samples, intermediates = self.ddim_sampling(conditioning, size,
-                                                     callback=callback,
-                                                     img_callback=img_callback,
-                                                     quantize_denoised=quantize_x0,
-                                                     mask=mask, x0=x0,
-                                                     ddim_use_original_steps=False,
-                                                     noise_dropout=noise_dropout,
-                                                     temperature=temperature,
-                                                     score_corrector=score_corrector,
-                                                     corrector_kwargs=corrector_kwargs,
-                                                     x_T=x_T,
-                                                     log_every_t=log_every_t,
-                                                     unconditional_guidance_scale=unconditional_guidance_scale,
-                                                     unconditional_conditioning=unconditional_conditioning,
-                                                     features_adapter=features_adapter,
-                                                     append_to_context=append_to_context,
-                                                     cond_tau=cond_tau,
-                                                     style_cond_tau=style_cond_tau,
-                                                     )
-         return samples, intermediates
- 
-     @torch.no_grad()
-     def ddim_sampling(self, cond, shape,
-                       x_T=None, ddim_use_original_steps=False,
-                       callback=None, timesteps=None, quantize_denoised=False,
-                       mask=None, x0=None, img_callback=None, log_every_t=100,
-                       temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
-                       unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None,
-                       append_to_context=None, cond_tau=0.4, style_cond_tau=1.0):
-         device = self.model.betas.device
-         b = shape[0]
-         if x_T is None:
-             img = torch.randn(shape, device=device)
-         else:
-             img = x_T
135
- if timesteps is None:
136
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
137
- elif timesteps is not None and not ddim_use_original_steps:
138
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
139
- timesteps = self.ddim_timesteps[:subset_end]
140
-
141
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
142
- time_range = reversed(range(0, timesteps)) if ddim_use_original_steps else np.flip(timesteps)
143
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
144
- print(f"Running DDIM Sampling with {total_steps} timesteps")
145
-
146
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
147
-
148
- for i, step in enumerate(iterator):
149
- index = total_steps - i - 1
150
- ts = torch.full((b,), step, device=device, dtype=torch.long)
151
-
152
- if mask is not None:
153
- assert x0 is not None
154
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
155
- img = img_orig * mask + (1. - mask) * img
156
-
157
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
158
- quantize_denoised=quantize_denoised, temperature=temperature,
159
- noise_dropout=noise_dropout, score_corrector=score_corrector,
160
- corrector_kwargs=corrector_kwargs,
161
- unconditional_guidance_scale=unconditional_guidance_scale,
162
- unconditional_conditioning=unconditional_conditioning,
163
- features_adapter=None if index < int(
164
- (1 - cond_tau) * total_steps) else features_adapter,
165
- append_to_context=None if index < int(
166
- (1 - style_cond_tau) * total_steps) else append_to_context,
167
- )
168
- img, pred_x0 = outs
169
- if callback: callback(i)
170
- if img_callback: img_callback(pred_x0, i)
171
-
172
- if index % log_every_t == 0 or index == total_steps - 1:
173
- intermediates['x_inter'].append(img)
174
- intermediates['pred_x0'].append(pred_x0)
175
-
176
- return img, intermediates
177
-
178
- @torch.no_grad()
179
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
180
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
181
- unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None,
182
- append_to_context=None):
183
- b, *_, device = *x.shape, x.device
184
-
185
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
186
- if append_to_context is not None:
187
- model_output = self.model.apply_model(x, t, torch.cat([c, append_to_context], dim=1),
188
- features_adapter=features_adapter)
189
- else:
190
- model_output = self.model.apply_model(x, t, c, features_adapter=features_adapter)
191
- else:
192
- x_in = torch.cat([x] * 2)
193
- t_in = torch.cat([t] * 2)
194
- if isinstance(c, dict):
195
- assert isinstance(unconditional_conditioning, dict)
196
- c_in = dict()
197
- for k in c:
198
- if isinstance(c[k], list):
199
- c_in[k] = [torch.cat([
200
- unconditional_conditioning[k][i],
201
- c[k][i]]) for i in range(len(c[k]))]
202
- else:
203
- c_in[k] = torch.cat([
204
- unconditional_conditioning[k],
205
- c[k]])
206
- elif isinstance(c, list):
207
- c_in = list()
208
- assert isinstance(unconditional_conditioning, list)
209
- for i in range(len(c)):
210
- c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))
211
- else:
212
- if append_to_context is not None:
213
- pad_len = append_to_context.size(1)
214
- new_unconditional_conditioning = torch.cat(
215
- [unconditional_conditioning, unconditional_conditioning[:, -pad_len:, :]], dim=1)
216
- new_c = torch.cat([c, append_to_context], dim=1)
217
- c_in = torch.cat([new_unconditional_conditioning, new_c])
218
- else:
219
- c_in = torch.cat([unconditional_conditioning, c])
220
- model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in, features_adapter=features_adapter).chunk(2)
221
- model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond)
222
-
223
- if self.model.parameterization == "v":
224
- e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)
225
- else:
226
- e_t = model_output
227
-
228
- if score_corrector is not None:
229
- assert self.model.parameterization == "eps", 'not implemented'
230
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
231
-
232
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
233
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
234
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
235
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
236
- # select parameters corresponding to the currently considered timestep
237
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
238
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
239
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
240
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device)
241
-
242
- # current prediction for x_0
243
- if self.model.parameterization != "v":
244
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
245
- else:
246
- pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)
247
-
248
- if quantize_denoised:
249
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
250
- # direction pointing to x_t
251
- dir_xt = (1. - a_prev - sigma_t ** 2).sqrt() * e_t
252
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
253
- if noise_dropout > 0.:
254
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
255
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
256
- return x_prev, pred_x0
257
-
258
- @torch.no_grad()
259
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
260
- # fast, but does not allow for exact reconstruction
261
- # t serves as an index to gather the correct alphas
262
- if use_original_steps:
263
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
264
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
265
- else:
266
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
267
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
268
-
269
- if noise is None:
270
- noise = torch.randn_like(x0)
271
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
272
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
273
-
274
- @torch.no_grad()
275
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
276
- use_original_steps=False):
277
-
278
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
279
- timesteps = timesteps[:t_start]
280
-
281
- time_range = np.flip(timesteps)
282
- total_steps = timesteps.shape[0]
283
- print(f"Running DDIM Sampling with {total_steps} timesteps")
284
-
285
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
286
- x_dec = x_latent
287
- for i, step in enumerate(iterator):
288
- index = total_steps - i - 1
289
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
290
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
291
- unconditional_guidance_scale=unconditional_guidance_scale,
292
- unconditional_conditioning=unconditional_conditioning)
293
- return x_dec
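For reference, the core arithmetic of `p_sample_ddim` above — predict x0 from the noise estimate, form the direction term, and combine — can be sketched in plain NumPy. The shapes and schedule values below (`a_t`, `a_prev`) are illustrative assumptions, not values taken from this repository:

```python
import numpy as np

def ddim_step(x, e_t, a_t, a_prev, sigma_t, rng):
    """One DDIM update x_t -> x_{t-1} given a noise estimate e_t (eps-parameterization)."""
    # Current prediction for x_0.
    pred_x0 = (x - np.sqrt(1.0 - a_t) * e_t) / np.sqrt(a_t)
    # Direction pointing back toward x_t.
    dir_xt = np.sqrt(1.0 - a_prev - sigma_t ** 2) * e_t
    # Stochastic term; vanishes when sigma_t == 0 (eta = 0, deterministic DDIM).
    noise = sigma_t * rng.standard_normal(x.shape)
    return np.sqrt(a_prev) * pred_x0 + dir_xt + noise, pred_x0

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3, 8, 8))
e_t = rng.standard_normal((1, 3, 8, 8))
x_prev, pred_x0 = ddim_step(x, e_t, a_t=0.5, a_prev=0.6, sigma_t=0.0, rng=rng)
```

With `sigma_t = 0` the step is fully deterministic, which matches the default `eta=0.` setting of `sample`.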
spaces/AdithyaSNair/Dog_breed_predictor/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Dog Breed Predictor
- emoji: 🏆
- colorFrom: indigo
- colorTo: green
- sdk: streamlit
- sdk_version: 1.26.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js DELETED
@@ -1,20 +0,0 @@
- import TouchCursor from './touchcursor.js';
-
- class TouchCursorPlugin extends Phaser.Plugins.BasePlugin {
-
-     constructor(pluginManager) {
-         super(pluginManager);
-     }
-
-     start() {
-         var eventEmitter = this.game.events;
-         eventEmitter.on('destroy', this.destroy, this);
-     }
-
-     add(gameObject, config) {
-         return new TouchCursor(gameObject, config);
-     }
-
- }
-
- export default TouchCursorPlugin;
spaces/Alesteba/NeRF_ficus-pxl/config.py DELETED
@@ -1,16 +0,0 @@
- import streamlit as st
- import tensorflow as tf
- import numpy as np
-
- # Setting random seed to obtain reproducible results.
- tf.random.set_seed(42)
-
- # Initialize global variables.
- AUTO = tf.data.AUTOTUNE
- BATCH_SIZE = 1
- NUM_SAMPLES = 32
- POS_ENCODE_DIMS = 16
- EPOCHS = 30
- H = 50
- W = 50
- focal = 0.6911112070083618
spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h DELETED
@@ -1,172 +0,0 @@
-
- // jpge.h - C++ class for JPEG compression.
- // Public domain, Rich Geldreich <[email protected]>
- // Alex Evans: Added RGBA support, linear memory allocator.
- #ifndef JPEG_ENCODER_H
- #define JPEG_ENCODER_H
-
- #include <stdint.h>
-
- namespace jpge
- {
-     typedef unsigned char uint8;
-     typedef signed short int16;
-     typedef signed int int32;
-     typedef unsigned short uint16;
-     typedef unsigned int uint32;
-     typedef unsigned int uint;
-
-     // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
-     enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
-
-     // JPEG compression parameters structure.
-     struct params
-     {
-         inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
-
-         inline bool check_valid() const
-         {
-             if ((m_quality < 1) || (m_quality > 100)) return false;
-             if ((uint)m_subsampling > (uint)H2V2) return false;
-             return true;
-         }
-
-         // Quality: 1-100, higher is better. Typical values are around 50-95.
-         int m_quality;
-
-         // m_subsampling:
-         // 0 = Y (grayscale) only
-         // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
-         // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
-         // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU -- very common)
-         subsampling_t m_subsampling;
-
-         // Disables CbCr discrimination - only intended for testing.
-         // If true, the Y quantization table is also used for the CbCr channels.
-         bool m_no_chroma_discrim_flag;
-
-         bool m_two_pass_flag;
-     };
-
-     // Writes JPEG image to a file.
-     // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
-     bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
-     // Writes JPEG image to memory buffer.
-     // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
-     // If return value is true, buf_size will be set to the size of the compressed data.
-     bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
-     // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
-     // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
-     class output_stream
-     {
-     public:
-         virtual ~output_stream() { };
-         virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
-         template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
-     };
-
-     // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
-     class jpeg_encoder
-     {
-     public:
-         jpeg_encoder();
-         ~jpeg_encoder();
-
-         // Initializes the compressor.
-         // pStream: The stream object to use for writing compressed data.
-         // params - Compression parameters structure, defined above.
-         // width, height - Image dimensions.
-         // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
-         // Returns false on out of memory or if a stream write fails.
-         bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
-
-         const params &get_params() const { return m_params; }
-
-         // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
-         void deinit();
-
-         uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
-         inline uint get_cur_pass() { return m_pass_num; }
-
-         // Call this method with each source scanline.
-         // width * src_channels bytes per scanline is expected (RGB or Y format).
-         // You must call with NULL after all scanlines are processed to finish compression.
-         // Returns false on out of memory or if a stream write fails.
-         bool process_scanline(const void* pScanline);
-
-     private:
-         jpeg_encoder(const jpeg_encoder &);
-         jpeg_encoder &operator =(const jpeg_encoder &);
-
-         typedef int32 sample_array_t;
-
-         output_stream *m_pStream;
-         params m_params;
-         uint8 m_num_components;
-         uint8 m_comp_h_samp[3], m_comp_v_samp[3];
-         int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
-         int m_image_x_mcu, m_image_y_mcu;
-         int m_image_bpl_xlt, m_image_bpl_mcu;
-         int m_mcus_per_row;
-         int m_mcu_x, m_mcu_y;
-         uint8 *m_mcu_lines[16];
-         uint8 m_mcu_y_ofs;
-         sample_array_t m_sample_array[64];
-         int16 m_coefficient_array[64];
-         int32 m_quantization_tables[2][64];
-         uint m_huff_codes[4][256];
-         uint8 m_huff_code_sizes[4][256];
-         uint8 m_huff_bits[4][17];
-         uint8 m_huff_val[4][256];
-         uint32 m_huff_count[4][256];
-         int m_last_dc_val[3];
-         enum { JPGE_OUT_BUF_SIZE = 2048 };
-         uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
-         uint8 *m_pOut_buf;
-         uint m_out_buf_left;
-         uint32 m_bit_buffer;
-         uint m_bits_in;
-         uint8 m_pass_num;
-         bool m_all_stream_writes_succeeded;
-
-         void optimize_huffman_table(int table_num, int table_len);
-         void emit_byte(uint8 i);
-         void emit_word(uint i);
-         void emit_marker(int marker);
-         void emit_jfif_app0();
-         void emit_dqt();
-         void emit_sof();
-         void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
-         void emit_dhts();
-         void emit_sos();
-         void emit_markers();
-         void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
-         void compute_quant_table(int32 *dst, int16 *src);
-         void adjust_quant_table(int32 *dst, int32 *src);
-         void first_pass_init();
-         bool second_pass_init();
-         bool jpg_open(int p_x_res, int p_y_res, int src_channels);
-         void load_block_8_8_grey(int x);
-         void load_block_8_8(int x, int y, int c);
-         void load_block_16_8(int x, int c);
-         void load_block_16_8_8(int x, int c);
-         void load_quantized_coefficients(int component_num);
-         void flush_output_buffer();
-         void put_bits(uint bits, uint len);
-         void code_coefficients_pass_one(int component_num);
-         void code_coefficients_pass_two(int component_num);
-         void code_block(int component_num);
-         void process_mcu_row();
-         bool terminate_pass_one();
-         bool terminate_pass_two();
-         bool process_end_of_image();
-         void load_mcu(const void* src);
-         void clear();
-         void init();
-     };
-
- } // namespace jpge
-
- #endif // JPEG_ENCODER
spaces/Amon1/ChatGPTForAcadamic/crazy_functions/总结word文档.py DELETED
@@ -1,127 +0,0 @@
- from predict import predict_no_ui
- from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
-
- fast_debug = False
-
-
- def 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
-     import time, os
-     # pip install python-docx  -- for .docx files, cross-platform
-     # pip install pywin32      -- for .doc files, Windows only
-
-     print('begin analysis on:', file_manifest)
-     for index, fp in enumerate(file_manifest):
-         if fp.split(".")[-1] == "docx":
-             from docx import Document
-             doc = Document(fp)
-             file_content = "\n".join([para.text for para in doc.paragraphs])
-         else:
-             import win32com.client
-             word = win32com.client.Dispatch("Word.Application")
-             word.visible = False
-             # open the file
-             print('fp', os.getcwd())
-             doc = word.Documents.Open(os.getcwd() + '/' + fp)
-             # file_content = doc.Content.Text
-             doc = word.ActiveDocument
-             file_content = doc.Range().Text
-             doc.Close()
-             word.Quit()
-
-         print(file_content)
-
-         prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else ""
-         # file names under private_upload are often mangled after unzipping (rar/7z are fine), so only the file contents are analyzed, not the names
-         i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \
-                          f'文章内容是 ```{file_content}```'
-         i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}'
-         chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-         yield chatbot, history, '正常'
-
-         if not fast_debug:
-             msg = '正常'
-             # ** gpt request **
-             gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature,
-                                                                 history=[])  # with countdown timeout
-             chatbot[-1] = (i_say_show_user, gpt_say)
-             history.append(i_say_show_user)
-             history.append(gpt_say)
-             yield chatbot, history, msg
-             if not fast_debug: time.sleep(2)
-
-     """
-     # enable as needed
-     i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。'
-     chatbot.append((i_say, "[Local Message] waiting gpt response."))
-     yield chatbot, history, '正常'
-
-     i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \
-             f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \
-             f'根据你之前的分析,提出建议'
-     chatbot.append((i_say, "[Local Message] waiting gpt response."))
-     yield chatbot, history, '正常'
-     """
-
-     if not fast_debug:
-         msg = '正常'
-         # ** gpt request **
-         gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature,
-                                                             history=history)  # with countdown timeout
-
-         chatbot[-1] = (i_say, gpt_say)
-         history.append(i_say)
-         history.append(gpt_say)
-         yield chatbot, history, msg
-     res = write_results_to_file(history)
-     chatbot.append(("完成了吗?", res))
-     yield chatbot, history, msg
-
-
- @CatchException
- def 总结word文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
-     import glob, os
-
-     # basic info: feature description and contributors
-     chatbot.append([
-         "函数插件功能?",
-         "批量总结Word文档。函数插件贡献者: JasonGuo1"])
-     yield chatbot, history, '正常'
-
-     # try to import dependencies; if any are missing, suggest how to install them
-     try:
-         from docx import Document
-     except:
-         report_execption(chatbot, history,
-                          a=f"解析项目: {txt}",
-                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
-         yield chatbot, history, '正常'
-         return
-
-     # clear history to avoid input overflow
-     history = []
-
-     # validate the input; exit early if no usable path was given
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield chatbot, history, '正常'
-         return
-
-     # build the list of files to process
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
-                     [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
-     # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
-     # bail out if no matching files were found
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}")
-         yield chatbot, history, '正常'
-         return
-
-     # run the task
-     yield from 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
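The plugin's file discovery boils down to two recursive glob patterns. That step is easy to verify in isolation; this sketch uses throwaway file names in a temporary directory, not real user uploads:

```python
import glob
import os
import tempfile

def build_manifest(project_folder):
    # Mirror the plugin's patterns: all .docx and .doc files, searched recursively.
    return [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
           [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, 'sub'))
    for name in ('a.docx', os.path.join('sub', 'b.doc'), 'notes.txt'):
        open(os.path.join(d, name), 'w').close()
    manifest = build_manifest(d)  # picks up a.docx and sub/b.doc, skips notes.txt
```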
spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py DELETED
@@ -1,60 +0,0 @@
- import os
-
- import torch
- from torch.nn import functional as F
-
-
- module_path = os.path.dirname(__file__)
-
-
- def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
-     out = upfirdn2d_native(
-         input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]
-     )
-
-     return out
-
-
- def upfirdn2d_native(
-     input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- ):
-     _, channel, in_h, in_w = input.shape
-     input = input.reshape(-1, in_h, in_w, 1)
-
-     _, in_h, in_w, minor = input.shape
-     kernel_h, kernel_w = kernel.shape
-
-     out = input.view(-1, in_h, 1, in_w, 1, minor)
-     out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
-     out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
-     out = F.pad(
-         out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
-     )
-     out = out[
-         :,
-         max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
-         max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
-         :,
-     ]
-
-     out = out.permute(0, 3, 1, 2)
-     out = out.reshape(
-         [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
-     )
-     w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
-     out = F.conv2d(out, w)
-     out = out.reshape(
-         -1,
-         minor,
-         in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
-         in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
-     )
-     out = out.permute(0, 2, 3, 1)
-     out = out[:, ::down_y, ::down_x, :]
-
-     out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
-     out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
-     return out.view(-1, channel, out_h, out_w)
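The semantics of `upfirdn2d_native` (zero-stuff by the upsampling factor, zero-pad, run a 2-D FIR filter, keep every `down`-th sample) can be checked with a small single-channel NumPy re-implementation. This is an illustrative reduction of the operation, not the PyTorch code above; with `up = down = 1`, no padding, and a 1x1 unit kernel it is the identity:

```python
import numpy as np

def upfirdn2d_np(x, kernel, up=1, down=1, pad=(0, 0)):
    """Upsample, FIR-filter, downsample a single 2-D array (illustrative)."""
    h, w = x.shape
    # 1) Upsample by inserting zeros between samples.
    up_x = np.zeros((h * up, w * up), dtype=float)
    up_x[::up, ::up] = x
    # 2) Zero-pad on both axes.
    up_x = np.pad(up_x, ((pad[0], pad[1]), (pad[0], pad[1])))
    # 3) "Valid" correlation with the flipped kernel (matching torch.flip + conv2d).
    kh, kw = kernel.shape
    fk = kernel[::-1, ::-1]
    out = np.zeros((up_x.shape[0] - kh + 1, up_x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (up_x[i:i + kh, j:j + kw] * fk).sum()
    # 4) Downsample by keeping every down-th sample.
    return out[::down, ::down]

x = np.arange(16.0).reshape(4, 4)
identity = upfirdn2d_np(x, np.ones((1, 1)))        # unchanged
stuffed = upfirdn2d_np(x, np.ones((1, 1)), up=2)   # zeros interleaved, shape (8, 8)
```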
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py DELETED
@@ -1,1020 +0,0 @@
1
- # Copyright 2023 The HuggingFace Team. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- import inspect
16
- import math
17
- import warnings
18
- from typing import Any, Callable, Dict, List, Optional, Tuple, Union
19
-
20
- import numpy as np
21
- import torch
22
- from torch.nn import functional as F
23
- from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
24
-
25
- from ...image_processor import VaeImageProcessor
26
- from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
27
- from ...models import AutoencoderKL, UNet2DConditionModel
28
- from ...models.attention_processor import Attention
29
- from ...schedulers import KarrasDiffusionSchedulers
30
- from ...utils import logging, randn_tensor, replace_example_docstring
31
- from ..pipeline_utils import DiffusionPipeline
32
- from . import StableDiffusionPipelineOutput
33
- from .safety_checker import StableDiffusionSafetyChecker
34
-
35
-
36
- logger = logging.get_logger(__name__)
37
-
38
- EXAMPLE_DOC_STRING = """
39
- Examples:
40
- ```py
41
- >>> import torch
42
- >>> from diffusers import StableDiffusionAttendAndExcitePipeline
43
-
44
- >>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
45
- ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
46
- ... ).to("cuda")
47
-
48
-
49
- >>> prompt = "a cat and a frog"
50
-
51
- >>> # use get_indices function to find out indices of the tokens you want to alter
52
- >>> pipe.get_indices(prompt)
53
- {0: '<|startoftext|>', 1: 'a</w>', 2: 'cat</w>', 3: 'and</w>', 4: 'a</w>', 5: 'frog</w>', 6: '<|endoftext|>'}
54
-
55
- >>> token_indices = [2, 5]
56
- >>> seed = 6141
57
- >>> generator = torch.Generator("cuda").manual_seed(seed)
58
-
59
- >>> images = pipe(
60
- ... prompt=prompt,
61
- ... token_indices=token_indices,
62
- ... guidance_scale=7.5,
63
- ... generator=generator,
64
- ... num_inference_steps=50,
65
- ... max_iter_to_alter=25,
66
- ... ).images
67
-
68
- >>> image = images[0]
69
- >>> image.save(f"../images/{prompt}_{seed}.png")
70
- ```
71
- """
72
-
73
-
74
- class AttentionStore:
75
- @staticmethod
76
- def get_empty_store():
77
- return {"down": [], "mid": [], "up": []}
78
-
79
- def __call__(self, attn, is_cross: bool, place_in_unet: str):
80
- if self.cur_att_layer >= 0 and is_cross:
81
- if attn.shape[1] == np.prod(self.attn_res):
82
- self.step_store[place_in_unet].append(attn)
83
-
84
- self.cur_att_layer += 1
85
- if self.cur_att_layer == self.num_att_layers:
86
- self.cur_att_layer = 0
87
- self.between_steps()
88
-
89
- def between_steps(self):
90
- self.attention_store = self.step_store
91
- self.step_store = self.get_empty_store()
92
-
93
- def get_average_attention(self):
94
- average_attention = self.attention_store
95
- return average_attention
96
-
97
- def aggregate_attention(self, from_where: List[str]) -> torch.Tensor:
98
- """Aggregates the attention across the different layers and heads at the specified resolution."""
99
- out = []
100
- attention_maps = self.get_average_attention()
101
- for location in from_where:
102
- for item in attention_maps[location]:
103
- cross_maps = item.reshape(-1, self.attn_res[0], self.attn_res[1], item.shape[-1])
104
- out.append(cross_maps)
105
- out = torch.cat(out, dim=0)
106
- out = out.sum(0) / out.shape[0]
107
- return out
108
-
109
- def reset(self):
110
- self.cur_att_layer = 0
111
- self.step_store = self.get_empty_store()
112
- self.attention_store = {}
113
-
114
- def __init__(self, attn_res):
115
- """
116
- Initialize an empty AttentionStore :param step_index: used to visualize only a specific step in the diffusion
117
- process
118
- """
119
- self.num_att_layers = -1
120
- self.cur_att_layer = 0
121
- self.step_store = self.get_empty_store()
122
- self.attention_store = {}
123
- self.curr_step_index = 0
124
- self.attn_res = attn_res
125
-
126
-
127
- class AttendExciteAttnProcessor:
128
- def __init__(self, attnstore, place_in_unet):
129
- super().__init__()
130
- self.attnstore = attnstore
131
- self.place_in_unet = place_in_unet
132
-
133
- def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None):
134
- batch_size, sequence_length, _ = hidden_states.shape
135
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
136
-
137
- query = attn.to_q(hidden_states)
138
-
139
- is_cross = encoder_hidden_states is not None
140
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
141
- key = attn.to_k(encoder_hidden_states)
142
- value = attn.to_v(encoder_hidden_states)
143
-
144
- query = attn.head_to_batch_dim(query)
145
- key = attn.head_to_batch_dim(key)
146
- value = attn.head_to_batch_dim(value)
147
-
148
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
149
-
150
- # only need to store attention maps during the Attend and Excite process
151
- if attention_probs.requires_grad:
152
- self.attnstore(attention_probs, is_cross, self.place_in_unet)
153
-
154
- hidden_states = torch.bmm(attention_probs, value)
155
- hidden_states = attn.batch_to_head_dim(hidden_states)
156
-
157
- # linear proj
158
- hidden_states = attn.to_out[0](hidden_states)
159
- # dropout
160
- hidden_states = attn.to_out[1](hidden_states)
161
-
162
- return hidden_states
163
-
164
-
165
-class StableDiffusionAttendAndExcitePipeline(DiffusionPipeline, TextualInversionLoaderMixin):
-    r"""
-    Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite.
-
-    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
-    implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
-    Args:
-        vae ([`AutoencoderKL`]):
-            Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
-        text_encoder ([`~transformers.CLIPTextModel`]):
-            Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
-        tokenizer ([`~transformers.CLIPTokenizer`]):
-            A `CLIPTokenizer` to tokenize text.
-        unet ([`UNet2DConditionModel`]):
-            A `UNet2DConditionModel` to denoise the encoded image latents.
-        scheduler ([`SchedulerMixin`]):
-            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
-            [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
-        safety_checker ([`StableDiffusionSafetyChecker`]):
-            Classification module that estimates whether generated images could be considered offensive or harmful.
-            Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
-            about a model's potential harms.
-        feature_extractor ([`~transformers.CLIPImageProcessor`]):
-            A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
-    """
-    _optional_components = ["safety_checker", "feature_extractor"]
-
-    def __init__(
-        self,
-        vae: AutoencoderKL,
-        text_encoder: CLIPTextModel,
-        tokenizer: CLIPTokenizer,
-        unet: UNet2DConditionModel,
-        scheduler: KarrasDiffusionSchedulers,
-        safety_checker: StableDiffusionSafetyChecker,
-        feature_extractor: CLIPImageProcessor,
-        requires_safety_checker: bool = True,
-    ):
-        super().__init__()
-
-        if safety_checker is None and requires_safety_checker:
-            logger.warning(
-                f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
-                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
-                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
-                " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
-                " it only for use cases that involve analyzing network behavior or auditing its results. For more"
-                " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
-            )
-
-        if safety_checker is not None and feature_extractor is None:
-            raise ValueError(
-                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
-                " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
-            )
-
-        self.register_modules(
-            vae=vae,
-            text_encoder=text_encoder,
-            tokenizer=tokenizer,
-            unet=unet,
-            scheduler=scheduler,
-            safety_checker=safety_checker,
-            feature_extractor=feature_extractor,
-        )
-        self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-        self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
-        self.register_to_config(requires_safety_checker=requires_safety_checker)
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
-    def enable_vae_slicing(self):
-        r"""
-        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
-        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
-        """
-        self.vae.enable_slicing()
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
-    def disable_vae_slicing(self):
-        r"""
-        Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
-        computing decoding in one step.
-        """
-        self.vae.disable_slicing()
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
-    def _encode_prompt(
-        self,
-        prompt,
-        device,
-        num_images_per_prompt,
-        do_classifier_free_guidance,
-        negative_prompt=None,
-        prompt_embeds: Optional[torch.FloatTensor] = None,
-        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
-        lora_scale: Optional[float] = None,
-    ):
-        r"""
-        Encodes the prompt into text encoder hidden states.
-
-        Args:
-            prompt (`str` or `List[str]`, *optional*):
-                prompt to be encoded
-            device (`torch.device`):
-                torch device
-            num_images_per_prompt (`int`):
-                number of images that should be generated per prompt
-            do_classifier_free_guidance (`bool`):
-                whether to use classifier free guidance or not
-            negative_prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts not to guide the image generation. If not defined, one has to pass
-                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
-                less than `1`).
-            prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
-                provided, text embeddings will be generated from `prompt` input argument.
-            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
-                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
-                argument.
-            lora_scale (`float`, *optional*):
-                A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
-        """
-        # set lora scale so that monkey patched LoRA
-        # function of text encoder can correctly access it
-        if lora_scale is not None and isinstance(self, LoraLoaderMixin):
-            self._lora_scale = lora_scale
-
-        if prompt is not None and isinstance(prompt, str):
-            batch_size = 1
-        elif prompt is not None and isinstance(prompt, list):
-            batch_size = len(prompt)
-        else:
-            batch_size = prompt_embeds.shape[0]
-
-        if prompt_embeds is None:
-            # textual inversion: process multi-vector tokens if necessary
-            if isinstance(self, TextualInversionLoaderMixin):
-                prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
-            text_inputs = self.tokenizer(
-                prompt,
-                padding="max_length",
-                max_length=self.tokenizer.model_max_length,
-                truncation=True,
-                return_tensors="pt",
-            )
-            text_input_ids = text_inputs.input_ids
-            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
-            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
-                text_input_ids, untruncated_ids
-            ):
-                removed_text = self.tokenizer.batch_decode(
-                    untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
-                )
-                logger.warning(
-                    "The following part of your input was truncated because CLIP can only handle sequences up to"
-                    f" {self.tokenizer.model_max_length} tokens: {removed_text}"
-                )
-
-            if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-                attention_mask = text_inputs.attention_mask.to(device)
-            else:
-                attention_mask = None
-
-            prompt_embeds = self.text_encoder(
-                text_input_ids.to(device),
-                attention_mask=attention_mask,
-            )
-            prompt_embeds = prompt_embeds[0]
-
-        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
-        bs_embed, seq_len, _ = prompt_embeds.shape
-        # duplicate text embeddings for each generation per prompt, using mps friendly method
-        prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
-        prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
-        # get unconditional embeddings for classifier free guidance
-        if do_classifier_free_guidance and negative_prompt_embeds is None:
-            uncond_tokens: List[str]
-            if negative_prompt is None:
-                uncond_tokens = [""] * batch_size
-            elif prompt is not None and type(prompt) is not type(negative_prompt):
-                raise TypeError(
-                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
-                    f" {type(prompt)}."
-                )
-            elif isinstance(negative_prompt, str):
-                uncond_tokens = [negative_prompt]
-            elif batch_size != len(negative_prompt):
-                raise ValueError(
-                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
-                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
-                    " the batch size of `prompt`."
-                )
-            else:
-                uncond_tokens = negative_prompt
-
-            # textual inversion: process multi-vector tokens if necessary
-            if isinstance(self, TextualInversionLoaderMixin):
-                uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
-            max_length = prompt_embeds.shape[1]
-            uncond_input = self.tokenizer(
-                uncond_tokens,
-                padding="max_length",
-                max_length=max_length,
-                truncation=True,
-                return_tensors="pt",
-            )
-
-            if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-                attention_mask = uncond_input.attention_mask.to(device)
-            else:
-                attention_mask = None
-
-            negative_prompt_embeds = self.text_encoder(
-                uncond_input.input_ids.to(device),
-                attention_mask=attention_mask,
-            )
-            negative_prompt_embeds = negative_prompt_embeds[0]
-
-        if do_classifier_free_guidance:
-            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-            seq_len = negative_prompt_embeds.shape[1]
-
-            negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
-            negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
-            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
-            # For classifier free guidance, we need to do two forward passes.
-            # Here we concatenate the unconditional and text embeddings into a single batch
-            # to avoid doing two forward passes
-            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
-        return prompt_embeds
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
-    def run_safety_checker(self, image, device, dtype):
-        if self.safety_checker is None:
-            has_nsfw_concept = None
-        else:
-            if torch.is_tensor(image):
-                feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
-            else:
-                feature_extractor_input = self.image_processor.numpy_to_pil(image)
-            safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
-            image, has_nsfw_concept = self.safety_checker(
-                images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
-            )
-        return image, has_nsfw_concept
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
-    def decode_latents(self, latents):
-        warnings.warn(
-            "The decode_latents method is deprecated and will be removed in a future version. Please"
-            " use VaeImageProcessor instead",
-            FutureWarning,
-        )
-        latents = 1 / self.vae.config.scaling_factor * latents
-        image = self.vae.decode(latents, return_dict=False)[0]
-        image = (image / 2 + 0.5).clamp(0, 1)
-        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
-        image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-        return image
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
-    def prepare_extra_step_kwargs(self, generator, eta):
-        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-        # and should be between [0, 1]
-
-        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-        extra_step_kwargs = {}
-        if accepts_eta:
-            extra_step_kwargs["eta"] = eta
-
-        # check if the scheduler accepts generator
-        accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
-        if accepts_generator:
-            extra_step_kwargs["generator"] = generator
-        return extra_step_kwargs
-
-    def check_inputs(
-        self,
-        prompt,
-        indices,
-        height,
-        width,
-        callback_steps,
-        negative_prompt=None,
-        prompt_embeds=None,
-        negative_prompt_embeds=None,
-    ):
-        if height % 8 != 0 or width % 8 != 0:
-            raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
-        if (callback_steps is None) or (
-            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
-        ):
-            raise ValueError(
-                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
-                f" {type(callback_steps)}."
-            )
-
-        if prompt is not None and prompt_embeds is not None:
-            raise ValueError(
-                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
-                " only forward one of the two."
-            )
-        elif prompt is None and prompt_embeds is None:
-            raise ValueError(
-                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
-            )
-        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
-            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
-        if negative_prompt is not None and negative_prompt_embeds is not None:
-            raise ValueError(
-                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
-                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
-            )
-
-        if prompt_embeds is not None and negative_prompt_embeds is not None:
-            if prompt_embeds.shape != negative_prompt_embeds.shape:
-                raise ValueError(
-                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
-                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
-                    f" {negative_prompt_embeds.shape}."
-                )
-
-        indices_is_list_ints = isinstance(indices, list) and isinstance(indices[0], int)
-        indices_is_list_list_ints = (
-            isinstance(indices, list) and isinstance(indices[0], list) and isinstance(indices[0][0], int)
-        )
-
-        if not indices_is_list_ints and not indices_is_list_list_ints:
-            raise TypeError("`indices` must be a list of ints or a list of a list of ints")
-
-        if indices_is_list_ints:
-            indices_batch_size = 1
-        elif indices_is_list_list_ints:
-            indices_batch_size = len(indices)
-
-        if prompt is not None and isinstance(prompt, str):
-            prompt_batch_size = 1
-        elif prompt is not None and isinstance(prompt, list):
-            prompt_batch_size = len(prompt)
-        elif prompt_embeds is not None:
-            prompt_batch_size = prompt_embeds.shape[0]
-
-        if indices_batch_size != prompt_batch_size:
-            raise ValueError(
-                f"indices batch size must be same as prompt batch size. indices batch size: {indices_batch_size}, prompt batch size: {prompt_batch_size}"
-            )
-
-    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
-    def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
-        shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
-        if isinstance(generator, list) and len(generator) != batch_size:
-            raise ValueError(
-                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-            )
-
-        if latents is None:
-            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-        else:
-            latents = latents.to(device)
-
-        # scale the initial noise by the standard deviation required by the scheduler
-        latents = latents * self.scheduler.init_noise_sigma
-        return latents
-
-    @staticmethod
-    def _compute_max_attention_per_index(
-        attention_maps: torch.Tensor,
-        indices: List[int],
-    ) -> List[torch.Tensor]:
-        """Computes the maximum attention value for each of the tokens we wish to alter."""
-        attention_for_text = attention_maps[:, :, 1:-1]
-        attention_for_text *= 100
-        attention_for_text = torch.nn.functional.softmax(attention_for_text, dim=-1)
-
-        # Shift indices since we removed the first token
-        indices = [index - 1 for index in indices]
-
-        # Extract the maximum values
-        max_indices_list = []
-        for i in indices:
-            image = attention_for_text[:, :, i]
-            smoothing = GaussianSmoothing().to(attention_maps.device)
-            input = F.pad(image.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect")
-            image = smoothing(input).squeeze(0).squeeze(0)
-            max_indices_list.append(image.max())
-        return max_indices_list
-
-    def _aggregate_and_get_max_attention_per_token(
-        self,
-        indices: List[int],
-    ):
-        """Aggregates the attention for each token and computes the max activation value for each token to alter."""
-        attention_maps = self.attention_store.aggregate_attention(
-            from_where=("up", "down", "mid"),
-        )
-        max_attention_per_index = self._compute_max_attention_per_index(
-            attention_maps=attention_maps,
-            indices=indices,
-        )
-        return max_attention_per_index
-
-    @staticmethod
-    def _compute_loss(max_attention_per_index: List[torch.Tensor]) -> torch.Tensor:
-        """Computes the attend-and-excite loss using the maximum attention value for each token."""
-        losses = [max(0, 1.0 - curr_max) for curr_max in max_attention_per_index]
-        loss = max(losses)
-        return loss
-
-    @staticmethod
-    def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_size: float) -> torch.Tensor:
-        """Update the latent according to the computed loss."""
-        grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents], retain_graph=True)[0]
-        latents = latents - step_size * grad_cond
-        return latents
-
-    def _perform_iterative_refinement_step(
-        self,
-        latents: torch.Tensor,
-        indices: List[int],
-        loss: torch.Tensor,
-        threshold: float,
-        text_embeddings: torch.Tensor,
-        step_size: float,
-        t: int,
-        max_refinement_steps: int = 20,
-    ):
-        """
-        Performs the iterative latent refinement introduced in the paper. Here, we continuously update the latent code
-        according to our loss objective until the given threshold is reached for all tokens.
-        """
-        iteration = 0
-        target_loss = max(0, 1.0 - threshold)
-        while loss > target_loss:
-            iteration += 1
-
-            latents = latents.clone().detach().requires_grad_(True)
-            self.unet(latents, t, encoder_hidden_states=text_embeddings).sample
-            self.unet.zero_grad()
-
-            # Get max activation value for each subject token
-            max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
-                indices=indices,
-            )
-
-            loss = self._compute_loss(max_attention_per_index)
-
-            if loss != 0:
-                latents = self._update_latent(latents, loss, step_size)
-
-            logger.info(f"\t Try {iteration}. loss: {loss}")
-
-            if iteration >= max_refinement_steps:
-                logger.info(f"\t Exceeded max number of iterations ({max_refinement_steps})! ")
-                break
-
-        # Run one more time but don't compute gradients and update the latents.
-        # We just need to compute the new loss - the grad update will occur below
-        latents = latents.clone().detach().requires_grad_(True)
-        _ = self.unet(latents, t, encoder_hidden_states=text_embeddings).sample
-        self.unet.zero_grad()
-
-        # Get max activation value for each subject token
-        max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
-            indices=indices,
-        )
-        loss = self._compute_loss(max_attention_per_index)
-        logger.info(f"\t Finished with loss of: {loss}")
-        return loss, latents, max_attention_per_index
-
-    def register_attention_control(self):
-        attn_procs = {}
-        cross_att_count = 0
-        for name in self.unet.attn_processors.keys():
-            if name.startswith("mid_block"):
-                place_in_unet = "mid"
-            elif name.startswith("up_blocks"):
-                place_in_unet = "up"
-            elif name.startswith("down_blocks"):
-                place_in_unet = "down"
-            else:
-                continue
-
-            cross_att_count += 1
-            attn_procs[name] = AttendExciteAttnProcessor(attnstore=self.attention_store, place_in_unet=place_in_unet)
-
-        self.unet.set_attn_processor(attn_procs)
-        self.attention_store.num_att_layers = cross_att_count
-
-    def get_indices(self, prompt: str) -> Dict[str, int]:
-        """Utility function to list the indices of the tokens you wish to alter."""
-        ids = self.tokenizer(prompt).input_ids
-        indices = {i: tok for tok, i in zip(self.tokenizer.convert_ids_to_tokens(ids), range(len(ids)))}
-        return indices
-
-    @torch.no_grad()
-    @replace_example_docstring(EXAMPLE_DOC_STRING)
-    def __call__(
-        self,
-        prompt: Union[str, List[str]],
-        token_indices: Union[List[int], List[List[int]]],
-        height: Optional[int] = None,
-        width: Optional[int] = None,
-        num_inference_steps: int = 50,
-        guidance_scale: float = 7.5,
-        negative_prompt: Optional[Union[str, List[str]]] = None,
-        num_images_per_prompt: int = 1,
-        eta: float = 0.0,
-        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-        latents: Optional[torch.FloatTensor] = None,
-        prompt_embeds: Optional[torch.FloatTensor] = None,
-        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
-        output_type: Optional[str] = "pil",
-        return_dict: bool = True,
-        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-        callback_steps: int = 1,
-        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
-        max_iter_to_alter: int = 25,
-        thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8},
-        scale_factor: int = 20,
-        attn_res: Optional[Tuple[int]] = (16, 16),
-    ):
-        r"""
-        The call function to the pipeline for generation.
-
-        Args:
-            prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
-            token_indices (`List[int]`):
-                The token indices to alter with attend-and-excite.
-            height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                The height in pixels of the generated image.
-            width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                The width in pixels of the generated image.
-            num_inference_steps (`int`, *optional*, defaults to 50):
-                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 7.5):
-                A higher guidance scale value encourages the model to generate images closely linked to the text
-                `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
-            negative_prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts to guide what to not include in image generation. If not defined, you need to
-                pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
-            num_images_per_prompt (`int`, *optional*, defaults to 1):
-                The number of images to generate per prompt.
-            eta (`float`, *optional*, defaults to 0.0):
-                Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
-                to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
-            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                generation deterministic.
-            latents (`torch.FloatTensor`, *optional*):
-                Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
-                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor is generated by sampling using the supplied random `generator`.
-            prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
-                provided, text embeddings are generated from the `prompt` input argument.
-            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
-                not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
-            output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between `PIL.Image` or `np.array`.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
-                plain tuple.
-            callback (`Callable`, *optional*):
-                A function that is called every `callback_steps` steps during inference. The function is called with
-                the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-            callback_steps (`int`, *optional*, defaults to 1):
-                The frequency at which the `callback` function is called. If not specified, the callback is called at
-                every step.
-            cross_attention_kwargs (`dict`, *optional*):
-                A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
-                [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-            max_iter_to_alter (`int`, *optional*, defaults to `25`):
-                Number of denoising steps to apply attend-and-excite. The `max_iter_to_alter` denoising steps are when
-                attend-and-excite is applied. For example, if `max_iter_to_alter` is `25` and there are a total of `30`
-                denoising steps, the first `25` denoising steps apply attend-and-excite and the last `5` will not.
-            thresholds (`dict`, *optional*, defaults to `{0: 0.05, 10: 0.5, 20: 0.8}`):
-                Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in.
-            scale_factor (`int`, *optional*, defaults to 20):
-                Scale factor to control the step size of each attend-and-excite update.
-            attn_res (`tuple`, *optional*, default computed from width and height):
-                The 2D resolution of the semantic attention map.
-
-        Examples:
-
-        Returns:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-                If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
-                otherwise a `tuple` is returned where the first element is a list with the generated images and the
-                second element is a list of `bool`s indicating whether the corresponding generated image contains
-                "not-safe-for-work" (nsfw) content.
-        """
-
-        # 0. Default height and width to unet
-        height = height or self.unet.config.sample_size * self.vae_scale_factor
-        width = width or self.unet.config.sample_size * self.vae_scale_factor
-
-        # 1. Check inputs. Raise error if not correct
-        self.check_inputs(
-            prompt,
-            token_indices,
-            height,
-            width,
-            callback_steps,
-            negative_prompt,
-            prompt_embeds,
-            negative_prompt_embeds,
-        )
-
-        # 2. Define call parameters
-        if prompt is not None and isinstance(prompt, str):
-            batch_size = 1
-        elif prompt is not None and isinstance(prompt, list):
-            batch_size = len(prompt)
-        else:
-            batch_size = prompt_embeds.shape[0]
-
-        device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
-        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-        # corresponds to doing no classifier free guidance.
-        do_classifier_free_guidance = guidance_scale > 1.0
-
-        # 3. Encode input prompt
-        prompt_embeds = self._encode_prompt(
-            prompt,
-            device,
-            num_images_per_prompt,
-            do_classifier_free_guidance,
-            negative_prompt,
-            prompt_embeds=prompt_embeds,
-            negative_prompt_embeds=negative_prompt_embeds,
-        )
-
-        # 4. Prepare timesteps
-        self.scheduler.set_timesteps(num_inference_steps, device=device)
-        timesteps = self.scheduler.timesteps
-
-        # 5. Prepare latent variables
-        num_channels_latents = self.unet.config.in_channels
-        latents = self.prepare_latents(
-            batch_size * num_images_per_prompt,
-            num_channels_latents,
-            height,
-            width,
-            prompt_embeds.dtype,
-            device,
-            generator,
-            latents,
-        )
-
-        # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
-        extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
-        if attn_res is None:
-            attn_res = int(np.ceil(width / 32)), int(np.ceil(height / 32))
-        self.attention_store = AttentionStore(attn_res)
-        self.register_attention_control()
-
-        # default config for step size from original repo
-        scale_range = np.linspace(1.0, 0.5, len(self.scheduler.timesteps))
-        step_size = scale_factor * np.sqrt(scale_range)
-
-        text_embeddings = (
-            prompt_embeds[batch_size * num_images_per_prompt :] if do_classifier_free_guidance else prompt_embeds
-        )
-
-        if isinstance(token_indices[0], int):
-            token_indices = [token_indices]
-
-        indices = []
-
-        for ind in token_indices:
-            indices = indices + [ind] * num_images_per_prompt
-
-        # 7. Denoising loop
-        num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
-        with self.progress_bar(total=num_inference_steps) as progress_bar:
-            for i, t in enumerate(timesteps):
-                # Attend and excite process
-                with torch.enable_grad():
-                    latents = latents.clone().detach().requires_grad_(True)
-                    updated_latents = []
-                    for latent, index, text_embedding in zip(latents, indices, text_embeddings):
-                        # Forward pass of denoising with text conditioning
-                        latent = latent.unsqueeze(0)
-                        text_embedding = text_embedding.unsqueeze(0)
-
-                        self.unet(
-                            latent,
-                            t,
-                            encoder_hidden_states=text_embedding,
-                            cross_attention_kwargs=cross_attention_kwargs,
-                        ).sample
-                        self.unet.zero_grad()
-
-                        # Get max activation value for each subject token
-                        max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
-                            indices=index,
-                        )
-
-                        loss = self._compute_loss(max_attention_per_index=max_attention_per_index)
-
-                        # If this is an iterative refinement step, verify we have reached the desired threshold for all
-                        if i in thresholds.keys() and loss > 1.0 - thresholds[i]:
-                            loss, latent, max_attention_per_index = self._perform_iterative_refinement_step(
-                                latents=latent,
-                                indices=index,
-                                loss=loss,
-                                threshold=thresholds[i],
-                                text_embeddings=text_embedding,
-                                step_size=step_size[i],
-                                t=t,
-                            )
-
-                        # Perform gradient update
-                        if i < max_iter_to_alter:
-                            if loss != 0:
-                                latent = self._update_latent(
-                                    latents=latent,
-                                    loss=loss,
-                                    step_size=step_size[i],
-                                )
-                            logger.info(f"Iteration {i} | Loss: {loss:0.4f}")
-
-                        updated_latents.append(latent)
-
-                    latents = torch.cat(updated_latents, dim=0)
-
-                # expand the latents if we are doing classifier free guidance
-                latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-                latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
-                # predict the noise residual
-                noise_pred = self.unet(
-                    latent_model_input,
-                    t,
-                    encoder_hidden_states=prompt_embeds,
-                    cross_attention_kwargs=cross_attention_kwargs,
-                ).sample
-
-                # perform guidance
-                if do_classifier_free_guidance:
925
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
926
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
927
-
928
- # compute the previous noisy sample x_t -> x_t-1
929
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
930
-
931
- # call the callback, if provided
932
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
933
- progress_bar.update()
934
- if callback is not None and i % callback_steps == 0:
935
- callback(i, t, latents)
936
-
937
- # 8. Post-processing
938
- if not output_type == "latent":
939
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
940
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
941
- else:
942
- image = latents
943
- has_nsfw_concept = None
944
-
945
- if has_nsfw_concept is None:
946
- do_denormalize = [True] * image.shape[0]
947
- else:
948
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
949
-
950
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
951
-
952
- if not return_dict:
953
- return (image, has_nsfw_concept)
954
-
955
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
956
-
957
-
958
- class GaussianSmoothing(torch.nn.Module):
959
- """
960
- Arguments:
961
- Apply gaussian smoothing on a 1d, 2d or 3d tensor. Filtering is performed seperately for each channel in the input
962
- using a depthwise convolution.
963
- channels (int, sequence): Number of channels of the input tensors. Output will
964
- have this number of channels as well.
965
- kernel_size (int, sequence): Size of the gaussian kernel. sigma (float, sequence): Standard deviation of the
966
- gaussian kernel. dim (int, optional): The number of dimensions of the data.
967
- Default value is 2 (spatial).
968
- """
969
-
970
- # channels=1, kernel_size=kernel_size, sigma=sigma, dim=2
971
- def __init__(
972
- self,
973
- channels: int = 1,
974
- kernel_size: int = 3,
975
- sigma: float = 0.5,
976
- dim: int = 2,
977
- ):
978
- super().__init__()
979
-
980
- if isinstance(kernel_size, int):
981
- kernel_size = [kernel_size] * dim
982
- if isinstance(sigma, float):
983
- sigma = [sigma] * dim
984
-
985
- # The gaussian kernel is the product of the
986
- # gaussian function of each dimension.
987
- kernel = 1
988
- meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size])
989
- for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
990
- mean = (size - 1) / 2
991
- kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2))
992
-
993
- # Make sure sum of values in gaussian kernel equals 1.
994
- kernel = kernel / torch.sum(kernel)
995
-
996
- # Reshape to depthwise convolutional weight
997
- kernel = kernel.view(1, 1, *kernel.size())
998
- kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
999
-
1000
- self.register_buffer("weight", kernel)
1001
- self.groups = channels
1002
-
1003
- if dim == 1:
1004
- self.conv = F.conv1d
1005
- elif dim == 2:
1006
- self.conv = F.conv2d
1007
- elif dim == 3:
1008
- self.conv = F.conv3d
1009
- else:
1010
- raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim))
1011
-
1012
- def forward(self, input):
1013
- """
1014
- Arguments:
1015
- Apply gaussian filter to input.
1016
- input (torch.Tensor): Input to apply gaussian filter on.
1017
- Returns:
1018
- filtered (torch.Tensor): Filtered output.
1019
- """
1020
- return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups)
 
spaces/Awiny/Image2Paragraph/models/grit_model.py DELETED
@@ -1,27 +0,0 @@
- import os
- from models.grit_src.image_dense_captions import image_caption_api
-
- class DenseCaptioning():
-     def __init__(self, device):
-         self.device = device
-
-
-     def initialize_model(self):
-         pass
-
-     def image_dense_caption_debug(self, image_src):
-         dense_caption = """
-         1. the broccoli is green, [0, 0, 333, 325];
-         2. a piece of broccoli, [0, 147, 143, 324];
-         3. silver fork on plate, [4, 547, 252, 612];
-         """
-         return dense_caption
-
-     def image_dense_caption(self, image_src):
-         dense_caption = image_caption_api(image_src, self.device)
-         print('\033[1;35m' + '*' * 100 + '\033[0m')
-         print("Step2, Dense Caption:\n")
-         print(dense_caption)
-         print('\033[1;35m' + '*' * 100 + '\033[0m')
-         return dense_caption
-
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py DELETED
@@ -1,9 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- from .coco import load_coco_json, load_sem_seg, register_coco_instances, convert_to_coco_json
- from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated
- from .lvis import load_lvis_json, register_lvis_instances, get_lvis_instances_meta
- from .pascal_voc import load_voc_instances, register_pascal_voc
- from . import builtin as _builtin  # ensure the builtin datasets are registered
-
-
- __all__ = [k for k in globals().keys() if not k.startswith("_")]
 
spaces/Bart92/RVC_HF/go-applio-manager-recode.bat DELETED
@@ -1,322 +0,0 @@
- @echo off
- title Applio Installer
-
- :::                        _ _         _____                    _
- :::      /\               | (_)       |  __ \                  | |
- :::     /  \   _ __  _ __ | |_  ___   | |__) |___  ___ ___   __| | ___
- :::    / /\ \ | '_ \| '_ \| | |/ _ \  |  _  // _ \/ __/ _ \ / _` |/ _ \
- :::   / ____ \| |_) | |_) | | | (_) | | | \ \  __/ (_| (_) | (_| |  __/
- :::  /_/    \_\ .__/| .__/|_|_|\___/  |_|  \_\___|\___\___/ \__,_|\___|
- :::            | |   | |
- :::            |_|   |_|
- :::
- :::
-
- setlocal
- set "branch=applio-recode"
- set "runtime=runtime-recode"
- set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip"
- set "fixesFolder=fixes"
- set "localFixesPy=local_fixes.py"
- set "principal=%cd%"
- set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
- set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main"
-
- :menu
- for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
- echo [1] Reinstall Applio
- echo [2] Update Applio
- echo [3] Update Applio + Runtime
- echo.
-
- set /p choice=Select an option:
- set choice=%choice: =%
-
- if "%choice%"=="1" (
-     cls
-     echo Starting Applio Reinstaller...
-     echo.
-     goto reinstaller
-     pause
-     cls
-     goto menu
-
- )
-
- if "%choice%"=="2" (
-     cls
-     echo Starting Applio Updater...
-     echo.
-     goto updater
-     pause
-     cls
-     goto menu
- )
-
- if "%choice%"=="3" (
-     cls
-     echo Updating Applio + Runtime...
-     echo.
-     goto updaterRuntime
-     pause
-     cls
-     goto menu
-
- )
-
- cls
- echo Invalid option. Please enter a number from 1 to 3.
- echo.
- echo Press 'Enter' to access the main menu...
- pause>nul
- cls
- goto menu
-
- :reinstaller
-
- echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing.
- echo.
- echo Step-by-step guide: https://rentry.org/appliolocal
- echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe
- echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe
- echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe
- echo Python: Add this path to the Windows environment variables (user PATH variable): %principal%\runtime\Scripts
- echo.
- pause
- cls
-
- echo Downloading ZIP file...
- powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
- echo.
-
- echo Extracting ZIP file...
- powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
- echo.
-
- echo Copying folder and file structure from subdirectory to main directory...
- robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
- echo.
-
- echo Deleting contents of subdirectory (files and folders)...
- rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
- echo.
-
- echo Cleaning up...
- del "%principal%\repo.zip"
- echo.
- cls
-
- echo Proceeding to download the models...
- echo.
-
- echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
- pause
- cls
-
- echo Downloading models in the assets folder...
- cd "assets"
- echo.
- echo Downloading the "pretrained" folder...
- cd "pretrained"
- curl -LJO "%URL_BASE%/pretrained/D32k.pth"
- curl -LJO "%URL_BASE%/pretrained/D40k.pth"
- curl -LJO "%URL_BASE%/pretrained/D48k.pth"
- curl -LJO "%URL_BASE%/pretrained/G32k.pth"
- curl -LJO "%URL_BASE%/pretrained/G40k.pth"
- curl -LJO "%URL_BASE%/pretrained/G48k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0D32k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0D40k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0D48k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0G32k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0G40k.pth"
- curl -LJO "%URL_BASE%/pretrained/f0G48k.pth"
- cd ".."
- echo.
- cls
-
- echo Downloading the "pretrained_v2" folder...
- cd "pretrained_v2"
- curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth"
- curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth"
- cd ".."
- echo.
- cls
-
- echo Downloading the hubert_base.pt file...
- cd "hubert"
- curl -LJO "%URL_BASE%/hubert_base.pt"
- cd ".."
- echo.
- cls
-
-
- echo Downloading the rmvpe.pt file...
- cd "rmvpe"
- curl -LJO "%URL_BASE%/rmvpe.pt"
- echo.
- cls
-
- echo Downloading the rmvpe.onnx file...
- curl -LJO "%URL_BASE%/rmvpe.onnx"
- cd ".."
- cd ".."
- echo.
- cls
-
- echo Downloading the rest of the large files
-
- echo Downloading the "uvr5_weights" folder...
- cd "uvr5_weights"
- curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth"
- curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth"
- curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth"
- curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth"
- curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth"
- curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth"
- cd ".."
- echo.
- cls
-
- echo Downloading the ffmpeg.exe file...
- curl -LJO "%URL_BASE%/ffmpeg.exe"
- echo.
- cls
-
- echo Downloading the ffprobe.exe file...
- curl -LJO "%URL_BASE%/ffprobe.exe"
- echo.
- cls
-
- echo Downloading the runtime.zip file...
- curl -LJO "%URL_EXTRA%/%runtime%.zip"
- echo.
- cls
-
- echo Extracting the runtime.zip file, this might take a while...
- powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
- del %runtime%.zip
- echo.
- cls
-
- echo Downloads completed!
- echo.
-
- echo Checking if the local_fixes.py file exists in the Fixes folder...
- if exist "%fixesFolder%\%localFixesPy%" (
-     echo Running the file...
-     runtime\python.exe "%fixesFolder%\%localFixesPy%"
- ) else (
-     echo The "%localFixesPy%" file was not found in the "Fixes" folder.
- )
- echo.
-
- echo Fixes Applied!
- echo.
-
- echo Applio has been reinstalled!
- echo.
- echo Press 'Enter' to access the main menu...
- pause>nul
- cls
- goto menu
-
-
- :updater
-
- echo Downloading the ZIP file...
- powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
- echo.
-
- echo Extracting ZIP file...
- powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
- echo.
-
- echo Copying folder and file structure from subdirectory to main directory...
- robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
- echo.
-
- echo Deleting contents of the subdirectory (files and folders)...
- rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
- echo.
-
- echo Cleaning up...
- del "%principal%\repo.zip"
- echo.
- cls
-
- echo Verifying if the local_fixes.py file exists in the Fixes folder...
- if exist "%fixesFolder%\%localFixesPy%" (
-     echo Running the file...
-     runtime\python.exe "%fixesFolder%\%localFixesPy%"
- ) else (
-     echo The file "%localFixesPy%" was not found in the "Fixes" folder.
- )
- echo.
-
- echo Applio has been updated!
- echo.
- echo Press 'Enter' to access the main menu...
- pause>nul
- cls
- goto menu
-
-
- :updaterRuntime
-
- echo Downloading the ZIP file...
- powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
- echo.
-
- echo Extracting ZIP file...
- powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
- echo.
-
- echo Copying folder and file structure from subdirectory to main directory...
- robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
- echo.
-
- echo Deleting contents of the subdirectory (files and folders)...
- rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
- echo.
-
- echo Cleaning up...
- del "%principal%\repo.zip"
- echo.
- cls
-
- echo Downloading the runtime.zip file...
- curl -LJO "%URL_EXTRA%/%runtime%.zip"
- echo.
- cls
- echo Extracting the runtime.zip file, this might take a while...
- powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
- del %runtime%.zip
- echo.
- cls
-
- echo Verifying if the local_fixes.py file exists in the Fixes folder...
- if exist "%fixesFolder%\%localFixesPy%" (
-     echo Running the file...
-     runtime\python.exe "%fixesFolder%\%localFixesPy%"
- ) else (
-     echo The file "%localFixesPy%" was not found in the "Fixes" folder.
- )
- echo.
-
- echo Applio has been updated!
- echo.
- echo Press 'Enter' to access the main menu...
- pause>nul
- cls
- goto menu
 
spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py DELETED
@@ -1,443 +0,0 @@
- import numpy as np, parselmouth, torch, pdb, sys, os
- from time import time as ttime
- import torch.nn.functional as F
- import scipy.signal as signal
- import pyworld, os, traceback, faiss, librosa, torchcrepe
- from scipy import signal
- from functools import lru_cache
-
- now_dir = os.getcwd()
- sys.path.append(now_dir)
-
- bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
- input_audio_path2wav = {}
-
-
- @lru_cache
- def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
-     audio = input_audio_path2wav[input_audio_path]
-     f0, t = pyworld.harvest(
-         audio,
-         fs=fs,
-         f0_ceil=f0max,
-         f0_floor=f0min,
-         frame_period=frame_period,
-     )
-     f0 = pyworld.stonemask(audio, f0, t, fs)
-     return f0
-
-
- def change_rms(data1, sr1, data2, sr2, rate):  # data1: input audio, data2: output audio, rate: weight of data2 in the mix
-     # print(data1.max(),data2.max())
-     rms1 = librosa.feature.rms(
-         y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
-     )  # one point every half second
-     rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
-     rms1 = torch.from_numpy(rms1)
-     rms1 = F.interpolate(
-         rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
-     ).squeeze()
-     rms2 = torch.from_numpy(rms2)
-     rms2 = F.interpolate(
-         rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
-     ).squeeze()
-     rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
-     data2 *= (
-         torch.pow(rms1, torch.tensor(1 - rate))
-         * torch.pow(rms2, torch.tensor(rate - 1))
-     ).numpy()
-     return data2
-
-
- class VC(object):
-     def __init__(self, tgt_sr, config):
-         self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
-             config.x_pad,
-             config.x_query,
-             config.x_center,
-             config.x_max,
-             config.is_half,
-         )
-         self.sr = 16000  # hubert input sample rate
-         self.window = 160  # samples per frame
-         self.t_pad = self.sr * self.x_pad  # padding before/after each segment
-         self.t_pad_tgt = tgt_sr * self.x_pad
-         self.t_pad2 = self.t_pad * 2
-         self.t_query = self.sr * self.x_query  # query window around each cut point
-         self.t_center = self.sr * self.x_center  # spacing between cut points
-         self.t_max = self.sr * self.x_max  # duration threshold below which no cutting is needed
-         self.device = config.device
-
-     def get_f0(
-         self,
-         input_audio_path,
-         x,
-         p_len,
-         f0_up_key,
-         f0_method,
-         filter_radius,
-         inp_f0=None,
-     ):
-         global input_audio_path2wav
-         time_step = self.window / self.sr * 1000
-         f0_min = 50
-         f0_max = 1100
-         f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-         f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-         if f0_method == "pm":
-             f0 = (
-                 parselmouth.Sound(x, self.sr)
-                 .to_pitch_ac(
-                     time_step=time_step / 1000,
-                     voicing_threshold=0.6,
-                     pitch_floor=f0_min,
-                     pitch_ceiling=f0_max,
-                 )
-                 .selected_array["frequency"]
-             )
-             pad_size = (p_len - len(f0) + 1) // 2
-             if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-                 f0 = np.pad(
-                     f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
-                 )
-         elif f0_method == "harvest":
-             input_audio_path2wav[input_audio_path] = x.astype(np.double)
-             f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
-             if filter_radius > 2:
-                 f0 = signal.medfilt(f0, 3)
-         elif f0_method == "crepe":
-             model = "full"
-             # Pick a batch size that doesn't cause memory errors on your gpu
-             batch_size = 512
-             # Compute pitch using first gpu
-             audio = torch.tensor(np.copy(x))[None].float()
-             f0, pd = torchcrepe.predict(
-                 audio,
-                 self.sr,
-                 self.window,
-                 f0_min,
-                 f0_max,
-                 model,
-                 batch_size=batch_size,
-                 device=self.device,
-                 return_periodicity=True,
-             )
-             pd = torchcrepe.filter.median(pd, 3)
-             f0 = torchcrepe.filter.mean(f0, 3)
-             f0[pd < 0.1] = 0
-             f0 = f0[0].cpu().numpy()
-         elif f0_method == "rmvpe":
-             if hasattr(self, "model_rmvpe") == False:
-                 from rmvpe import RMVPE
-
-                 print("loading rmvpe model")
-                 self.model_rmvpe = RMVPE(
-                     "rmvpe.pt", is_half=self.is_half, device=self.device
-                 )
-             f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
-         f0 *= pow(2, f0_up_key / 12)
-         # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-         tf0 = self.sr // self.window  # f0 points per second
-         if inp_f0 is not None:
-             delta_t = np.round(
-                 (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
-             ).astype("int16")
-             replace_f0 = np.interp(
-                 list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
-             )
-             shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
-             f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
-                 :shape
-             ]
-         # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-         f0bak = f0.copy()
-         f0_mel = 1127 * np.log(1 + f0 / 700)
-         f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
-             f0_mel_max - f0_mel_min
-         ) + 1
-         f0_mel[f0_mel <= 1] = 1
-         f0_mel[f0_mel > 255] = 255
-         f0_coarse = np.rint(f0_mel).astype(int)  # np.int is removed in recent NumPy
-         return f0_coarse, f0bak  # 1-0
-
-     def vc(
-         self,
-         model,
-         net_g,
-         sid,
-         audio0,
-         pitch,
-         pitchf,
-         times,
-         index,
-         big_npy,
-         index_rate,
-         version,
-         protect,
-     ):  # ,file_index,file_big_npy
-         feats = torch.from_numpy(audio0)
-         if self.is_half:
-             feats = feats.half()
-         else:
-             feats = feats.float()
-         if feats.dim() == 2:  # double channels
-             feats = feats.mean(-1)
-         assert feats.dim() == 1, feats.dim()
-         feats = feats.view(1, -1)
-         padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
-         inputs = {
-             "source": feats.to(self.device),
-             "padding_mask": padding_mask,
-             "output_layer": 9 if version == "v1" else 12,
-         }
-         t0 = ttime()
-         with torch.no_grad():
-             logits = model.extract_features(**inputs)
-             feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
-         if protect < 0.5 and pitch != None and pitchf != None:
-             feats0 = feats.clone()
-         if (
-             isinstance(index, type(None)) == False
-             and isinstance(big_npy, type(None)) == False
-             and index_rate != 0
-         ):
-             npy = feats[0].cpu().numpy()
-             if self.is_half:
-                 npy = npy.astype("float32")
-
-             # _, I = index.search(npy, 1)
-             # npy = big_npy[I.squeeze()]
-
-             score, ix = index.search(npy, k=8)
-             weight = np.square(1 / score)
-             weight /= weight.sum(axis=1, keepdims=True)
-             npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
-             if self.is_half:
-                 npy = npy.astype("float16")
-             feats = (
-                 torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
-                 + (1 - index_rate) * feats
-             )
-
-         feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
-         if protect < 0.5 and pitch != None and pitchf != None:
-             feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
-                 0, 2, 1
-             )
-         t1 = ttime()
-         p_len = audio0.shape[0] // self.window
-         if feats.shape[1] < p_len:
-             p_len = feats.shape[1]
-             if pitch != None and pitchf != None:
-                 pitch = pitch[:, :p_len]
-                 pitchf = pitchf[:, :p_len]
-
-         if protect < 0.5 and pitch != None and pitchf != None:
-             pitchff = pitchf.clone()
-             pitchff[pitchf > 0] = 1
-             pitchff[pitchf < 1] = protect
-             pitchff = pitchff.unsqueeze(-1)
-             feats = feats * pitchff + feats0 * (1 - pitchff)
-             feats = feats.to(feats0.dtype)
-         p_len = torch.tensor([p_len], device=self.device).long()
-         with torch.no_grad():
-             if pitch != None and pitchf != None:
-                 audio1 = (
-                     (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
-                     .data.cpu()
-                     .float()
-                     .numpy()
-                 )
-             else:
-                 audio1 = (
-                     (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
-                 )
-         del feats, p_len, padding_mask
-         if torch.cuda.is_available():
-             torch.cuda.empty_cache()
-         t2 = ttime()
-         times[0] += t1 - t0
-         times[2] += t2 - t1
-         return audio1
-
-     def pipeline(
-         self,
-         model,
-         net_g,
-         sid,
-         audio,
-         input_audio_path,
-         times,
-         f0_up_key,
-         f0_method,
-         file_index,
-         # file_big_npy,
-         index_rate,
-         if_f0,
-         filter_radius,
-         tgt_sr,
-         resample_sr,
-         rms_mix_rate,
-         version,
-         protect,
-         f0_file=None,
-     ):
-         if (
-             file_index != ""
-             # and file_big_npy != ""
-             # and os.path.exists(file_big_npy) == True
-             and os.path.exists(file_index) == True
-             and index_rate != 0
-         ):
-             try:
-                 index = faiss.read_index(file_index)
-                 # big_npy = np.load(file_big_npy)
-                 big_npy = index.reconstruct_n(0, index.ntotal)
-             except:
-                 traceback.print_exc()
-                 index = big_npy = None
-         else:
-             index = big_npy = None
-         audio = signal.filtfilt(bh, ah, audio)
-         audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
-         opt_ts = []
-         if audio_pad.shape[0] > self.t_max:
-             audio_sum = np.zeros_like(audio)
-             for i in range(self.window):
-                 audio_sum += audio_pad[i : i - self.window]
-             for t in range(self.t_center, audio.shape[0], self.t_center):
-                 opt_ts.append(
-                     t
-                     - self.t_query
-                     + np.where(
-                         np.abs(audio_sum[t - self.t_query : t + self.t_query])
-                         == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
-                     )[0][0]
-                 )
-         s = 0
-         audio_opt = []
-         t = None
-         t1 = ttime()
-         audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
-         p_len = audio_pad.shape[0] // self.window
-         inp_f0 = None
-         if hasattr(f0_file, "name") == True:
-             try:
-                 with open(f0_file.name, "r") as f:
-                     lines = f.read().strip("\n").split("\n")
-                 inp_f0 = []
-                 for line in lines:
-                     inp_f0.append([float(i) for i in line.split(",")])
-                 inp_f0 = np.array(inp_f0, dtype="float32")
-             except:
-                 traceback.print_exc()
-         sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
-         pitch, pitchf = None, None
-         if if_f0 == 1:
-             pitch, pitchf = self.get_f0(
-                 input_audio_path,
-                 audio_pad,
-                 p_len,
-                 f0_up_key,
-                 f0_method,
-                 filter_radius,
-                 inp_f0,
-             )
-             pitch = pitch[:p_len]
-             pitchf = pitchf[:p_len]
-             if self.device == "mps":
-                 pitchf = pitchf.astype(np.float32)
-             pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
-             pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
-         t2 = ttime()
-         times[1] += t2 - t1
-         for t in opt_ts:
-             t = t // self.window * self.window
-             if if_f0 == 1:
-                 audio_opt.append(
-                     self.vc(
-                         model,
-                         net_g,
-                         sid,
-                         audio_pad[s : t + self.t_pad2 + self.window],
-                         pitch[:, s // self.window : (t + self.t_pad2) // self.window],
-                         pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
-                         times,
-                         index,
-                         big_npy,
-                         index_rate,
-                         version,
-                         protect,
-                     )[self.t_pad_tgt : -self.t_pad_tgt]
-                 )
-             else:
-                 audio_opt.append(
-                     self.vc(
-                         model,
-                         net_g,
-                         sid,
-                         audio_pad[s : t + self.t_pad2 + self.window],
-                         None,
-                         None,
-                         times,
-                         index,
-                         big_npy,
-                         index_rate,
-                         version,
-                         protect,
-                     )[self.t_pad_tgt : -self.t_pad_tgt]
-                 )
-             s = t
-         if if_f0 == 1:
-             audio_opt.append(
-                 self.vc(
-                     model,
-                     net_g,
-                     sid,
-                     audio_pad[t:],
-                     pitch[:, t // self.window :] if t is not None else pitch,
-                     pitchf[:, t // self.window :] if t is not None else pitchf,
-                     times,
-                     index,
-                     big_npy,
-                     index_rate,
-                     version,
-                     protect,
-                 )[self.t_pad_tgt : -self.t_pad_tgt]
-             )
-         else:
-             audio_opt.append(
-                 self.vc(
-                     model,
-                     net_g,
-                     sid,
-                     audio_pad[t:],
-                     None,
-                     None,
-                     times,
-                     index,
-                     big_npy,
-                     index_rate,
-                     version,
-                     protect,
-                 )[self.t_pad_tgt : -self.t_pad_tgt]
-             )
-         audio_opt = np.concatenate(audio_opt)
-         if rms_mix_rate != 1:
-             audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
-         if resample_sr >= 16000 and tgt_sr != resample_sr:
-             audio_opt = librosa.resample(
-                 audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
-             )
-         audio_max = np.abs(audio_opt).max() / 0.99
-         max_int16 = 32768
-         if audio_max > 1:
-             max_int16 /= audio_max
-         audio_opt = (audio_opt * max_int16).astype(np.int16)
-         del pitch, pitchf, sid
-         if torch.cuda.is_available():
-             torch.cuda.empty_cache()
-         return audio_opt
spaces/BetterAPI/BetterChat_new/src/hooks.server.ts DELETED
@@ -1,37 +0,0 @@
- import { dev } from "$app/environment";
- import { COOKIE_NAME } from "$env/static/private";
- import type { Handle } from "@sveltejs/kit";
- import { PUBLIC_GOOGLE_ANALYTICS_ID } from "$env/static/public";
- import { addYears } from "date-fns";
-
- export const handle: Handle = async ({ event, resolve }) => {
-   const token = event.cookies.get(COOKIE_NAME);
-
-   event.locals.sessionId = token || crypto.randomUUID();
-
-   // Refresh cookie expiration date
-   event.cookies.set(COOKIE_NAME, event.locals.sessionId, {
-     path: "/",
-     // So that it works inside the space's iframe
-     sameSite: dev ? "lax" : "none",
-     secure: !dev,
-     httpOnly: true,
-     expires: addYears(new Date(), 1),
-   });
-
-   let replaced = false;
-
-   const response = await resolve(event, {
-     transformPageChunk: (chunk) => {
-       // For some reason, Sveltekit doesn't let us load env variables from .env in the app.html template
-       if (replaced || !chunk.html.includes("%gaId%")) {
-         return chunk.html;
-       }
-       replaced = true;
-
-       return chunk.html.replace("%gaId%", PUBLIC_GOOGLE_ANALYTICS_ID);
-     },
-   });
-
-   return response;
- };
spaces/BillBojangeles2000/WikiGPT/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Karki TEST
- emoji: 🐠
- colorFrom: blue
- colorTo: red
- sdk: streamlit
- sdk_version: 1.17.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py DELETED
@@ -1,10 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
- from .boxes import Boxes, BoxMode, pairwise_iou
- from .image_list import ImageList
- from .instances import Instances
- from .keypoints import Keypoints, heatmaps_to_keypoints
- from .masks import BitMasks, PolygonMasks, rasterize_polygons_within_box, polygons_to_bitmask
- from .rotated_boxes import RotatedBoxes
- from .rotated_boxes import pairwise_iou as pairwise_iou_rotated
-
- __all__ = [k for k in globals().keys() if not k.startswith("_")]
spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h DELETED
@@ -1,116 +0,0 @@
- /*
-  * Copyright 2018 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- /*! \file sync_pool.h
-  *  \brief A mutex-synchronized version of \p unsynchronized_pool_resource.
-  */
-
- #pragma once
-
- #include <thrust/detail/cpp11_required.h>
-
- #if THRUST_CPP_DIALECT >= 2011
-
- #include <mutex>
-
- #include <thrust/mr/pool.h>
-
- namespace thrust
- {
- namespace mr
- {
-
- /*! \addtogroup memory_management Memory Management
-  *  \addtogroup memory_management_classes Memory Management Classes
-  *  \addtogroup memory_resources Memory Resources
-  *  \ingroup memory_resources
-  *  \{
-  */
-
- /*! A mutex-synchronized version of \p unsynchronized_pool_resource. Uses \p std::mutex, and therefore requires C++11.
-  *
-  *  \tparam Upstream the type of memory resources that will be used for allocating memory
-  */
- template<typename Upstream>
- struct synchronized_pool_resource : public memory_resource<typename Upstream::pointer>
- {
-     typedef unsynchronized_pool_resource<Upstream> unsync_pool;
-     typedef std::lock_guard<std::mutex> lock_t;
-
-     typedef typename Upstream::pointer void_ptr;
-
- public:
-     /*! Get the default options for a pool. These are meant to be a sensible set of values for many use cases,
-      *  and as such, may be tuned in the future. This function is exposed so that creating a set of options that are
-      *  just a slight departure from the defaults is easy.
-      */
-     static pool_options get_default_options()
-     {
-         return unsync_pool::get_default_options();
-     }
-
-     /*! Constructor.
-      *
-      *  \param upstream the upstream memory resource for allocations
-      *  \param options pool options to use
-      */
-     synchronized_pool_resource(Upstream * upstream, pool_options options = get_default_options())
-         : upstream_pool(upstream, options)
-     {
-     }
-
-     /*! Constructor. The upstream resource is obtained by calling \p get_global_resource<Upstream>.
-      *
-      *  \param options pool options to use
-      */
-     synchronized_pool_resource(pool_options options = get_default_options())
-         : upstream_pool(get_global_resource<Upstream>(), options)
-     {
-     }
-
-     /*! Releases all held memory to upstream.
-      */
-     void release()
-     {
-         lock_t lock(mtx);
-         upstream_pool.release();
-     }
-
-     THRUST_NODISCARD virtual void_ptr do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
-     {
-         lock_t lock(mtx);
-         return upstream_pool.do_allocate(bytes, alignment);
-     }
-
-     virtual void do_deallocate(void_ptr p, std::size_t n, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
-     {
-         lock_t lock(mtx);
-         upstream_pool.do_deallocate(p, n, alignment);
-     }
-
- private:
-     std::mutex mtx;
-     unsync_pool upstream_pool;
- };
-
- /*! \}
-  */
-
- } // end mr
- } // end thrust
-
- #endif // THRUST_CPP_DIALECT >= 2011
-
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h DELETED
@@ -1,22 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // this system has no special version of this algorithm
-
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h DELETED
@@ -1,58 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/system/detail/generic/tag.h>
-
- namespace thrust
- {
- namespace system
- {
- namespace detail
- {
- namespace generic
- {
-
-
- template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2>
- __host__ __device__
- thrust::pair<InputIterator1, InputIterator2>
-     mismatch(thrust::execution_policy<DerivedPolicy> &exec,
-              InputIterator1 first1,
-              InputIterator1 last1,
-              InputIterator2 first2);
-
-
- template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename BinaryPredicate>
- __host__ __device__
- thrust::pair<InputIterator1, InputIterator2>
-     mismatch(thrust::execution_policy<DerivedPolicy> &exec,
-              InputIterator1 first1,
-              InputIterator1 last1,
-              InputIterator2 first2,
-              BinaryPredicate pred);
-
-
- } // end namespace generic
- } // end namespace detail
- } // end namespace system
- } // end namespace thrust
-
- #include <thrust/system/detail/generic/mismatch.inl>
-
spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py DELETED
@@ -1,153 +0,0 @@
- import ast
- import torch
-
- import pandas as pd
- import torch.utils.data as torch_data
-
- from random import randrange
- from augmentations import *
- from normalization.body_normalization import BODY_IDENTIFIERS
- from normalization.hand_normalization import HAND_IDENTIFIERS
- from normalization.body_normalization import normalize_single_dict as normalize_single_body_dict
- from normalization.hand_normalization import normalize_single_dict as normalize_single_hand_dict
-
- HAND_IDENTIFIERS = [id + "_0" for id in HAND_IDENTIFIERS] + [id + "_1" for id in HAND_IDENTIFIERS]
-
- DEFAULT_AUGMENTATIONS_CONFIG = {
-     "rotate-angle": 13,
-     "perspective-transform-ratio": 0.1,
-     "squeeze-ratio": 0.15,
-     "arm-joint-rotate-angle": 4,
-     "arm-joint-rotate-probability": 0.3
- }
-
-
- def load_dataset(file_location: str):
-
-     # Load the dataset csv file
-     df = pd.read_csv(file_location, encoding="utf-8")
-
-     # TO BE DELETED
-     df.columns = [item.replace("_Left_", "_0_").replace("_Right_", "_1_") for item in list(df.columns)]
-     if "neck_X" not in df.columns:
-         df["neck_X"] = [0 for _ in range(df.shape[0])]
-         df["neck_Y"] = [0 for _ in range(df.shape[0])]
-
-     # TEMP
-     labels = df["labels"].to_list()
-     labels = [label + 1 for label in df["labels"].to_list()]
-     data = []
-
-     for row_index, row in df.iterrows():
-         current_row = np.empty(shape=(len(ast.literal_eval(row["leftEar_X"])), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
-         for index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
-             current_row[:, index, 0] = ast.literal_eval(row[identifier + "_X"])
-             current_row[:, index, 1] = ast.literal_eval(row[identifier + "_Y"])
-
-         data.append(current_row)
-
-     return data, labels
-
-
- def tensor_to_dictionary(landmarks_tensor: torch.Tensor) -> dict:
-
-     data_array = landmarks_tensor.numpy()
-     output = {}
-
-     for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
-         output[identifier] = data_array[:, landmark_index]
-
-     return output
-
-
- def dictionary_to_tensor(landmarks_dict: dict) -> torch.Tensor:
-
-     output = np.empty(shape=(len(landmarks_dict["leftEar"]), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
-
-     for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
-         output[:, landmark_index, 0] = [frame[0] for frame in landmarks_dict[identifier]]
-         output[:, landmark_index, 1] = [frame[1] for frame in landmarks_dict[identifier]]
-
-     return torch.from_numpy(output)
-
-
- class CzechSLRDataset(torch_data.Dataset):
-     """Advanced object representation of the HPOES dataset for loading hand joints landmarks utilizing the Torch's
-     built-in Dataset properties"""
-
-     data: [np.ndarray]
-     labels: [np.ndarray]
-
-     def __init__(self, dataset_filename: str, num_labels=5, transform=None, augmentations=False,
-                  augmentations_prob=0.5, normalize=True, augmentations_config: dict = DEFAULT_AUGMENTATIONS_CONFIG):
-         """
-         Initiates the HPOESDataset with the pre-loaded data from the h5 file.
-
-         :param dataset_filename: Path to the h5 file
-         :param transform: Any data transformation to be applied (default: None)
-         """
-
-         loaded_data = load_dataset(dataset_filename)
-         data, labels = loaded_data[0], loaded_data[1]
-
-         self.data = data
-         self.labels = labels
-         self.targets = list(labels)
-         self.num_labels = num_labels
-         self.transform = transform
-
-         self.augmentations = augmentations
-         self.augmentations_prob = augmentations_prob
-         self.augmentations_config = augmentations_config
-         self.normalize = normalize
-
-     def __getitem__(self, idx):
-         """
-         Allocates, potentially transforms and returns the item at the desired index.
-
-         :param idx: Index of the item
-         :return: Tuple containing both the depth map and the label
-         """
-
-         depth_map = torch.from_numpy(np.copy(self.data[idx]))
-         label = torch.Tensor([self.labels[idx] - 1])
-
-         depth_map = tensor_to_dictionary(depth_map)
-
-         # Apply potential augmentations
-         if self.augmentations and random.random() < self.augmentations_prob:
-
-             selected_aug = randrange(4)
-
-             if selected_aug == 0:
-                 depth_map = augment_rotate(depth_map, (-self.augmentations_config["rotate-angle"], self.augmentations_config["rotate-angle"]))
-
-             if selected_aug == 1:
-                 depth_map = augment_shear(depth_map, "perspective", (0, self.augmentations_config["perspective-transform-ratio"]))
-
-             if selected_aug == 2:
-                 depth_map = augment_shear(depth_map, "squeeze", (0, self.augmentations_config["squeeze-ratio"]))
-
-             if selected_aug == 3:
-                 depth_map = augment_arm_joint_rotate(depth_map, self.augmentations_config["arm-joint-rotate-probability"], (-self.augmentations_config["arm-joint-rotate-angle"], self.augmentations_config["arm-joint-rotate-angle"]))
-
-         if self.normalize:
-             depth_map = normalize_single_body_dict(depth_map)
-             depth_map = normalize_single_hand_dict(depth_map)
-
-         depth_map = dictionary_to_tensor(depth_map)
-
-         # Move the landmark position interval to improve performance
-         depth_map = depth_map - 0.5
-
-         if self.transform:
-             depth_map = self.transform(depth_map)
-
-         return depth_map, label
-
-     def __len__(self):
-         return len(self.labels)
-
-
- if __name__ == "__main__":
-     pass
spaces/CVPR/WALT/configs/walt/walt_people.py DELETED
@@ -1,80 +0,0 @@
- _base_ = [
-     '../_base_/models/occ_mask_rcnn_swin_fpn.py',
-     '../_base_/datasets/walt_people.py',
-     '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
- ]
-
- model = dict(
-     backbone=dict(
-         embed_dim=96,
-         depths=[2, 2, 6, 2],
-         num_heads=[3, 6, 12, 24],
-         window_size=7,
-         ape=False,
-         drop_path_rate=0.1,
-         patch_norm=True,
-         use_checkpoint=False
-     ),
-     neck=dict(in_channels=[96, 192, 384, 768]))
-
- img_norm_cfg = dict(
-     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
- # augmentation strategy originates from DETR / Sparse RCNN
- train_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
-     dict(type='RandomFlip', flip_ratio=0.5),
-     dict(type='AutoAugment',
-          policies=[
-              [
-                  dict(type='Resize',
-                       img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
-                                  (608, 1333), (640, 1333), (672, 1333), (704, 1333),
-                                  (736, 1333), (768, 1333), (800, 1333)],
-                       multiscale_mode='value',
-                       keep_ratio=True)
-              ],
-              [
-                  dict(type='Resize',
-                       img_scale=[(400, 1333), (500, 1333), (600, 1333)],
-                       multiscale_mode='value',
-                       keep_ratio=True),
-                  dict(type='RandomCrop',
-                       crop_type='absolute_range',
-                       crop_size=(384, 600),
-                       allow_negative_crop=True),
-                  dict(type='Resize',
-                       img_scale=[(480, 1333), (512, 1333), (544, 1333),
-                                  (576, 1333), (608, 1333), (640, 1333),
-                                  (672, 1333), (704, 1333), (736, 1333),
-                                  (768, 1333), (800, 1333)],
-                       multiscale_mode='value',
-                       override=True,
-                       keep_ratio=True)
-              ]
-          ]),
-     dict(type='Normalize', **img_norm_cfg),
-     dict(type='Pad', size_divisor=32),
-     dict(type='DefaultFormatBundle'),
-     dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
- ]
- data = dict(train=dict(pipeline=train_pipeline))
-
- optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
-                  paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
-                                                  'relative_position_bias_table': dict(decay_mult=0.),
-                                                  'norm': dict(decay_mult=0.)}))
- lr_config = dict(step=[8, 11])
- runner = dict(type='EpochBasedRunnerAmp', max_epochs=12)
-
- # do not use mmdet version fp16
- fp16 = None
- optimizer_config = dict(
-     type="DistOptimizerHook",
-     update_interval=1,
-     grad_clip=None,
-     coalesce=True,
-     bucket_size_mb=-1,
-     use_fp16=True,
- )
spaces/CVPR/WALT/mmdet/datasets/builder.py DELETED
@@ -1,143 +0,0 @@
- import copy
- import platform
- import random
- from functools import partial
-
- import numpy as np
- from mmcv.parallel import collate
- from mmcv.runner import get_dist_info
- from mmcv.utils import Registry, build_from_cfg
- from torch.utils.data import DataLoader
-
- from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-
- if platform.system() != 'Windows':
-     # https://github.com/pytorch/pytorch/issues/973
-     import resource
-     rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
-     hard_limit = rlimit[1]
-     soft_limit = min(4096, hard_limit)
-     resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
-
- DATASETS = Registry('dataset')
- PIPELINES = Registry('pipeline')
-
-
- def _concat_dataset(cfg, default_args=None):
-     from .dataset_wrappers import ConcatDataset
-     ann_files = cfg['ann_file']
-     img_prefixes = cfg.get('img_prefix', None)
-     seg_prefixes = cfg.get('seg_prefix', None)
-     proposal_files = cfg.get('proposal_file', None)
-     separate_eval = cfg.get('separate_eval', True)
-
-     datasets = []
-     num_dset = len(ann_files)
-     for i in range(num_dset):
-         data_cfg = copy.deepcopy(cfg)
-         # pop 'separate_eval' since it is not a valid key for common datasets.
-         if 'separate_eval' in data_cfg:
-             data_cfg.pop('separate_eval')
-         data_cfg['ann_file'] = ann_files[i]
-         if isinstance(img_prefixes, (list, tuple)):
-             data_cfg['img_prefix'] = img_prefixes[i]
-         if isinstance(seg_prefixes, (list, tuple)):
-             data_cfg['seg_prefix'] = seg_prefixes[i]
-         if isinstance(proposal_files, (list, tuple)):
-             data_cfg['proposal_file'] = proposal_files[i]
-         datasets.append(build_dataset(data_cfg, default_args))
-
-     return ConcatDataset(datasets, separate_eval)
-
-
- def build_dataset(cfg, default_args=None):
-     from .dataset_wrappers import (ConcatDataset, RepeatDataset,
-                                    ClassBalancedDataset)
-     if isinstance(cfg, (list, tuple)):
-         dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg])
-     elif cfg['type'] == 'ConcatDataset':
-         dataset = ConcatDataset(
-             [build_dataset(c, default_args) for c in cfg['datasets']],
-             cfg.get('separate_eval', True))
-     elif cfg['type'] == 'RepeatDataset':
-         dataset = RepeatDataset(
-             build_dataset(cfg['dataset'], default_args), cfg['times'])
-     elif cfg['type'] == 'ClassBalancedDataset':
-         dataset = ClassBalancedDataset(
-             build_dataset(cfg['dataset'], default_args), cfg['oversample_thr'])
-     elif isinstance(cfg.get('ann_file'), (list, tuple)):
-         dataset = _concat_dataset(cfg, default_args)
-     else:
-         dataset = build_from_cfg(cfg, DATASETS, default_args)
-
-     return dataset
-
-
- def build_dataloader(dataset,
-                      samples_per_gpu,
-                      workers_per_gpu,
-                      num_gpus=1,
-                      dist=True,
-                      shuffle=True,
-                      seed=None,
-                      **kwargs):
-     """Build PyTorch DataLoader.
-
-     In distributed training, each GPU/process has a dataloader.
-     In non-distributed training, there is only one dataloader for all GPUs.
-
-     Args:
-         dataset (Dataset): A PyTorch dataset.
-         samples_per_gpu (int): Number of training samples on each GPU, i.e.,
-             batch size of each GPU.
-         workers_per_gpu (int): How many subprocesses to use for data loading
-             for each GPU.
-         num_gpus (int): Number of GPUs. Only used in non-distributed training.
-         dist (bool): Distributed training/test or not. Default: True.
-         shuffle (bool): Whether to shuffle the data at every epoch.
-             Default: True.
-         kwargs: any keyword argument to be used to initialize DataLoader
-
-     Returns:
-         DataLoader: A PyTorch dataloader.
-     """
-     rank, world_size = get_dist_info()
-     if dist:
-         # DistributedGroupSampler will definitely shuffle the data to satisfy
-         # that images on each GPU are in the same group
-         if shuffle:
-             sampler = DistributedGroupSampler(
-                 dataset, samples_per_gpu, world_size, rank, seed=seed)
-         else:
-             sampler = DistributedSampler(
-                 dataset, world_size, rank, shuffle=False, seed=seed)
-         batch_size = samples_per_gpu
-         num_workers = workers_per_gpu
-     else:
-         sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None
-         batch_size = num_gpus * samples_per_gpu
-         num_workers = num_gpus * workers_per_gpu
-
-     init_fn = partial(
-         worker_init_fn, num_workers=num_workers, rank=rank,
-         seed=seed) if seed is not None else None
-
-     data_loader = DataLoader(
-         dataset,
-         batch_size=batch_size,
-         sampler=sampler,
-         num_workers=num_workers,
-         collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
-         pin_memory=False,
-         worker_init_fn=init_fn,
-         **kwargs)
-
-     return data_loader
-
-
- def worker_init_fn(worker_id, num_workers, rank, seed):
-     # The seed of each worker equals to
-     # num_worker * rank + worker_id + user_seed
-     worker_seed = num_workers * rank + worker_id + seed
-     np.random.seed(worker_seed)
-     random.seed(worker_seed)
spaces/Chujinze/Res2Net/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Res2Net
- emoji: 👁
- colorFrom: green
- colorTo: gray
- sdk: gradio
- sdk_version: 3.0.1
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js DELETED
@@ -1,842 +0,0 @@
1
- import { randomUUID } from "crypto"
2
- import path from "node:path"
3
- import fs from "node:fs"
4
-
5
- Bot.adapter.push(new class gocqhttpAdapter {
6
- constructor() {
7
- this.id = "QQ"
8
- this.name = "go-cqhttp"
9
- this.path = this.name
10
- }
11
-
12
- toStr(data) {
13
- switch (typeof data) {
14
- case "string":
15
- return data
16
- case "number":
17
- return String(data)
18
- case "object":
19
- if (Buffer.isBuffer(data))
20
- return Buffer.from(data, "utf8").toString()
21
- else
22
- return JSON.stringify(data)
23
- }
24
- return data
25
- }
26
-
27
- makeLog(msg) {
28
- return this.toStr(msg).replace(/base64:\/\/.*?(,|]|")/g, "base64://...$1")
29
- }
30
-
31
- sendApi(ws, action, params) {
32
- const echo = randomUUID()
33
- const msg = { action, params, echo }
34
- ws.sendMsg(msg)
35
- return new Promise(resolve =>
36
- Bot.once(echo, data =>
37
- resolve({ ...data, ...data.data })))
38
- }
39
-
40
- setProfile(data, profile) {
41
- logger.info(`${logger.blue(`[${data.self_id}]`)} 设置资料:${JSON.stringify(profile)}`)
42
- return data.bot.sendApi("set_qq_profile", profile)
43
- }
44
-
45
- makeMsg(msg) {
46
- if (!Array.isArray(msg))
47
- msg = [msg]
48
- const msgs = []
49
- for (const i of msg)
50
- if (typeof i == "object") {
51
- if (i.data)
52
- msgs.push(i)
53
- else
54
- msgs.push({ type: i.type, data: { ...i, type: undefined }})
55
- } else {
56
- msgs.push({ type: "text", data: { text: i }})
57
- }
58
- return msgs
59
- }
60
-
61
- sendFriendMsg(data, msg) {
62
- if (msg?.type == "node")
63
- return this.sendFriendForwardMsg(data, msg.data)
64
-
65
- logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友消息:${this.makeLog(msg)}`)
66
- return data.bot.sendApi("send_msg", {
67
- user_id: data.user_id,
68
- message: this.makeMsg(msg),
69
- })
70
- }
71
-
72
- sendGroupMsg(data, msg) {
73
- if (msg?.type == "node")
74
- return this.sendGroupForwardMsg(data, msg.data)
75
-
76
- logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} 发送群消息:${this.makeLog(msg)}`)
77
- return data.bot.sendApi("send_msg", {
78
- group_id: data.group_id,
79
- message: this.makeMsg(msg),
80
- })
81
- }
82
-
83
- sendGuildMsg(data, msg) {
84
- if (msg?.type == "node")
85
- return Bot.sendForwardMsg(msg => this.sendGuildMsg(data, msg), msg)
86
-
87
- logger.info(`${logger.blue(`[${data.self_id}] => ${data.guild_id}-${data.channel_id}`)} 发送频道消息:${this.makeLog(msg)}`)
88
- return data.bot.sendApi("send_guild_channel_msg", {
89
- guild_id: data.guild_id,
90
- channel_id: data.channel_id,
91
- message: this.makeMsg(msg),
92
- })
93
- }
94
-
95
- async getMsg(data, message_id) {
96
- const msg = (await data.bot.sendApi("get_msg", { message_id })).data
97
-
98
- if (msg?.message) {
99
- const message = []
100
- for (const i of msg.message)
101
- message.push({ ...i.data, type: i.type })
102
- msg.message = message
103
- }
104
-
105
- return msg
106
- }
107
-
108
- recallMsg(data, message_id) {
109
- logger.info(`${logger.blue(`[${data.self_id}]`)} 撤回消息:${message_id}`)
110
- return data.bot.sendApi("delete_msg", { message_id })
111
- }
112
-
113
- getForwardMsg(data, message_id) {
114
- return data.bot.sendApi("get_forward_msg", { message_id })
115
- }
116
-
117
- makeForwardMsg(msg) {
118
- const messages = []
119
- for (const i of msg)
120
- messages.push({
121
- type: "node",
122
- data: {
123
- name: i.nickname || "匿名消息",
124
- uin: Number(i.user_id) || 80000000,
125
- content: this.makeMsg(i.message),
126
- time: i.time,
127
- },
128
- })
129
- return messages
130
- }
131
-
132
- async sendFriendForwardMsg(data, msg) {
133
- logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友转发消息:${this.makeLog(msg)}`)
134
- msg = await data.bot.sendApi("send_private_forward_msg", {
135
- user_id: data.user_id,
136
- messages: this.makeForwardMsg(msg),
- })
- return msg
- }
-
- async sendGroupForwardMsg(data, msg) {
- logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} 发送群转发消息:${this.makeLog(msg)}`)
- msg = await data.bot.sendApi("send_group_forward_msg", {
- group_id: data.group_id,
- messages: this.makeForwardMsg(msg),
- })
- return msg
- }
-
- async getFriendArray(data) {
- return (await data.bot.sendApi("get_friend_list")).data
- }
-
- async getFriendList(data) {
- const array = []
- for (const { user_id } of (await this.getFriendArray(data)))
- array.push(user_id)
- return array
- }
-
- async getFriendMap(data) {
- for (const i of (await this.getFriendArray(data)))
- data.bot.fl.set(i.user_id, i)
- return data.bot.fl
- }
-
- getFriendInfo(data) {
- return data.bot.sendApi("get_stranger_info", {
- user_id: data.user_id,
- })
- }
-
- async getGroupArray(data) {
- const array = (await data.bot.sendApi("get_group_list")).data
- for (const guild of (await this.getGuildArray(data)))
- for (const channel of (await this.getGuildChannelArray({
- ...data,
- guild_id: guild.guild_id,
- })))
- array.push({
- guild,
- channel,
- group_id: `${guild.guild_id}-${channel.channel_id}`,
- group_name: `${guild.guild_name}-${channel.channel_name}`,
- })
- return array
- }
-
- async getGroupList(data) {
- const array = []
- for (const { group_id } of (await this.getGroupArray(data)))
- array.push(group_id)
- return array
- }
-
- async getGroupMap(data) {
- for (const i of (await this.getGroupArray(data)))
- data.bot.gl.set(i.group_id, i)
- return data.bot.gl
- }
-
- getGroupInfo(data) {
- return data.bot.sendApi("get_group_info", {
- group_id: data.group_id,
- })
- }
-
- async getMemberArray(data) {
- return (await data.bot.sendApi("get_group_member_list", {
- group_id: data.group_id,
- })).data
- }
-
- async getMemberList(data) {
- const array = []
- for (const { user_id } of (await this.getMemberArray(data)))
- array.push(user_id)
- return array
- }
-
- async getMemberMap(data) {
- const map = new Map
- for (const i of (await this.getMemberArray(data)))
- map.set(i.user_id, i)
- return map
- }
-
- getMemberInfo(data) {
- return data.bot.sendApi("get_group_member_info", {
- group_id: data.group_id,
- user_id: data.user_id,
- })
- }
-
- async getGuildArray(data) {
- return (await data.bot.sendApi("get_guild_list")).data
- }
-
- getGuildInfo(data) {
- return data.bot.sendApi("get_guild_meta_by_guest", {
- guild_id: data.guild_id,
- })
- }
-
- async getGuildChannelArray(data) {
- return (await data.bot.sendApi("get_guild_channel_list", {
- guild_id: data.guild_id,
- })).data
- }
-
- async getGuildChannelMap(data) {
- const map = new Map
- for (const i of (await this.getGuildChannelArray(data)))
- map.set(i.channel_id, i)
- return map
- }
-
- async getGuildMemberArray(data) {
- const array = []
- let next_token = ""
- while (true) {
- const list = (await data.bot.sendApi("get_guild_member_list", {
- guild_id: data.guild_id,
- next_token,
- })).data
-
- for (const i of list.members)
- array.push({
- ...i,
- user_id: i.tiny_id,
- })
- if (list.finished) break
- next_token = list.next_token
- }
- return array
- }
-
- async getGuildMemberList(data) {
- const array = []
- for (const { user_id } of (await this.getGuildMemberArray(data)))
- array.push(user_id)
- return array
- }
-
- async getGuildMemberMap(data) {
- const map = new Map
- for (const i of (await this.getGuildMemberArray(data)))
- map.set(i.user_id, i)
- return map
- }
-
- getGuildMemberInfo(data) {
- return data.bot.sendApi("get_guild_member_profile", {
- guild_id: data.guild_id,
- user_id: data.user_id,
- })
- }
-
- setGroupName(data, group_name) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群名:[${data.group_id}] ${group_name}`)
- return data.bot.sendApi("set_group_name", {
- group_id: data.group_id,
- group_name,
- })
- }
-
- setGroupAvatar(data, file) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群头像:[${data.group_id}] ${file}`)
- return data.bot.sendApi("set_group_portrait", {
- group_id: data.group_id,
- file: segment.image(file).file,
- })
- }
-
- setGroupAdmin(data, user_id, enable) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} ${enable ? "设置" : "取消"}群管理员:[${data.group_id}] ${user_id}`)
- return data.bot.sendApi("set_group_admin", {
- group_id: data.group_id,
- user_id,
- enable,
- })
- }
-
- setGroupCard(data, user_id, card) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群名片:[${data.group_id}] ${user_id} ${card}`)
- return data.bot.sendApi("set_group_card", {
- group_id: data.group_id,
- user_id,
- card,
- })
- }
-
- setGroupTitle(data, user_id, special_title, duration) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群头衔:[${data.group_id}] ${user_id} ${special_title} ${duration}`)
- return data.bot.sendApi("set_group_special_title", {
- group_id: data.group_id,
- user_id,
- special_title,
- duration,
- })
- }
-
- downloadFile(data, url, thread_count, headers) {
- return data.bot.sendApi("download_file", {
- url,
- thread_count,
- headers,
- })
- }
-
- async makeFile(data, file, name = path.basename(file)) {
- if (file.match(/^https?:\/\//))
- file = (await this.downloadFile(data, file)).file
- else if (fs.existsSync(file))
- file = path.resolve(file)
- return { file, name }
- }
-
- async sendFriendFile(data, file, name) {
- logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友文件:${name}(${file})`)
- return data.bot.sendApi("upload_private_file", {
- user_id: data.user_id,
- ...await this.makeFile(data, file, name),
- })
- }
-
- async sendGroupFile(data, file, folder, name) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 发送群文件:[${data.group_id}] ${folder||""}/${name}(${file})`)
- return data.bot.sendApi("upload_group_file", {
- group_id: data.group_id,
- folder,
- ...await this.makeFile(data, file, name),
- })
- }
-
- deleteGroupFile(data, file_id, busid) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 删除群文件:[${data.group_id}] ${file_id}(${busid})`)
- return data.bot.sendApi("delete_group_file", {
- group_id: data.group_id,
- file_id,
- busid,
- })
- }
-
- createGroupFileFolder(data, name) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 创建群文件夹:[${data.group_id}] ${name}`)
- return data.bot.sendApi("create_group_file_folder", {
- group_id: data.group_id,
- name,
- })
- }
-
- getGroupFileSystemInfo(data) {
- return data.bot.sendApi("get_group_file_system_info", {
- group_id: data.group_id,
- })
- }
-
- getGroupFiles(data, folder_id) {
- if (folder_id)
- return data.bot.sendApi("get_group_files_by_folder", {
- group_id: data.group_id,
- folder_id,
- })
- return data.bot.sendApi("get_group_root_files", {
- group_id: data.group_id,
- })
- }
-
- getGroupFileUrl(data, file_id, busid) {
- return data.bot.sendApi("get_group_file_url", {
- group_id: data.group_id,
- file_id,
- busid,
- })
- }
-
- getGroupFs(data) {
- return {
- upload: (file, folder, name) => this.sendGroupFile(data, file, folder, name),
- rm: (file_id, busid) => this.deleteGroupFile(data, file_id, busid),
- mkdir: name => this.createGroupFileFolder(data, name),
- df: () => this.getGroupFileSystemInfo(data),
- ls: folder_id => this.getGroupFiles(data, folder_id),
- download: (file_id, busid) => this.getGroupFileUrl(data, file_id, busid),
- }
- }
-
- setFriendAddRequest(data, flag, approve, remark) {
- return data.bot.sendApi("set_friend_add_request", {
- flag,
- approve,
- remark,
- })
- }
-
- setGroupAddRequest(data, flag, sub_type, approve, reason) {
- return data.bot.sendApi("set_group_add_request", {
- flag,
- sub_type,
- approve,
- reason,
- })
- }
-
- pickFriend(data, user_id) {
- const i = {
- ...data.bot.fl.get(user_id),
- ...data,
- user_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendFriendMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- sendForwardMsg: msg => this.sendFriendForwardMsg(i, msg),
- sendFile: (file, name) => this.sendFriendFile(i, file, name),
- getInfo: () => this.getFriendInfo(i),
- getAvatarUrl: () => `https://q1.qlogo.cn/g?b=qq&s=0&nk=${user_id}`,
- }
- }
-
- pickMember(data, group_id, user_id) {
- if (typeof group_id == "string" && group_id.match("-")) {
- const guild_id = group_id.split("-")
- const i = {
- ...data,
- guild_id: guild_id[0],
- channel_id: guild_id[1],
- user_id,
- }
- return {
- ...this.pickGroup(i, group_id),
- ...i,
- getInfo: () => this.getGuildMemberInfo(i),
- getAvatarUrl: async () => (await this.getGuildMemberInfo(i)).avatar_url,
- }
- }
-
- const i = {
- ...data.bot.fl.get(user_id),
- ...data,
- group_id,
- user_id,
- }
- return {
- ...this.pickFriend(i, user_id),
- ...i,
- getInfo: () => this.getMemberInfo(i),
- poke: () => this.sendGroupMsg(i, segment.poke(user_id)),
- }
- }
-
- pickGroup(data, group_id) {
- if (typeof group_id == "string" && group_id.match("-")) {
- const guild_id = group_id.split("-")
- const i = {
- ...data.bot.gl.get(group_id),
- ...data,
- guild_id: guild_id[0],
- channel_id: guild_id[1],
- }
- return {
- ...i,
- sendMsg: msg => this.sendGuildMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- getInfo: () => this.getGuildInfo(i),
- getChannelArray: () => this.getGuildChannelArray(i),
- getChannelList: () => this.getGuildChannelList(i),
- getChannelMap: () => this.getGuildChannelMap(i),
- getMemberArray: () => this.getGuildMemberArray(i),
- getMemberList: () => this.getGuildMemberList(i),
- getMemberMap: () => this.getGuildMemberMap(i),
- pickMember: user_id => this.pickMember(i, group_id, user_id),
- }
- }
-
- const i = {
- ...data.bot.gl.get(group_id),
- ...data,
- group_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendGroupMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- sendForwardMsg: msg => this.sendGroupForwardMsg(i, msg),
- sendFile: (file, name) => this.sendGroupFile(i, file, undefined, name),
- getInfo: () => this.getGroupInfo(i),
- getAvatarUrl: () => `https://p.qlogo.cn/gh/${group_id}/${group_id}/0`,
- getMemberArray: () => this.getMemberArray(i),
- getMemberList: () => this.getMemberList(i),
- getMemberMap: () => this.getMemberMap(i),
- pickMember: user_id => this.pickMember(i, group_id, user_id),
- pokeMember: user_id => this.sendGroupMsg(i, segment.poke(user_id)),
- setName: group_name => this.setGroupName(i, group_name),
- setAvatar: file => this.setGroupAvatar(i, file),
- setAdmin: (user_id, enable) => this.setGroupAdmin(i, user_id, enable),
- setCard: (user_id, card) => this.setGroupCard(i, user_id, card),
- setTitle: (user_id, special_title, duration) => this.setGroupTitle(i, user_id, special_title, duration),
- fs: this.getGroupFs(i),
- }
- }
-
- async connect(data, ws) {
- Bot[data.self_id] = {
- adapter: this,
- ws: ws,
- sendApi: (action, params) => this.sendApi(ws, action, params),
- stat: { start_time: data.time },
- model: "TRSS Yunzai ",
-
- info: {},
- get uin() { return this.info.user_id },
- get nickname() { return this.info.nickname },
- get avatar() { return `https://q1.qlogo.cn/g?b=qq&s=0&nk=${this.uin}` },
-
- setProfile: profile => this.setProfile(data, profile),
- setNickname: nickname => this.setProfile(data, { nickname }),
-
- pickFriend: user_id => this.pickFriend(data, user_id),
- get pickUser() { return this.pickFriend },
- getFriendArray: () => this.getFriendArray(data),
- getFriendList: () => this.getFriendList(data),
- getFriendMap: () => this.getFriendMap(data),
- fl: new Map,
-
- pickMember: (group_id, user_id) => this.pickMember(data, group_id, user_id),
- pickGroup: group_id => this.pickGroup(data, group_id),
- getGroupArray: () => this.getGroupArray(data),
- getGroupList: () => this.getGroupList(data),
- getGroupMap: () => this.getGroupMap(data),
- gl: new Map,
- gml: new Map,
-
- request_list: [],
- getSystemMsg: () => data.bot.request_list,
- setFriendAddRequest: (flag, approve, remark) => this.setFriendAddRequest(data, flag, approve, remark),
- setGroupAddRequest: (flag, sub_type, approve, reason) => this.setGroupAddRequest(data, flag, sub_type, approve, reason),
- }
- data.bot = Bot[data.self_id]
-
- if (!Bot.uin.includes(data.self_id))
- Bot.uin.push(data.self_id)
-
- data.bot.sendApi("_set_model_show", {
- model: data.bot.model,
- model_show: data.bot.model,
- })
-
- data.bot.info = (await data.bot.sendApi("get_login_info")).data
- data.bot.guild_info = (await data.bot.sendApi("get_guild_service_profile")).data
- data.bot.clients = (await data.bot.sendApi("get_online_clients")).clients
- data.bot.version = {
- ...(await data.bot.sendApi("get_version_info")).data,
- id: this.id,
- name: this.name,
- }
-
- data.bot.getFriendMap()
- data.bot.getGroupMap()
-
- logger.mark(`${logger.blue(`[${data.self_id}]`)} ${this.name}(${this.id}) ${data.bot.version.app_full_name} 已连接`)
- Bot.em(`connect.${data.self_id}`, data)
- }
-
- makeMessage(data) {
- const message = []
- for (const i of data.message)
- message.push({ ...i.data, type: i.type })
- data.message = message
-
- switch (data.message_type) {
- case "private":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息:[${data.sender.nickname}(${data.user_id})] ${data.raw_message}`)
- break
- case "group":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息:[${data.group_id}, ${data.sender.card||data.sender.nickname}(${data.user_id})] ${data.raw_message}`)
- break
- case "guild":
- data.message_type = "group"
- data.group_id = `${data.guild_id}-${data.channel_id}`
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息:[${data.group_id}, ${data.sender.nickname}(${data.user_id})] ${JSON.stringify(data.message)}`)
- Object.defineProperty(data, "friend", { get() { return this.member || {}}})
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
-
- Bot.em(`${data.post_type}.${data.message_type}.${data.sub_type}`, data)
- }
-
- async makeNotice(data) {
- switch (data.notice_type) {
- case "friend_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息撤回:[${data.user_id}] ${data.message_id}`)
- break
- case "group_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息撤回:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`)
- break
- case "group_increase":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员增加:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type}`)
- if (data.user_id == data.self_id)
- data.bot.getGroupMap()
- break
- case "group_decrease":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员减少:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type}`)
- if (data.user_id == data.self_id)
- data.bot.getGroupMap()
- break
- case "group_admin":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群管理员变动:[${data.group_id}, ${data.user_id}] ${data.sub_type}`)
- data.set = data.sub_type == "set"
- break
- case "group_upload":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群文件上传:[${data.group_id}, ${data.user_id}] ${JSON.stringify(data.file)}`)
- break
- case "group_ban":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群禁言:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type} ${data.duration}秒`)
- break
- case "friend_add":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友添加:[${data.user_id}]`)
- data.bot.getFriendMap()
- break
- case "notify":
- if (data.group_id)
- data.notice_type = "group"
- else
- data.notice_type = "friend"
- switch (data.sub_type) {
- case "poke":
- data.operator_id = data.user_id
- if (data.group_id)
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群戳一戳:[${data.group_id}, ${data.operator_id}=>${data.target_id}]`)
- else
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友戳一戳:[${data.operator_id}=>${data.target_id}]`)
- break
- case "honor":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群荣誉:[${data.group_id}, ${data.user_id}] ${data.honor_type}`)
- break
- case "title":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群头衔:[${data.group_id}, ${data.user_id}] ${data.title}`)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`)
- }
- break
- case "group_card":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群名片更新:[${data.group_id}, ${data.user_id}] ${data.card_old}=>${data.card_new}`)
- break
- case "offline_file":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 离线文件:[${data.user_id}] ${JSON.stringify(data.file)}`)
- break
- case "client_status":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 客户端${data.online ? "上线" : "下线"}:${JSON.stringify(data.client)}`)
- data.clients = (await data.bot.sendApi("get_online_clients")).clients
- data.bot.clients = data.clients
- break
- case "essence":
- data.notice_type = "group_essence"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群精华消息:[${data.group_id}, ${data.operator_id}=>${data.sender_id}] ${data.sub_type} ${data.message_id}`)
- break
- case "guild_channel_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息撤回:[${data.guild_id}-${data.channel_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`)
- break
- case "message_reactions_updated":
- data.notice_type = "guild_message_reactions_updated"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息表情贴:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${data.message_id} ${JSON.stringify(data.current_reactions)}`)
- break
- case "channel_updated":
- data.notice_type = "guild_channel_updated"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道更新:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.old_info)}=>${JSON.stringify(data.new_info)}`)
- break
- case "channel_created":
- data.notice_type = "guild_channel_created"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道创建:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`)
- data.bot.getGroupMap()
- break
- case "channel_destroyed":
- data.notice_type = "guild_channel_destroyed"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道删除:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`)
- data.bot.getGroupMap()
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`)
- }
-
- let notice = data.notice_type.split("_")
- data.notice_type = notice.shift()
- notice = notice.join("_")
- if (notice)
- data.sub_type = notice
-
- if (data.guild_id && data.channel_id) {
- data.group_id = `${data.guild_id}-${data.channel_id}`
- Object.defineProperty(data, "friend", { get() { return this.member || {}}})
- }
-
- Bot.em(`${data.post_type}.${data.notice_type}.${data.sub_type}`, data)
- }
-
- makeRequest(data) {
- switch (data.request_type) {
- case "friend":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 加好友请求:[${data.user_id}] ${data.comment}(${data.flag})`)
- data.sub_type = "add"
- data.approve = approve => data.bot.setFriendAddRequest(data.flag, approve)
- break
- case "group":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 加群请求:[${data.group_id}, ${data.user_id}] ${data.sub_type} ${data.comment}(${data.flag})`)
- data.approve = approve => data.bot.setGroupAddRequest(data.flag, data.sub_type, approve)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知请求:${logger.magenta(JSON.stringify(data))}`)
- }
-
- data.bot.request_list.push(data)
- Bot.em(`${data.post_type}.${data.request_type}.${data.sub_type}`, data)
- }
-
- heartbeat(data) {
- if (data.status?.stat)
- data.bot.stat = {
- ...data.status,
- lost_pkt_cnt: data.status.stat.packet_lost,
- lost_times: data.status.stat.lost_times,
- recv_msg_cnt: data.status.stat.message_received,
- recv_pkt_cnt: data.status.stat.packet_received,
- sent_msg_cnt: data.status.stat.message_sent,
- sent_pkt_cnt: data.status.stat.packet_sent,
- start_time: data.bot.stat.start_time,
- }
- }
-
- makeMeta(data, ws) {
- switch (data.meta_event_type) {
- case "heartbeat":
- this.heartbeat(data)
- break
- case "lifecycle":
- this.connect(data, ws)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- }
-
- message(data, ws) {
- try {
- data = JSON.parse(data)
- } catch (err) {
- return logger.error(`解码数据失败:${logger.red(err)}`)
- }
-
- if (data.post_type) {
- if (data.meta_event_type != "lifecycle" && !Bot.uin.includes(data.self_id)) {
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 找不到对应Bot,忽略消息:${logger.magenta(JSON.stringify(data))}`)
- return false
- }
- data.bot = Bot[data.self_id]
-
- switch (data.post_type) {
- case "meta_event":
- this.makeMeta(data, ws)
- break
- case "message":
- this.makeMessage(data)
- break
- case "notice":
- this.makeNotice(data)
- break
- case "request":
- this.makeRequest(data)
- break
- case "message_sent":
- data.post_type = "message"
- this.makeMessage(data)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- } else if (data.echo) {
- Bot.emit(data.echo, data)
- } else {
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- }
-
- load() {
- if (!Array.isArray(Bot.wsf[this.path]))
- Bot.wsf[this.path] = []
- Bot.wsf[this.path].push((ws, ...args) =>
- ws.on("message", data => this.message(data, ws, ...args))
- )
- }
- })
spaces/CobaltZvc/Hyper_Bot/style.css DELETED
@@ -1,28 +0,0 @@
- body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
- }
-
- h1 {
- font-size: 16px;
- margin-top: 0;
- }
-
- p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
- }
-
- .card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
- }
-
- .card p:last-child {
- margin-bottom: 0;
- }
spaces/CofAI/netlist/index.html DELETED
@@ -1,12 +0,0 @@
- <head>
- <title>NetList</title>
-
- <link rel="apple-touch-icon" sizes="180x180" href="apple-touch-icon.png">
- <link rel="icon" type="image/png" sizes="32x32" href="favicon-32x32.png">
- <link rel="icon" type="image/png" sizes="16x16" href="favicon-16x16.png">
- <link rel="manifest" href="site.webmanifest">
- </head>
-
- <script async src="https://cse.google.com/cse.js?cx=d3fab78fd2a994ee3">
- </script>
- <div class="gcse-search"></div>
spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h DELETED
@@ -1,38 +0,0 @@
- // Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
- //
- // NVIDIA CORPORATION and its licensors retain all intellectual property
- // and proprietary rights in and to this software, related documentation
- // and any modifications thereto. Any use, reproduction, disclosure or
- // distribution of this software and related documentation without an express
- // license agreement from NVIDIA CORPORATION is strictly prohibited.
-
- //------------------------------------------------------------------------
- // CUDA kernel parameters.
-
- struct bias_act_kernel_params
- {
- const void* x; // [sizeX]
- const void* b; // [sizeB] or NULL
- const void* xref; // [sizeX] or NULL
- const void* yref; // [sizeX] or NULL
- const void* dy; // [sizeX] or NULL
- void* y; // [sizeX]
-
- int grad;
- int act;
- float alpha;
- float gain;
- float clamp;
-
- int sizeX;
- int sizeB;
- int stepB;
- int loopX;
- };
-
- //------------------------------------------------------------------------
- // CUDA kernel selection.
-
- template <class T> void* choose_bias_act_kernel(const bias_act_kernel_params& p);
-
- //------------------------------------------------------------------------
spaces/Curranj/GPT-SQL/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: GPT SQL
- emoji: 💻
- colorFrom: purple
- colorTo: blue
- sdk: gradio
- sdk_version: 3.16.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py DELETED
@@ -1,471 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
- import os
-
- from yacs.config import CfgNode as CN
-
-
- # -----------------------------------------------------------------------------
- # Convention about Training / Test specific parameters
- # -----------------------------------------------------------------------------
- # Whenever an argument can be either used for training or for testing, the
- # corresponding name will be post-fixed by a _TRAIN for a training parameter,
- # or _TEST for a test-specific parameter.
- # For example, the number of images during training will be
- # IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be
- # IMAGES_PER_BATCH_TEST
-
- # -----------------------------------------------------------------------------
- # Config definition
- # -----------------------------------------------------------------------------
-
- _C = CN()
-
- _C.MODEL = CN()
- _C.MODEL.RPN_ONLY = False
- _C.MODEL.MASK_ON = False
- _C.MODEL.FCOS_ON = False
- _C.MODEL.KE_ON = False
- _C.MODEL.BOUNDARY_ON = False
- _C.MODEL.MSR_ON = False
- _C.MODEL.RETINANET_ON = False
- _C.MODEL.KEYPOINT_ON = False
- _C.MODEL.DEVICE = "cuda"
- _C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN"
- _C.MODEL.CLS_AGNOSTIC_BBOX_REG = False
-
- # If the WEIGHT starts with a catalog://, like :R-50, the code will look for
- # the path in paths_catalog. Else, it will use it as the specified absolute
- # path
- _C.MODEL.WEIGHT = ""
-
-
- # -----------------------------------------------------------------------------
- # INPUT
- # -----------------------------------------------------------------------------
- _C.INPUT = CN()
- # Size of the smallest side of the image during training
- _C.INPUT.MIN_SIZE_TRAIN = (800,) # (800,)
- # The range of the smallest side for multi-scale training
- _C.INPUT.MIN_SIZE_RANGE_TRAIN = (-1, -1) # -1 means disabled and it will use MIN_SIZE_TRAIN
- # Maximum size of the side of the image during training
- _C.INPUT.MAX_SIZE_TRAIN = 1333
- # Size of the smallest side of the image during testing
- _C.INPUT.MIN_SIZE_TEST = 1000
- # Maximum size of the side of the image during testing
- _C.INPUT.MAX_SIZE_TEST = 1333
- # Values to be used for image normalization
- _C.INPUT.PIXEL_MEAN = [102.9801, 115.9465, 122.7717]
- # Values to be used for image normalization
- _C.INPUT.PIXEL_STD = [1., 1., 1.]
- # Convert image to BGR format (for Caffe2 models), in range 0-255
- _C.INPUT.TO_BGR255 = True
- _C.INPUT.CROP_PROB_TRAIN = 1.0
- _C.INPUT.ROTATE_PROB_TRAIN = 0.3
- _C.INPUT.ROTATE_DEGREE = (0,15,-15,45,-45,90,-90)
- # _C.INPUT.ROTATE_DEGREE = 15
-
-
-
-
- # -----------------------------------------------------------------------------
- # Dataset
- # -----------------------------------------------------------------------------
- _C.DATASETS = CN()
- # List of the dataset names for training, as present in paths_catalog.py
- _C.DATASETS.TRAIN = ()
- # List of the dataset names for testing, as present in paths_catalog.py
- _C.DATASETS.TEST = ()
- _C.DATASETS.Test_Visual = False
- # -----------------------------------------------------------------------------
- # DataLoader
- # -----------------------------------------------------------------------------
- _C.DATALOADER = CN()
- # Number of data loading threads
- _C.DATALOADER.NUM_WORKERS = 4
- # If > 0, this enforces that each collated batch should have a size divisible
- # by SIZE_DIVISIBILITY
- _C.DATALOADER.SIZE_DIVISIBILITY = 0
- # If True, each batch should contain only images for which the aspect ratio
- # is compatible. This groups portrait images together, and landscape images
- # are not batched with portrait images.
- _C.DATALOADER.ASPECT_RATIO_GROUPING = True
-
-
- # ---------------------------------------------------------------------------- #
- # Backbone options
- # ---------------------------------------------------------------------------- #
- _C.MODEL.BACKBONE = CN()
-
- # The backbone conv body to use
- # The string must match a function that is imported in modeling.model_builder
- # (e.g., 'FPN.add_fpn_ResNet101_conv5_body' to specify a ResNet-101-FPN
- # backbone)
- _C.MODEL.BACKBONE.CONV_BODY = "R-50-C4"
-
- # Add StopGrad at a specified stage so the bottom layers are frozen
- _C.MODEL.BACKBONE.FREEZE_CONV_BODY_AT = 2
- # GN for backbone
-
- ##123123123
- _C.MODEL.BACKBONE.USE_GN = False
-
-
- # ---------------------------------------------------------------------------- #
- # FPN options
- # ---------------------------------------------------------------------------- #
- _C.MODEL.FPN = CN()
-
- # 123123123
- _C.MODEL.FPN.USE_GN = False
- _C.MODEL.FPN.USE_RELU = False
-
- #############123123123
- _C.MODEL.FPN.USE_DEFORMABLE = False
-
-
- # ---------------------------------------------------------------------------- #
- # Group Norm options
- # ---------------------------------------------------------------------------- #
- _C.MODEL.GROUP_NORM = CN()
- # Number of dimensions per group in GroupNorm (-1 if using NUM_GROUPS)
- _C.MODEL.GROUP_NORM.DIM_PER_GP = -1
- # Number of groups in GroupNorm (-1 if using DIM_PER_GP)
- _C.MODEL.GROUP_NORM.NUM_GROUPS = 32
- # GroupNorm's small constant in the denominator
- _C.MODEL.GROUP_NORM.EPSILON = 1e-5
-
-
- # ---------------------------------------------------------------------------- #
- # RPN options
- # ---------------------------------------------------------------------------- #
- _C.MODEL.RPN = CN()
- _C.MODEL.RPN.USE_FPN = False
- # Base RPN anchor sizes given in absolute pixels w.r.t. the scaled network input
- _C.MODEL.RPN.ANCHOR_SIZES = (32, 64, 128, 256, 512)
- # Stride of the feature map that RPN is attached.
- # For FPN, number of strides should match number of scales
- _C.MODEL.RPN.ANCHOR_STRIDE = (16,)
- # RPN anchor aspect ratios
- _C.MODEL.RPN.ASPECT_RATIOS = (0.5, 1.0, 2.0)
- # Remove RPN anchors that go outside the image by RPN_STRADDLE_THRESH pixels
- # Set to -1 or a large value, e.g. 100000, to disable pruning anchors
- _C.MODEL.RPN.STRADDLE_THRESH = 0
- # Minimum overlap required between an anchor and ground-truth box for the
- # (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD
- # ==> positive RPN example)
- _C.MODEL.RPN.FG_IOU_THRESHOLD = 0.7
- # Maximum overlap allowed between an anchor and ground-truth box for the
158
- # (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD
159
- # ==> negative RPN example)
160
- _C.MODEL.RPN.BG_IOU_THRESHOLD = 0.3
161
- # Total number of RPN examples per image
162
- _C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256
163
- # Target fraction of foreground (positive) examples per RPN minibatch
164
- _C.MODEL.RPN.POSITIVE_FRACTION = 0.5
165
- # Number of top scoring RPN proposals to keep before applying NMS
166
- # When FPN is used, this is *per FPN level* (not total)
167
- _C.MODEL.RPN.PRE_NMS_TOP_N_TRAIN = 12000
168
-
169
- _C.MODEL.RPN.PRE_NMS_TOP_N_TEST = 6000
170
- # Number of top scoring RPN proposals to keep after applying NMS
171
- _C.MODEL.RPN.POST_NMS_TOP_N_TRAIN = 2000
172
- _C.MODEL.RPN.POST_NMS_TOP_N_TEST = 1000
173
- # NMS threshold used on RPN proposals
174
- _C.MODEL.RPN.NMS_THRESH = 0.7
175
- # Proposal height and width both need to be greater than RPN_MIN_SIZE
176
- # (a the scale used during training or inference)
177
- _C.MODEL.RPN.MIN_SIZE = 0
178
- # Number of top scoring RPN proposals to keep after combining proposals from
179
- # all FPN levels
180
- _C.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN = 2000
181
- _C.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST = 2000
182
- # Custom rpn head, empty to use default conv or separable conv
183
- _C.MODEL.RPN.RPN_HEAD = "SingleConvRPNHead_1"
184
-
185
-
186
- # ---------------------------------------------------------------------------- #
187
- # ROI HEADS options
188
- # ---------------------------------------------------------------------------- #
189
- _C.MODEL.ROI_HEADS = CN()
190
- _C.MODEL.ROI_HEADS.USE_FPN = False
191
- _C.MODEL.ROI_HEADS.USE_FPN = False
192
- # Overlap threshold for an RoI to be considered foreground (if >= FG_IOU_THRESHOLD)
193
- _C.MODEL.ROI_HEADS.FG_IOU_THRESHOLD = 0.5
194
- # Overlap threshold for an RoI to be considered background
195
- # (class = 0 if overlap in [0, BG_IOU_THRESHOLD))
196
- _C.MODEL.ROI_HEADS.BG_IOU_THRESHOLD = 0.5
197
- # Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets
198
- # These are empirically chosen to approximately lead to unit variance targets
199
- _C.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS = (10., 10., 5., 5.)
200
- # RoI minibatch size *per image* (number of regions of interest [ROIs])
201
- # Total number of RoIs per training minibatch =
202
- # TRAIN.BATCH_SIZE_PER_IM * TRAIN.IMS_PER_BATCH
203
- # E.g., a common configuration is: 512 * 2 * 8 = 8192
204
- _C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
205
- # Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0)
206
- _C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25
207
-
208
- # Only used on test mode
209
-
210
- # Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to
211
- # balance obtaining high recall with not having too many low precision
212
- # detections that will slow down inference post processing steps (like NMS)
213
- _C.MODEL.ROI_HEADS.SCORE_THRESH = 0.05
214
- # Overlap threshold used for non-maximum suppression (suppress boxes with
215
- # IoU >= this threshold)
216
- _C.MODEL.ROI_HEADS.NMS = 0.5
217
- # Maximum number of detections to return per image (100 is based on the limit established for the COCO dataset)
218
- _C.MODEL.ROI_HEADS.DETECTIONS_PER_IMG = 100
219
-
220
-
221
- _C.MODEL.ROI_BOX_HEAD = CN()
222
- _C.MODEL.ROI_BOX_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor"
223
- _C.MODEL.ROI_BOX_HEAD.PREDICTOR = "FastRCNNPredictor"
224
- _C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14
225
- _C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0
226
- _C.MODEL.ROI_BOX_HEAD.POOLER_SCALES = (1.0 / 16,)
227
- _C.MODEL.ROI_BOX_HEAD.NUM_CLASSES = 81
228
- # Hidden layer dimension when using an MLP for the RoI box head
229
- _C.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM = 1024
230
- # GN
231
- #####123123123
232
- _C.MODEL.ROI_BOX_HEAD.USE_GN = False
233
- # Dilation
234
- _C.MODEL.ROI_BOX_HEAD.DILATION = 1
235
- _C.MODEL.ROI_BOX_HEAD.CONV_HEAD_DIM = 256
236
-
237
- #### 123123
238
- _C.MODEL.ROI_BOX_HEAD.NUM_STACKED_CONVS = 4
239
- _C.MODEL.ROI_BOX_HEAD.CLASS_WEIGHT = 0.1
240
- _C.MODEL.ROI_BOX_HEAD.DEFORMABLE_POOLING = False
241
-
242
- _C.MODEL.ROI_MASK_HEAD = CN()
243
- # Whether or not resize and translate masks to the input image.
244
- _C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS = False
245
- _C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS_THRESHOLD = 0.5
246
- _C.MODEL.ROI_MASK_HEAD.DILATION = 1
247
- _C.MODEL.ROI_MASK_HEAD.USE_GN = False
248
-
249
- # Boundary edge
250
- _C.MODEL.ROI_BOUNDARY_HEAD = CN()
251
- _C.MODEL.ROI_BOUNDARY_HEAD.DEFORMABLE_POOLING = False
252
-
253
- _C.MODEL.ROI_BOUNDARY_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor"
254
- _C.MODEL.ROI_BOUNDARY_HEAD.POOLER_RESOLUTION = 14
255
- _C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SCALES = (1.0 / 16,)
256
- _C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SAMPLING_RATIO = 0
257
- _C.MODEL.ROI_BOUNDARY_HEAD.CONV_LAYERS = (256, 256, 256, 256)
258
-
259
- _C.MODEL.ROI_BOUNDARY_HEAD.PREDICTOR = "KERCNNC4Predictor"
260
- _C.MODEL.ROI_BOUNDARY_HEAD.RESOLUTION = 14
261
- _C.MODEL.ROI_BOUNDARY_HEAD.SHARE_BOX_FEATURE_EXTRACTOR = True
262
- _C.MODEL.ROI_BOUNDARY_HEAD.BO_WEIGHT = 1.0
263
- _C.MODEL.ROI_BOUNDARY_HEAD.Loss_balance = 1.2
264
-
265
- # ---------------------------------------------------------------------------- #
266
- # ResNe[X]t options (ResNets = {ResNet, ResNeXt}
267
- # Note that parts of a resnet may be used for both the backbone and the head
268
- # These options apply to both
269
- # ---------------------------------------------------------------------------- #
270
- _C.MODEL.RESNETS = CN()
271
-
272
- # Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt
273
- _C.MODEL.RESNETS.NUM_GROUPS = 1
274
-
275
- # Baseline width of each group
276
- _C.MODEL.RESNETS.WIDTH_PER_GROUP = 64
277
-
278
- # Place the stride 2 conv on the 1x1 filter
279
- # Use True only for the original MSRA ResNet; use False for C2 and Torch models
280
- _C.MODEL.RESNETS.STRIDE_IN_1X1 = True
281
-
282
- # Residual transformation function
283
- _C.MODEL.RESNETS.TRANS_FUNC = "BottleneckWithFixedBatchNorm"
284
- _C.MODEL.RESNETS.DEF_FUNC = "DeformableConvWithFixedBatchNorm"
285
- # ResNet's stem function (conv1 and pool1)
286
- _C.MODEL.RESNETS.STEM_FUNC = "StemWithFixedBatchNorm"
287
- _C.MODEL.RESNETS.DEF_START_MODULE = "NA"
288
-
289
- #########123123123
290
- _C.MODEL.RESNETS.DEFORM_POOLING = False
291
-
292
- # Apply dilation in stage "res5"
293
- _C.MODEL.RESNETS.RES5_DILATION = 1
294
-
295
- _C.MODEL.RESNETS.BACKBONE_OUT_CHANNELS = 256 * 4
296
- _C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256
297
- _C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64
298
-
299
- # ---------------------------------------------------------------------------- #
300
- # FCOS Options
301
- # ---------------------------------------------------------------------------- #
302
- _C.MODEL.FCOS = CN()
303
- _C.MODEL.FCOS.NUM_CLASSES = 81 # the number of classes including background
304
- _C.MODEL.FCOS.FPN_STRIDES = [8, 16, 32, 64, 128]
305
- _C.MODEL.FCOS.PRIOR_PROB = 0.01
306
- _C.MODEL.FCOS.INFERENCE_TH = 0.05
307
- _C.MODEL.FCOS.NMS_TH = 0.4
308
- _C.MODEL.FCOS.PRE_NMS_TOP_N = 1000
309
-
310
- # Focal loss parameter: alpha
311
- _C.MODEL.FCOS.LOSS_ALPHA = 0.25
312
- # Focal loss parameter: gamma
313
- _C.MODEL.FCOS.LOSS_GAMMA = 2.0
314
- _C.MODEL.FCOS.SIZES_OF_INTEREST = [64, 128, 256, 512]
315
-
316
- # the number of convolutions used in the cls and bbox tower
317
- _C.MODEL.FCOS.NUM_CONVS = 4
318
-
319
- # ---------------------------------------------------------------------------- #
320
- # RetinaNet Options (Follow the Detectron version)
321
- # ---------------------------------------------------------------------------- #
322
- _C.MODEL.RETINANET = CN()
323
-
324
- # This is the number of foreground classes and background.
325
- _C.MODEL.RETINANET.NUM_CLASSES = 81
326
-
327
- # Anchor aspect ratios to use
328
- _C.MODEL.RETINANET.ANCHOR_SIZES = (32, 64, 128, 256, 512)
329
- _C.MODEL.RETINANET.ASPECT_RATIOS = (0.5, 1.0, 2.0)
330
- _C.MODEL.RETINANET.ANCHOR_STRIDES = (8, 16, 32, 64, 128)
331
- _C.MODEL.RETINANET.STRADDLE_THRESH = 0
332
-
333
- # Anchor scales per octave
334
- _C.MODEL.RETINANET.OCTAVE = 2.0
335
- _C.MODEL.RETINANET.SCALES_PER_OCTAVE = 3
336
-
337
- # Use C5 or P5 to generate P6
338
- _C.MODEL.RETINANET.USE_C5 = True
339
-
340
- # Convolutions to use in the cls and bbox tower
341
- # NOTE: this doesn't include the last conv for logits
342
- _C.MODEL.RETINANET.NUM_CONVS = 4
343
-
344
- # Weight for bbox_regression loss
345
- _C.MODEL.RETINANET.BBOX_REG_WEIGHT = 4.0
346
-
347
- # Smooth L1 loss beta for bbox regression
348
- _C.MODEL.RETINANET.BBOX_REG_BETA = 0.11
349
-
350
- # During inference, #locs to select based on cls score before NMS is performed
351
- # per FPN level
352
- _C.MODEL.RETINANET.PRE_NMS_TOP_N = 1000
353
-
354
- # IoU overlap ratio for labeling an anchor as positive
355
- # Anchors with >= iou overlap are labeled positive
356
- _C.MODEL.RETINANET.FG_IOU_THRESHOLD = 0.5
357
-
358
- # IoU overlap ratio for labeling an anchor as negative
359
- # Anchors with < iou overlap are labeled negative
360
- _C.MODEL.RETINANET.BG_IOU_THRESHOLD = 0.4
361
-
362
- # Focal loss parameter: alpha
363
- _C.MODEL.RETINANET.LOSS_ALPHA = 0.25
364
-
365
- # Focal loss parameter: gamma
366
- _C.MODEL.RETINANET.LOSS_GAMMA = 2.0
367
-
368
- # Prior prob for the positives at the beginning of training. This is used to set
369
- # the bias init for the logits layer
370
- _C.MODEL.RETINANET.PRIOR_PROB = 0.01
371
-
372
- # Inference cls score threshold, anchors with score > INFERENCE_TH are
373
- # considered for inference
374
- _C.MODEL.RETINANET.INFERENCE_TH = 0.05
375
-
376
- # NMS threshold used in RetinaNet
377
- _C.MODEL.RETINANET.NMS_TH = 0.4
378
-
379
-
380
- # ---------------------------------------------------------------------------- #
381
- # FBNet options
382
- # ---------------------------------------------------------------------------- #
383
- _C.MODEL.FBNET = CN()
384
- _C.MODEL.FBNET.ARCH = "default"
385
- # custom arch
386
- _C.MODEL.FBNET.ARCH_DEF = ""
387
- _C.MODEL.FBNET.BN_TYPE = "bn"
388
- _C.MODEL.FBNET.SCALE_FACTOR = 1.0
389
- # the output channels will be divisible by WIDTH_DIVISOR
390
- _C.MODEL.FBNET.WIDTH_DIVISOR = 1
391
- _C.MODEL.FBNET.DW_CONV_SKIP_BN = True
392
- _C.MODEL.FBNET.DW_CONV_SKIP_RELU = True
393
-
394
- # > 0 scale, == 0 skip, < 0 same dimension
395
- _C.MODEL.FBNET.DET_HEAD_LAST_SCALE = 1.0
396
- _C.MODEL.FBNET.DET_HEAD_BLOCKS = []
397
- # overwrite the stride for the head, 0 to use original value
398
- _C.MODEL.FBNET.DET_HEAD_STRIDE = 0
399
-
400
- # > 0 scale, == 0 skip, < 0 same dimension
401
- _C.MODEL.FBNET.KPTS_HEAD_LAST_SCALE = 0.0
402
- _C.MODEL.FBNET.KPTS_HEAD_BLOCKS = []
403
- # overwrite the stride for the head, 0 to use original value
404
- _C.MODEL.FBNET.KPTS_HEAD_STRIDE = 0
405
-
406
- # > 0 scale, == 0 skip, < 0 same dimension
407
- _C.MODEL.FBNET.MASK_HEAD_LAST_SCALE = 0.0
408
- _C.MODEL.FBNET.MASK_HEAD_BLOCKS = []
409
- # overwrite the stride for the head, 0 to use original value
410
- _C.MODEL.FBNET.MASK_HEAD_STRIDE = 0
411
-
412
- # 0 to use all blocks defined in arch_def
413
- _C.MODEL.FBNET.RPN_HEAD_BLOCKS = 0
414
- _C.MODEL.FBNET.RPN_BN_TYPE = ""
415
-
416
-
417
- # ---------------------------------------------------------------------------- #
418
- # Solver
419
- # ---------------------------------------------------------------------------- #
420
- _C.SOLVER = CN()
421
- _C.SOLVER.MAX_ITER = 40000
422
-
423
- _C.SOLVER.BASE_LR = 0.001
424
- _C.SOLVER.BIAS_LR_FACTOR = 2
425
-
426
- _C.SOLVER.MOMENTUM = 0.9
427
-
428
- _C.SOLVER.WEIGHT_DECAY = 0.0005
429
- _C.SOLVER.WEIGHT_DECAY_BIAS = 0
430
-
431
- _C.SOLVER.GAMMA = 0.1
432
- _C.SOLVER.STEPS = (30000,)
433
-
434
- _C.SOLVER.WARMUP_FACTOR = 1.0 / 3
435
- _C.SOLVER.WARMUP_ITERS = 500
436
- _C.SOLVER.WARMUP_METHOD = "linear"
437
-
438
- _C.SOLVER.CHECKPOINT_PERIOD = 2500
439
-
440
- # Number of images per batch
441
- # This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will
442
- # see 2 images per batch
443
- _C.SOLVER.IMS_PER_BATCH = 4
444
-
445
- # ---------------------------------------------------------------------------- #
446
- # Specific test options
447
- # ---------------------------------------------------------------------------- #
448
- _C.TEST = CN()
449
- _C.TEST.EXPECTED_RESULTS = []
450
- _C.TEST.EXPECTED_RESULTS_SIGMA_TOL = 4
451
- # Number of images per batch
452
- # This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will
453
- # see 2 images per batch
454
- _C.TEST.IMS_PER_BATCH = 16
455
- # Number of detections per image
456
- _C.TEST.DETECTIONS_PER_IMG = 100
457
-
458
-
459
- # ---------------------------------------------------------------------------- #
460
- # Misc options
461
- # ---------------------------------------------------------------------------- #
462
- _C.OUTPUT_DIR = "./1"
463
- _C.IS_LOAD_OPTIMIZER = True
464
- _C.IS_LOAD_SCHEDULER = True
465
- _C.PROCESS = CN()
466
-
467
- #####123123123
468
- _C.PROCESS.PNMS = False
469
- _C.PROCESS.NMS_THRESH = 0.4
470
-
471
- _C.PATHS_CATALOG = os.path.join(os.path.dirname(__file__), "paths_catalog.py")
 
spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py DELETED
@@ -1,50 +0,0 @@
- import json
- import numpy as np
- import httpx
-
- from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN
-
-
- def get_mubert_tags_embeddings(w2v_model):
-     return w2v_model.encode(MUBERT_TAGS)
-
-
- def get_pat(email: str):
-     r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess',
-                    json={
-                        "method": "GetServiceAccess",
-                        "params": {
-                            "email": email,
-                            "license": MUBERT_LICENSE,
-                            "token": MUBERT_TOKEN,
-                            "mode": MUBERT_MODE,
-                        }
-                    })
-
-     rdata = json.loads(r.text)
-     assert rdata['status'] == 1, "probably incorrect e-mail"
-     pat = rdata['data']['pat']
-     return pat
-
-
- def find_similar(em, embeddings, method='cosine'):
-     scores = []
-     for ref in embeddings:
-         if method == 'cosine':
-             scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em)))
-         if method == 'norm':
-             scores.append(np.linalg.norm(ref - em))
-     return np.array(scores), np.argsort(scores)
-
-
- def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False):
-     prompts_embeddings = w2v_model.encode(prompts)
-     ret = []
-     for i, pe in enumerate(prompts_embeddings):
-         scores, idxs = find_similar(pe, mubert_tags_embeddings)
-         top_tags = MUBERT_TAGS[idxs[:top_n]]
-         top_prob = 1 - scores[idxs[:top_n]]
-         if debug:
-             print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n")
-         ret.append((prompts[i], list(top_tags)))
-     return ret
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py DELETED
@@ -1,110 +0,0 @@
- #
- # The Python Imaging Library
- # $Id$
- #
- # base class for raster font file parsers
- #
- # history:
- # 1997-06-05 fl   created
- # 1997-08-19 fl   restrict image width
- #
- # Copyright (c) 1997-1998 by Secret Labs AB
- # Copyright (c) 1997-1998 by Fredrik Lundh
- #
- # See the README file for information on usage and redistribution.
- #
-
-
- import os
-
- from . import Image, _binary
-
- WIDTH = 800
-
-
- def puti16(fp, values):
-     """Write network order (big-endian) 16-bit sequence"""
-     for v in values:
-         if v < 0:
-             v += 65536
-         fp.write(_binary.o16be(v))
-
-
- class FontFile:
-     """Base class for raster font file handlers."""
-
-     bitmap = None
-
-     def __init__(self):
-         self.info = {}
-         self.glyph = [None] * 256
-
-     def __getitem__(self, ix):
-         return self.glyph[ix]
-
-     def compile(self):
-         """Create metrics and bitmap"""
-
-         if self.bitmap:
-             return
-
-         # create bitmap large enough to hold all data
-         h = w = maxwidth = 0
-         lines = 1
-         for glyph in self:
-             if glyph:
-                 d, dst, src, im = glyph
-                 h = max(h, src[3] - src[1])
-                 w = w + (src[2] - src[0])
-                 if w > WIDTH:
-                     lines += 1
-                     w = src[2] - src[0]
-                 maxwidth = max(maxwidth, w)
-
-         xsize = maxwidth
-         ysize = lines * h
-
-         if xsize == 0 and ysize == 0:
-             return ""
-
-         self.ysize = h
-
-         # paste glyphs into bitmap
-         self.bitmap = Image.new("1", (xsize, ysize))
-         self.metrics = [None] * 256
-         x = y = 0
-         for i in range(256):
-             glyph = self[i]
-             if glyph:
-                 d, dst, src, im = glyph
-                 xx = src[2] - src[0]
-                 # yy = src[3] - src[1]
-                 x0, y0 = x, y
-                 x = x + xx
-                 if x > WIDTH:
-                     x, y = 0, y + h
-                     x0, y0 = x, y
-                     x = xx
-                 s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0
-                 self.bitmap.paste(im.crop(src), s)
-                 self.metrics[i] = d, dst, s
-
-     def save(self, filename):
-         """Save font"""
-
-         self.compile()
-
-         # font data
-         self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")
-
-         # font metrics
-         with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:
-             fp.write(b"PILfont\n")
-             fp.write(f";;;;;;{self.ysize};\n".encode("ascii"))  # HACK!!!
-             fp.write(b"DATA\n")
-             for id in range(256):
-                 m = self.metrics[id]
-                 if not m:
-                     puti16(fp, [0] * 10)
-                 else:
-                     puti16(fp, m[0] + m[1] + m[2])
 
spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py DELETED
@@ -1,84 +0,0 @@
- from argparse import ArgumentParser
- from configs.paths_config import model_paths
-
-
- class TrainOptions:
-
-     def __init__(self):
-         self.parser = ArgumentParser()
-         self.initialize()
-
-     def initialize(self):
-         self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory')
-         self.parser.add_argument('--dataset_type', default='ffhq_encode', type=str,
-                                  help='Type of dataset/experiment to run')
-         self.parser.add_argument('--encoder_type', default='Encoder4Editing', type=str, help='Which encoder to use')
-
-         self.parser.add_argument('--batch_size', default=4, type=int, help='Batch size for training')
-         self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference')
-         self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers')
-         self.parser.add_argument('--test_workers', default=2, type=int,
-                                  help='Number of test/inference dataloader workers')
-
-         self.parser.add_argument('--learning_rate', default=0.0001, type=float, help='Optimizer learning rate')
-         self.parser.add_argument('--optim_name', default='ranger', type=str, help='Which optimizer to use')
-         self.parser.add_argument('--train_decoder', default=False, type=bool, help='Whether to train the decoder model')
-         self.parser.add_argument('--start_from_latent_avg', action='store_true',
-                                  help='Whether to add average latent vector to generate codes from encoder.')
-         self.parser.add_argument('--lpips_type', default='alex', type=str, help='LPIPS backbone')
-
-         self.parser.add_argument('--lpips_lambda', default=0.8, type=float, help='LPIPS loss multiplier factor')
-         self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor')
-         self.parser.add_argument('--l2_lambda', default=1.0, type=float, help='L2 loss multiplier factor')
-
-         self.parser.add_argument('--stylegan_weights', default=model_paths['stylegan_ffhq'], type=str,
-                                  help='Path to StyleGAN model weights')
-         self.parser.add_argument('--stylegan_size', default=1024, type=int,
-                                  help='Size of pretrained StyleGAN Generator')
-         self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to pSp model checkpoint')
-
-         self.parser.add_argument('--max_steps', default=500000, type=int, help='Maximum number of training steps')
-         self.parser.add_argument('--image_interval', default=100, type=int,
-                                  help='Interval for logging train images during training')
-         self.parser.add_argument('--board_interval', default=50, type=int,
-                                  help='Interval for logging metrics to tensorboard')
-         self.parser.add_argument('--val_interval', default=1000, type=int, help='Validation interval')
-         self.parser.add_argument('--save_interval', default=None, type=int, help='Model checkpoint interval')
-
-         # Discriminator flags
-         self.parser.add_argument('--w_discriminator_lambda', default=0, type=float, help='Dw loss multiplier')
-         self.parser.add_argument('--w_discriminator_lr', default=2e-5, type=float, help='Dw learning rate')
-         self.parser.add_argument("--r1", type=float, default=10, help="weight of the r1 regularization")
-         self.parser.add_argument("--d_reg_every", type=int, default=16,
-                                  help="interval for applying r1 regularization")
-         self.parser.add_argument('--use_w_pool', action='store_true',
-                                  help='Whether to store a latent codes pool for the discriminator\'s training')
-         self.parser.add_argument("--w_pool_size", type=int, default=50,
-                                  help="W\'s pool size, depends on --use_w_pool")
-
-         # e4e specific
-         self.parser.add_argument('--delta_norm', type=int, default=2, help="norm type of the deltas")
-         self.parser.add_argument('--delta_norm_lambda', type=float, default=2e-4, help="lambda for delta norm loss")
-
-         # Progressive training
-         self.parser.add_argument('--progressive_steps', nargs='+', type=int, default=None,
-                                  help="The training steps of training new deltas. steps[i] starts the delta_i training")
-         self.parser.add_argument('--progressive_start', type=int, default=None,
-                                  help="The training step to start training the deltas, overrides progressive_steps")
-         self.parser.add_argument('--progressive_step_every', type=int, default=2_000,
-                                  help="Amount of training steps for each progressive step")
-
-         # Save additional training info to enable future training continuation from produced checkpoints
-         self.parser.add_argument('--save_training_data', action='store_true',
-                                  help='Save intermediate training data to resume training from the checkpoint')
-         self.parser.add_argument('--sub_exp_dir', default=None, type=str, help='Name of sub experiment directory')
-         self.parser.add_argument('--keep_optimizer', action='store_true',
-                                  help='Whether to continue from the checkpoint\'s optimizer')
-         self.parser.add_argument('--resume_training_from_ckpt', default=None, type=str,
-                                  help='Path to training checkpoint, works when --save_training_data was set to True')
-         self.parser.add_argument('--update_param_list', nargs='+', type=str, default=None,
-                                  help="Name of training parameters to update the loaded training checkpoint")
-
-     def parse(self):
-         opts = self.parser.parse_args()
-         return opts