parquet-converter committed
Commit d94c811 · 1 parent: 42472b3

Update parquet files (step 1 of 476)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Step-by-Step Guide to Installing ArcGIS Desktop 10.8 on Your PC.md +0 -26
  2. spaces/1gistliPinn/ChatGPT4/Examples/Ace Combat Assault Horizon Multiplayer Crack Downloadl BEST.md +0 -6
  3. spaces/1gistliPinn/ChatGPT4/Examples/Adobeindesigncs6freeserialnumberlist [PORTABLE].md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Autocad Longbow Converter Crack.md +0 -39
  5. spaces/1phancelerku/anime-remove-background/Customize Your Weapons Units and Rules in Zombie Combat Simulator APK.md +0 -80
  6. spaces/1phancelerku/anime-remove-background/Download Mp3 and Lyrics of Worship Rise by Travis Greene A Song that will Make You Pour Your Love on God.md +0 -115
  7. spaces/1phancelerku/anime-remove-background/Download Pokemon Fire Red XY GBA ROM and Enjoy the Best of Both Worlds.md +0 -133
  8. spaces/1phancelerku/anime-remove-background/Enjoy the Ultimate Parking Simulation with Parking Master Multiplayer Mod APK.md +0 -102
  9. spaces/232labs/VToonify/vtoonify/model/raft/core/corr.py +0 -91
  10. spaces/3B-Group/ConvRe-Leaderboard/src/demo.py +0 -83
  11. spaces/801artistry/RVC801/lib/globals/globals.py +0 -5
  12. spaces/801artistry/RVC801/tools/infer/train-index-v2.py +0 -79
  13. spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/base.py +0 -34
  14. spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/txt_processors/base_text_processor.py +0 -47
  15. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/utils.py +0 -189
  16. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb128_coslr-90e_in21k.py +0 -11
  17. spaces/Ababababababbababa/AraPoet/README.md +0 -14
  18. spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/seanet.py +0 -258
  19. spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-jamo.py +0 -3
  20. spaces/AchyuthGamer/OpenGPT/client/js/theme-toggler.js +0 -22
  21. spaces/Adapter/CoAdapter/app.py +0 -264
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/Sizer.d.ts +0 -203
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/swipe/Swipe.js +0 -2
  24. spaces/AlexWang/lama/models/ade20k/segm_lib/nn/__init__.py +0 -2
  25. spaces/Alfasign/Midjourney_Prompt/README.md +0 -12
  26. spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r34.py +0 -26
  27. spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/train.py +0 -73
  28. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/self_attention_guidance.md +0 -35
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py +0 -900
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py +0 -706
  31. spaces/Andy1621/uniformer_image_detection/configs/ghm/README.md +0 -23
  32. spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/cross_entropy_loss.py +0 -214
  33. spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py +0 -12
  34. spaces/AndySAnker/DeepStruc/README.md +0 -23
  35. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/contour_expand.py +0 -49
  36. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/distributions/distributions.py +0 -92
  37. spaces/Anthony-Ml/covid_predictor/utils.py +0 -100
  38. spaces/Anustup/NS_AI_LABS/app-network.py +0 -3
  39. spaces/ArtGAN/Segment-Anything-Video/demo.py +0 -110
  40. spaces/Artrajz/vits-simple-api/vits/text/japanese.py +0 -169
  41. spaces/Ash2219/AIchatbot/README.md +0 -12
  42. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py +0 -163
  43. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_fileno.py +0 -24
  44. spaces/Awesimo/jojogan/op/upfirdn2d.cpp +0 -23
  45. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/__init__.py +0 -14
  46. spaces/Banbri/zcvzcv/src/components/ui/use-toast.ts +0 -192
  47. spaces/Benson/text-generation/Examples/Apksum Choque De Clanes.md +0 -98
  48. spaces/Benson/text-generation/Examples/Chess Cnvcs Apk.md +0 -71
  49. spaces/Benson/text-generation/Examples/Cmo Hacer Un Regalo De Cumpleaos.md +0 -53
  50. spaces/Benson/text-generation/Examples/Descargar Ftbol Real 2023.md +0 -109
spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Step-by-Step Guide to Installing ArcGIS Desktop 10.8 on Your PC.md DELETED
@@ -1,26 +0,0 @@
- <br />
- <h1>How to install ArcGIS Desktop 10.8 on Windows 10</h1>
- <p>ArcGIS Desktop 10.8 is a powerful geographic information system (GIS) software that allows you to create, analyze and visualize spatial data. In this article, we will show you how to install ArcGIS Desktop 10.8 on Windows 10.</p>
- <h2>install arcgis desktop 10.8</h2><br /><p><b><b>Download Zip</b> &#10002; <a href="https://byltly.com/2uKwam">https://byltly.com/2uKwam</a></b></p><br /><br />
- <h2>Step 1: Download the setup file</h2>
- <p>First, you need to download the setup file from the Esri website. You can choose between a web installer or a full setup file. The web installer is smaller and will download the necessary files during the installation process. The full setup file is larger and contains all the files needed for the installation. You can download either one from <a href="https://www.esri.com/en-us/industries/overview">here</a>.</p>
- <h2>Step 2: Run the setup file</h2>
- <p>After downloading the setup file, double-click on it to start the installation process. You will see a welcome screen that asks you to choose your preferred language. Select your language and click <b>OK</b>.</p>
- <p>Then, you will see a license agreement screen that asks you to accept the terms and conditions. Read the agreement carefully and click <b>I accept the license agreement</b> if you agree. Then click <b>Next</b>.</p>
- <h2>Step 3: Choose the installation type</h2>
- <p>Next, you will see a screen that asks you to choose the installation type. You can choose between a complete or a custom installation. A complete installation will install all the components of ArcGIS Desktop 10.8, while a custom installation will allow you to select which components you want to install. For this tutorial, we will choose a complete installation. Click <b>Complete</b> and then click <b>Next</b>.</p>
- <p></p>
- <h2>Step 4: Choose the installation folder</h2>
- <p>Then, you will see a screen that asks you to choose the installation folder. The default folder is C:\Program Files (x86)\ArcGIS\Desktop10.8\. You can change it if you want, but make sure you have enough disk space for the installation. Click <b>Next</b>.</p>
- <h2>Step 5: Choose the authorization option</h2>
- <p>Next, you will see a screen that asks you to choose the authorization option. You can choose between a single use or a concurrent use license. A single use license allows you to use ArcGIS Desktop 10.8 on one computer only, while a concurrent use license allows you to use it on multiple computers that are connected to a license manager server. For this tutorial, we will choose a single use license. Click <b>Single Use</b> and then click <b>Next</b>.</p>
- <h2>Step 6: Enter the authorization number</h2>
- <p>Then, you will see a screen that asks you to enter the authorization number. This is a 12-digit code that you should have received from Esri when you purchased ArcGIS Desktop 10.8. Enter the authorization number and click <b>Next</b>.</p>
- <h2>Step 7: Review and start the installation</h2>
- <p>Finally, you will see a screen that shows you a summary of your installation choices. Review them and click <b>Install</b> to start the installation process.</p>
- <p>The installation process may take several minutes depending on your computer speed and internet connection. You will see a progress bar that shows you how much of the installation is completed.</p>
- <h2>Step 8: Finish the installation</h2>
- <p>When the installation is finished, you will see a screen that says <b>The installation was successful.</b>. Click <b>Finish</b> to exit the setup wizard.</p>
- <p>Congratulations! You have successfully installed ArcGIS Desktop 10.8 on Windows 10.</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Ace Combat Assault Horizon Multiplayer Crack Downloadl BEST.md DELETED
@@ -1,6 +0,0 @@
- <h2>Ace Combat Assault Horizon Multiplayer Crack Downloadl</h2><br /><p><b><b>Download File</b> &#9193; <a href="https://imgfil.com/2uxY71">https://imgfil.com/2uxY71</a></b></p><br /><br />
-
- World Conqueror 4 is an Strategy Game for android Download latest version of ... Welcome to MPGH - MultiPlayer Game Hacking, the world's leader in Fortnite ... Combat Arms Hacks, Crossfire Hacks, WarRock Hacks, SoldierFront Hacks, ... a 171(Pré-Alpha) Free Download PC Game Cracked in Direct Link and Torrent. 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Adobeindesigncs6freeserialnumberlist [PORTABLE].md DELETED
@@ -1,6 +0,0 @@
- <h2>adobeindesigncs6freeserialnumberlist</h2><br /><p><b><b>Download Zip</b> &#187;&#187;&#187; <a href="https://imgfil.com/2uxZsQ">https://imgfil.com/2uxZsQ</a></b></p><br /><br />
-
- Product key for Adobe Photoshop CS6. The serial keys for the full version of Adobe Photoshop CS6 are 1330-1156-0980-8094-0093-3404. 1330-1416-8167-3432-7342-5065. 1416-1578-3382-7447-8057-2084. 1578-1226-1272-1320-3409-6036. 1226-1332-1341-1330-8047-3245. 1331-1354-1355-1353-3470-3841-1397. 1354-1399-1398-1398-1419-8036-5048. 1399-1421-1422-1420-1389-7342-3747. 1419-1424-1425-1425-1339-7432-3749. 1424-1429-1433-1428-1378-8602-6051. 1429-1427-1428-1429-1438-8582. 1428-1439-1439-1438-1391-8603-6052. 1440-1427-1428-1429 8a78ff9644<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Autocad Longbow Converter Crack.md DELETED
@@ -1,39 +0,0 @@
- <br />
- <h1>How to Install and Run Old Versions of AutoCAD with Longbow Converter</h1>
- <p>If you have an old version of AutoCAD that you want to use on your new Windows system, you may face some compatibility issues that prevent you from installing or running your software. This can be frustrating and costly, especially if you have a valid license for your old AutoCAD and you don't want to upgrade to a newer version. Fortunately, there is a solution that can help you install and run old versions of AutoCAD on new Windows systems: Longbow Converter.</p>
- <h2>autocad longbow converter crack</h2><br /><p><b><b>Download</b> &#9913; <a href="https://imgfil.com/2uxXX4">https://imgfil.com/2uxXX4</a></b></p><br /><br />
- <p>Longbow Converter is a software that can update your installer media so that your 32-bit AutoCAD can install and run normally on your 64-bit Windows 7 or Windows Vista. It can also configure your installer so that once you install your 32-bit AutoCAD, it can use up to 4GB of RAM on a 64-bit system, instead of the restrictive 2GB limit. Longbow Converter can also fix various errors and bugs that may occur when running old versions of AutoCAD on new Windows systems.</p>
- <p>In this article, we will explain what Longbow Converter is, how it works, what are its benefits and drawbacks, and how to use it effectively. We will also warn you about the risks of using a crack or a keygen to activate Longbow Converter, and suggest some alternatives to Longbow Converter.</p>
- <h2>What is Longbow Converter?</h2>
- <p>Longbow Converter is a software that can modify the installer media of your old AutoCAD software so that it can bypass the compatibility checks and install on your new Windows system. It can also modify the registry entries and files of your installed AutoCAD software so that it can run smoothly on your new Windows system.</p>
- <p>Longbow Converter supports most versions of AutoCAD from AutoCAD R14 to AutoCAD 2013. It also supports most versions of Windows from Windows XP to Windows 10. However, it does not support 64-bit versions of AutoCAD or Windows XP 64-bit.</p>
- <p></p>
- <p>To use Longbow Converter, you need to have the original installer media of your old AutoCAD software, such as a CD-ROM or a DVD-ROM. You also need to have a valid license key for your old AutoCAD software. You cannot use Longbow Converter with cracked or pirated versions of AutoCAD.</p>
- <h2>How Does Longbow Converter Work?</h2>
- <p>Longbow Converter works by modifying the installer media of your old AutoCAD software so that it can bypass the compatibility checks and install on your new Windows system. It also modifies the registry entries and files of your installed AutoCAD software so that it can run smoothly on your new Windows system.</p>
- <p>To use Longbow Converter, you need to follow these steps:</p>
- <ol>
- <li><strong>Download and install Longbow Converter.</strong> You can download Longbow Converter from its official website (https://www.longbowsoftware.com/). You need to pay a one-time fee of $39.95 to get a license key for Longbow Converter. You can also request a free trial or a demo before buying it.</li>
- <li><strong>Insert your old AutoCAD installer media.</strong> You need to insert your old AutoCAD installer media into your CD-ROM or DVD-ROM drive. Make sure that the drive letter is D: or E:. If not, you need to change it in the Disk Management tool.</li>
- <li><strong>Run Longbow Converter.</strong> You need to run Longbow Converter as an administrator. You can do this by right-clicking on the Longbow Converter icon and selecting <strong>Run as administrator</strong>.</li>
- <li><strong>Select your old AutoCAD version.</strong> You need to select your old AutoCAD version from the drop-down list in the Longbow Converter window. For example, if you have AutoCAD 2008, you need to select <strong>AutoCAD 2008</strong>.</li>
- <li><strong>Select your installer media type.</strong> You need to select your installer media type from the drop-down list in the Longbow Converter window. For example, if you have a CD-ROM installer media, you need to select <strong>CD-ROM</strong>.</li>
- <li><strong>Select your installer media drive letter.</strong> You need to select your installer media drive letter from the drop-down list in the Longbow Converter window. For example, if your CD-ROM drive letter is D:, you need to select <strong>D:</strong>.</li>
- <li><strong>Select your installation folder.</strong> You need to select your installation folder for your old AutoCAD software in the Longbow Converter window. You can use the default folder or browse for a different folder. For example, if you want to install your old AutoCAD software in C:\Program Files\AutoCAD 2008, you need to select <strong>C:\Program Files\AutoCAD 2008</strong>.</li>
- <li><strong>Select your license type.</strong> You need to select your license type for your old AutoCAD software in the Longbow Converter window. You can choose between <strong>Standalone</strong> or <strong>Network</strong>. For example, if you have a standalone license for your old AutoCAD software, you need to select <strong>Standalone</strong>.</li>
- <li><strong>Select your license key.</strong> You need to enter or paste your license key for your old AutoCAD software in the Longbow Converter window. You can find your license key in your email confirmation or in your Autodesk account. For example, if your license key is XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX</p>
- <h2>Upgrade to a newer version of AutoCAD.</h2>
- <p>You can upgrade to a newer version of AutoCAD that is compatible with your new Windows system. You can buy a new license or subscribe to an Autodesk plan that suits your needs and budget. You can enjoy the latest features and improvements of AutoCAD, as well as the updates and support from Autodesk. You can also maintain compatibility with other software applications and hardware devices that require newer versions of AutoCAD.</p>
- <p>However, upgrading to a newer version of AutoCAD may also have some drawbacks. You may need to learn a new interface or features of AutoCAD that are different from your old version. You may also need to convert your old projects and files to the new format of AutoCAD. You may also need to invest in a more powerful computer or system that can run the newer version of AutoCAD.</p>
- <h2>Use a virtual machine or a compatibility mode.</h2>
- <p>You can use a virtual machine or a compatibility mode to run your old version of AutoCAD on your new Windows system. A virtual machine is a software that can create and run a simulated environment of another operating system on your computer. A compatibility mode is a feature that can make your software run as if it is on an older version of Windows.</p>
- <p>By using a virtual machine or a compatibility mode, you can install and run your old version of AutoCAD on your new Windows system without modifying the installer media or the registry entries. You can also avoid some errors or bugs that may occur when running old versions of AutoCAD on new Windows systems.</p>
- <p>However, using a virtual machine or a compatibility mode may also have some drawbacks. You may need to install and configure the virtual machine or the compatibility mode software on your computer. You may also need to allocate some disk space and memory for the virtual machine or the compatibility mode software. You may also experience some performance issues or lagging when running your old version of AutoCAD on the virtual machine or the compatibility mode.</p>
- <h2>Conclusion</h2>
- <p>In this article, we have discussed what AutoCAD Longbow Converter is, how it works, what are its benefits and drawbacks, and how to use it effectively. We have also warned you about the risks of using a crack or a keygen to activate Longbow Converter, and suggested some alternatives to Longbow Converter.</p>
- <p>We hope that this article has been informative and helpful for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!</p>
- <h2>Conclusion</h2>
- <p>In this article, we have discussed what AutoCAD Longbow Converter is, how it works, what are its benefits and drawbacks, and how to use it effectively. We have also warned you about the risks of using a crack or a keygen to activate Longbow Converter, and suggested some alternatives to Longbow Converter.</p>
- <p>We hope that this article has been informative and helpful for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!</p> 3cee63e6c2<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Customize Your Weapons Units and Rules in Zombie Combat Simulator APK.md DELETED
@@ -1,80 +0,0 @@
- <br />
- <h1>Zombie Combat Simulator APK: A Highly Customizable Game for Zombie Lovers</h1>
- <p>If you are a fan of zombie games, you might want to check out Zombie Combat Simulator APK, a free Android game that lets you create your own scenarios and rules for fighting against hordes of undead. Zombie Combat Simulator APK is not just another zombie shooting game. It is a highly customizable game that gives you full control over how you want to play. You can choose from three different modes: sandbox mode, third person shooter, and multiplayer. Each mode offers a unique and fun experience that will keep you entertained for hours.</p>
- <h2>zombie combat simulator apk</h2><br /><p><b><b>DOWNLOAD</b> &#10084;&#10084;&#10084; <a href="https://jinyurl.com/2uNMGB">https://jinyurl.com/2uNMGB</a></b></p><br /><br />
- <h2>Sandbox mode</h2>
- <p>In sandbox mode, you can create units in any location and decide on their weapons, damage resistance, and HP. You can also modify the rules of the game, such as unit spawn system that can automatically spawn soldiers and zombies. As well as time limit, conditions of victory and defeat. In short, you can make the gameplay a significant change by modifying the rules. Of course, if you do not want to do it by yourself I also gave you some preset ways to play;</p>
- <p>Some of the preset ways to play in sandbox mode are:</p>
- <ul>
- <li>Zombie infection: One soldier will be infected randomly, and he will infect other soldiers.</li>
- <li>Night combat: You will face more zombies in the night than in the day.</li>
- <li>Headshot only: Zombies will only die from headshots.</li>
- <li>Custom battle: You can set up your own teams and fight against each other.</li>
- </ul>
- <h2>Third person shooter</h2>
- <p>In third person shooter mode, you can control a soldier by yourself and fight against zombies. You can use different weapons and items, such as rifles, pistols, grenades, medkits, etc. You can also switch between first person and third person view depending on your preference. You will have to survive as long as possible and kill as many zombies as you can.</p>
- <h2>Multiplayer</h2>
- <p>In multiplayer mode, you can play online, LAN, or offline with other players. You can join or create a room and invite your friends. You can also communicate with other players using voice chat or text chat. You can choose from different maps and modes, such as team deathmatch, capture the flag, zombie mode, etc. You can also customize your character's appearance and equipment.</p>
- <p>zombie combat simulator mod apk<br />
- zombie combat simulator download<br />
- zombie combat simulator online<br />
- zombie combat simulator pc<br />
- zombie combat simulator sandbox mode<br />
- zombie combat simulator multiplayer<br />
- zombie combat simulator cheats<br />
- zombie combat simulator hack<br />
- zombie combat simulator game<br />
- zombie combat simulator android<br />
- zombie combat simulator ios<br />
- zombie combat simulator apk pure<br />
- zombie combat simulator apk mod menu<br />
- zombie combat simulator apk obb<br />
- zombie combat simulator apk latest version<br />
- zombie combat simulator apk unlimited money<br />
- zombie combat simulator apk revdl<br />
- zombie combat simulator apk happymod<br />
- zombie combat simulator apk rexdl<br />
- zombie combat simulator apk offline<br />
- zombie combat simulator gameplay<br />
- zombie combat simulator review<br />
- zombie combat simulator tips<br />
- zombie combat simulator tricks<br />
- zombie combat simulator guide<br />
- zombie combat simulator best weapons<br />
- zombie combat simulator best units<br />
- zombie combat simulator best maps<br />
- zombie combat simulator best mods<br />
- zombie combat simulator best settings<br />
- zombie combat simulator free download<br />
- zombie combat simulator free play<br />
- zombie combat simulator free online<br />
- zombie combat simulator free apk mod<br />
- zombie combat simulator free coins<br />
- zombie combat simulator update<br />
- zombie combat simulator new version<br />
- zombie combat simulator new map<br />
- zombie combat simulator new weapons<br />
- zombie combat simulator new units<br />
- zombie combat simulator custom map<br />
- zombie combat simulator custom weapons<br />
- zombie combat simulator custom units<br />
- zombie combat simulator custom mode<br />
- zombie combat simulator custom rules<br />
- zombie combat simulator lan mode<br />
- zombie combat simulator coop mode<br />
- zombie combat simulator pvp mode</p>
- <h2>Conclusion</h2>
- <p>Zombie Combat Simulator APK is a game that offers a lot of possibilities and fun for zombie lovers. You can create your own scenarios and rules, control a soldier or a zombie, play with other players online or offline, and enjoy a realistic and immersive graphics and sound effects. If you are looking for a game that will challenge your creativity and skills, download Zombie Combat Simulator APK today and have fun!</p>
- <h2>FAQs</h2>
- <h3>What are the system requirements for Zombie Combat Simulator APK?</h3>
- <p>Zombie Combat Simulator APK requires Android 5.1 or higher and at least 329 MB of free storage space.</p>
- <h3>How to install Zombie Combat Simulator APK on your device?</h3>
- <p>You can download Zombie Combat Simulator APK from [Zombie Combat Simulator APK (Android Game) - Free Download](^1^) or [ Zombie Combat Simulator APK from [Zombie Combat Simulator - Apps on Google Play](^2^). You will need to allow installation from unknown sources in your device settings. Then, you can open the APK file and follow the instructions to install the game.</p>
- <h3>Is Zombie Combat Simulator APK safe and virus-free?</h3>
- <p>Yes, Zombie Combat Simulator APK is safe and virus-free. It has been scanned by various antivirus programs and has no malware or spyware. However, you should always download the APK file from a trusted source, such as the ones we provided above.</p>
- <h3>How to update Zombie Combat Simulator APK to the latest version?</h3>
- <p>You can update Zombie Combat Simulator APK to the latest version by downloading the new APK file from the same source you used before. You can also check for updates within the game settings. You will need to uninstall the previous version before installing the new one.</p>
- <h3>How to contact the developer of Zombie Combat Simulator APK for feedback or support?</h3>
- <p>You can contact the developer of Zombie Combat Simulator APK by sending an email to [email protected]. You can also visit their Facebook page at [Airblade Studio - Home | Facebook] or their YouTube channel at [Airblade Studio - YouTube].</p> 401be4b1e0<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Mp3 and Lyrics of Worship Rise by Travis Greene A Song that will Make You Pour Your Love on God.md DELETED
@@ -1,115 +0,0 @@
-
- <h1>How to Download \"Worship Rise\" by Travis Greene</h1>
- <p>If you are looking for a song that will lift your spirit and inspire you to worship God, you might want to check out \"Worship Rise\" by Travis Greene. This song is a powerful anthem that declares God's glory and invites His presence. In this article, we will show you how to download this song on your computer or smartphone, either by buying it from iTunes or Google Play Music, or by using a free music downloader like OKmusi. Whether you want to listen to it offline, add it to your playlist, or share it with others, we have got you covered.</p>
- <h2>How to Buy and Download \"Worship Rise\" from iTunes</h2>
- <p>iTunes is one of the most popular and convenient ways to buy and download music online. You can use iTunes on your Windows or Mac computer, or on your iPhone or iPad. Here are the steps to follow:</p>
- <h2>download worship rise by travis greene</h2><br /><p><b><b>Download Zip</b> <a href="https://jinyurl.com/2uNSSO">https://jinyurl.com/2uNSSO</a></b></p><br /><br />
- <ol>
- <li>Install iTunes if you don't have it already. You can download it from <a href="(^1^)">Apple's website</a>.</li>
- <li>Open iTunes and sign in with your Apple ID. If you don't have one, you can create one for free.</li>
- <li>Click on Store at the top of the iTunes window.</li>
- <li>Click on the search bar in the upper-right corner and type in \"Worship Rise\" or \"Travis Greene\".</li>
- <li>Select the song or album you want to buy from the results. You can preview the song by clicking on the play button.</li>
- <li>Click on the price button next to the song or album. You may need to enter your Apple ID password or use Touch ID to confirm your purchase.</li>
- <li>The song or album will be added to your library and downloaded to your computer or device. You can find it under Music > Library > Songs or Albums.</li>
- </ol>
- <p>To view and transfer the music files on your computer, you can do the following:</p>
- <ul>
- <li>On Windows, go to C:\Users\username\Music\iTunes\iTunes Media\Music\Travis Greene.</li>
- <li>On Mac, go to /Users/username/Music/iTunes/iTunes Media/Music/Travis Greene.</li>
- <li>You can copy, move, or delete the music files as you wish. You can also sync them with other devices using iTunes.</li>
- </ul>
- <h2>How to Buy and Download \"Worship Rise\" from Google Play Music</h2>
- <p>Google Play Music is another popular and convenient way to buy and download music online. You can use Google Play Music on your Android device or on any web browser. Here are the steps to follow:</p>
- <ol>
- <li>Open Google Play Music on your Android device or go to <a href="(^2^)">play.google.com/music</a> on any web browser.</li>
- <li>Sign in with your Google account. If you don't have one, you can create one for free.</li>
- <li>Tap or click on the search bar at the top and type in \"Worship Rise\" or \"Travis Greene\".</ <li>Select the song or album you want to buy from the results. You can preview the song by tapping or clicking on the play button.</li>
- <li>Tap or click on the price button next to the song or album. You may need to enter your Google account password or use your fingerprint to confirm your purchase.</li>
- <li>The song or album will be added to your library and downloaded to your device or computer. You can find it under Library > Songs or Albums.</li>
- </ol>
- <p>To view and transfer the music files on your device or computer, you can do the following:</p>
- <ul>
- <li>On Android, go to Settings > Storage > Music > Travis Greene.</li>
- <li>On web browser, go to <a href="">play.google.com/music/listen#/manager</a> and select Travis Greene from the list of artists.</li>
- <li>You can copy, move, or delete the music files as you wish. You can also sync them with other devices using Google Play Music.</li>
- </ul>
- <h2>How to Download \"Worship Rise\" for Free from YouTube or SoundCloud</h2>
- <p>If you don't want to spend money on buying music online, you might be tempted to use free music download sites or apps that let you download music from YouTube or SoundCloud. However, you should be aware of the pros and cons of using these methods.</p>
- <p>The pros are:</p>
- <ul>
- <li>You can download any song or video you want for free.</li>
- <li>You can choose the quality and format of the music files.</li>
- <li>You can access a large variety of music genres and artists.</li>
- </ul>
- <p>The cons are:</p>
- <ul>
- <li>You may violate the copyright laws and risk legal consequences.</li>
- <li>You may download malware or viruses along with the music files.</li>
- <li>You may compromise the sound quality and integrity of the music files.</li>
- </ul>
- <p>If you still want to download music for free from YouTube or SoundCloud, we recommend using OKmusi, a safe and reliable online music downloader that works on any device. Here are the steps to follow:</p>
- <p>download worship rise by travis greene lyrics and mp3<br />
- download worship rise by travis greene crossover album<br />
- download worship rise by travis greene audio music<br />
- download worship rise by travis greene gospel song<br />
- download worship rise by travis greene live performance<br />
- download worship rise by travis greene instrumental<br />
- download worship rise by travis greene video<br />
- download worship rise by travis greene chords and lyrics<br />
- download worship rise by travis greene free mp3<br />
- download worship rise by travis greene on itunes<br />
- download worship rise by travis greene song ministration<br />
- download worship rise by travis greene ft tasha cobbs<br />
- download worship rise by travis greene mp4<br />
- download worship rise by travis greene karaoke<br />
- download worship rise by travis greene ringtone<br />
- download worship rise by travis greene piano tutorial<br />
- download worship rise by travis greene cover version<br />
- download worship rise by travis greene remix<br />
- download worship rise by travis greene sheet music<br />
- download worship rise by travis greene spotify<br />
- download worship rise by travis greene youtube<br />
- download worship rise by travis greene guitar tabs<br />
- download worship rise by travis greene medley<br />
- download worship rise by travis greene acapella<br />
- download worship rise by travis greene extended version<br />
- download worship rise by travis greene official video<br />
- download worship rise by travis greene bass cover<br />
- download worship rise by travis greene drum cover<br />
- download worship rise by travis greene saxophone cover<br />
- download worship rise by travis greene violin cover<br />
- download worship rise by travis greene amazon music<br />
- download worship rise by travis greene soundcloud<br />
- download worship rise by travis greene deezer<br />
- download worship rise by travis greene apple music<br />
- download worship rise by travis greene tidal<br />
- download worship rise by travis greene napster<br />
- download worship rise by travis greene pandora<br />
- download worship rise by travis greene iheartradio<br />
- download worship rise by travis greene shazam<br />
- download worship rise by travis greene genius lyrics<br />
- download worship rise by travis greene azlyrics<br />
- download worship rise by travis g</p>
- <ol>
- <li>Go to <a href="">okmusi.com</a> on any web browser.</li>
- <li>Copy and paste the URL of the YouTube or SoundCloud song or video you want to download into the search box and click on Download.</li>
- <li>Select the quality and format of the music file you want to download and click on Download again.</li>
- <li>The music file will be downloaded to your device or computer. You can find it under Downloads folder.</li>
- </ol>
- <p>To view and transfer the music files on your device or computer, you can do the same as above for iTunes or Google Play Music.</p>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to download \"Worship Rise\" by Travis Greene on your computer or smartphone, either by buying it from iTunes or Google Play Music, or by using a free music downloader like OKmusi. We hope you have found this article helpful and informative. Now that you have downloaded this song, why not listen to it and experience its uplifting message? You can also share it with your friends and family and let them know how much you love this song. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!</p>
- <h2>FAQs</h2>
- <h3>What genre is \"Worship Rise\"?</h3>
- <p>\"Worship Rise\" is a gospel song that blends contemporary Christian music with soul and R&B influences.</p>
- <h3>How long is \"Worship Rise\"?</h3>
- <p>\"Worship Rise\" is 5 minutes and 11 seconds long.</p>
- <h3>What album is \"Worship Rise\" from?</h3>
- <p>\"Worship Rise\" is from Travis Greene's third studio album, Crossover: Live from Music City, which was released in 2017.</p>
- <h3>What are some other songs by Travis Greene?</h3>
- <p>Some other popular songs by Travis Greene are \"Intentional\", \"Made a Way\", \"You Waited\", \"Good and Loved\", and \"Won't Let Go\".</p>
- <h3>Is it legal to download music for free?</h3>
- <p>It depends on the source and the usage of the music. Generally, downloading music for free from YouTube or SoundCloud without the permission of the artist or the owner is illegal and unethical. However, some artists may allow their fans to download their music for free for personal use only. You should always check the terms and conditions of the source before downloading any music for free.</p> 401be4b1e0<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Pokemon Fire Red XY GBA ROM and Enjoy the Best of Both Worlds.md DELETED
@@ -1,133 +0,0 @@
-
- <h1>Pokemon Fire Red XY: A New Twist on a Classic Game</h1>
- <p>If you are a fan of the Pokemon series, you probably have played or heard of Pokemon Fire Red, the remake of the original Pokemon Red and Green games for the Game Boy Advance. But did you know that there is a fan-made hack of Pokemon Fire Red that adds new features, Pokemon, and challenges? It's called Pokemon Fire Red XY, and it's one of the best Pokemon hacks you can play on your GBA emulator. In this article, we will tell you everything you need to know about Pokemon Fire Red XY, including what it is, how to download it, why you should play it, and how to use cheats to enhance your gaming experience.</p>
- <h2>pokemon fire red xy gba rom download</h2><br /><p><b><b>DOWNLOAD</b> >> <a href="https://jinyurl.com/2uNO9K">https://jinyurl.com/2uNO9K</a></b></p><br /><br />
- <h2>What is Pokemon Fire Red XY?</h2>
- <p>Pokemon Fire Red XY is a Pokemon Fire Red hack of Gameboy handheld. The storyline of it is so good and you will be impressed with it. With the very first beta release, you can play 180 minutes and enjoy new features. The hack is based on the Pokemon XY anime series, which follows the adventures of Ash Ketchum and his friends in the Kalos region. You will encounter familiar characters, locations, and events from the anime, as well as some original ones created by the hacker. You will also be able to catch and train many Pokemon that come from both FireRed and LeafGreen versions, as well as some from other generations.</p>
- <h3>Features of Pokemon Fire Red XY</h3>
- <p>Some of the features that make Pokemon Fire Red XY stand out from other hacks are:</p>
- <ul>
- <li>You can catch many Pokemon that come from both of FireRed and LeafGreen versions, as well as some from other generations.</li>
- <li>You can choose between three starter Pokemon: Froakie, Fennekin, or Chespin.</li>
- <li>You can customize your character's appearance, gender, and name.</li>
- <li>You can explore new areas and events based on the Pokemon XY anime series.</li>
- <li>You can battle against stronger gyms and trainers, as well as some familiar faces from the anime.</li>
- <li>You can use updated graphics and sound effects that enhance the quality of the game.</li>
- </ul>
- <h3>How to download Pokemon Fire Red XY</h3>
- <p>To play Pokemon Fire Red XY, you will need two things: a GBA emulator and a ROM file. A GBA emulator is a software that allows you to run GBA games on your computer or mobile device. A ROM file is a digital copy of the game that you can load on your emulator. Here are the steps to download Pokemon Fire Red XY:</p>
- <ol>
- <li>Download a GBA emulator that is compatible with your device. Some popular ones are Visual Boy Advance for Windows, My Boy for Android, and GBA4iOS for iOS.</li>
- <li>Download the ROM file of Pokemon Fire Red XY from this link: . Make sure you have enough space on your device to store it.</li>
- <li>Extract the ZIP file using a file manager or an extractor app. You should see a file with a .gba extension.</li>
- <li>Open your GBA emulator and locate the ROM file. Tap or click on it to start playing.</li>
- </ol>
- <h2>Why you should play Pokemon Fire Red XY</h2>
- <p>Pokemon Fire Red XY is not just another hack of Pokemon Fire Red. It's a whole new experience that will challenge and entertain you in many ways. Here are some reasons why you should play Pokemon Fire Red XY:</p>
- <h3>Enhanced graphics and gameplay</h3>
- <p>Pokemon Fire Red XY features updated graphics compared to the original Pokemon Fire Red game, with more detailed and colorful <p>sprites, backgrounds, and animations. The game also runs smoothly and has no major bugs or glitches. The gameplay is also improved, with more balanced and diverse battles, new moves and abilities, and more options to customize your Pokemon and your character.</p>
- <p>pokemon fire red xy gba rom hack download<br />
- pokemon fire red xy gba rom free download<br />
- pokemon fire red xy gba rom english download<br />
- pokemon fire red xy gba rom zip download<br />
- pokemon fire red xy gba rom mediafire download<br />
- pokemon fire red xy gba rom mega download<br />
- pokemon fire red xy gba rom cheats download<br />
- pokemon fire red xy gba rom android download<br />
- pokemon fire red xy gba rom online download<br />
- pokemon fire red xy gba rom emulator download<br />
- pokemon fire red xy gba rom patched download<br />
- pokemon fire red xy gba rom full version download<br />
- pokemon fire red xy gba rom latest version download<br />
- pokemon fire red xy gba rom beta version download<br />
- pokemon fire red xy gba rom final version download<br />
- pokemon fire red xy gba rom walkthrough download<br />
- pokemon fire red xy gba rom gameplay download<br />
- pokemon fire red xy gba rom review download<br />
- pokemon fire red xy gba rom features download<br />
- pokemon fire red xy gba rom informations download<br />
- pokemon fire red xy gba rom screenshots download<br />
- pokemon fire red xy gba rom trailer download<br />
- pokemon fire red xy gba rom video download<br />
- pokemon fire red xy gba rom guide download<br />
- pokemon fire red xy gba rom tips download<br />
- pokemon fire red xy gba rom best team download<br />
- pokemon fire red xy gba rom starter choices download<br />
- pokemon fire red xy gba rom legendary locations download<br />
- pokemon fire red xy gba rom shiny hunting download<br />
- pokemon fire red xy gba rom nuzlocke challenge download<br />
- pokemon fire red xy gba rom randomizer mode download<br />
- pokemon fire red xy gba rom hard mode download<br />
- pokemon fire red xy gba rom easy mode download<br />
- pokemon fire red xy gba rom expert mode download<br />
- pokemon fire red xy gba rom master mode download<br />
- pokemon fire red xy gba rom new region download<br />
- pokemon fire red xy gba rom new story download<br />
- pokemon fire red xy gba rom new characters download<br />
- pokemon fire red xy gba rom new graphics download<br />
- pokemon fire red xy gba rom new music download<br />
- pokemon fire red xy gba rom new moves download<br />
- pokemon fire red xy gba rom new abilities download<br />
- pokemon fire red xy gba rom new items download<br />
- pokemon fire red xy gba rom new events download<br />
- pokemon fire red xy gba rom new quests download<br />
- pokemon fire red xy gba rom new challenges download<br />
- pokemon fire red xy gba rom new secrets download<br />
- pokemon fire red xy gba rom new updates download</p>
- <h3>More Pokemon to catch and train</h3>
- <p>Pokemon Fire Red XY gives you the opportunity to catch and train many Pokemon that come from both of FireRed and LeafGreen versions, as well as some from other generations. You can choose between three starter Pokemon: Froakie, Fennekin, or Chespin, which are the same as the ones in the Pokemon XY anime series. You can also encounter many other Pokemon from different regions, such as Kalos, Hoenn, Sinnoh, and Unova. You can even find some legendary and mythical Pokemon, such as Mewtwo, Zygarde, and Diancie. You will have a lot of fun collecting and training your Pokemon team.</p>
- <h3>Challenging difficulty and events</h3>
- <p>Pokemon Fire Red XY is not a game for the faint of heart. It is designed to challenge even the most experienced Pokemon players. The game has a higher difficulty level than the original Pokemon Fire Red game, with stronger gyms and trainers, more complex puzzles and obstacles, and more surprises and twists. The game also follows the storyline of the Pokemon XY anime series, which means you will face some familiar foes, such as Team Rocket, Team Flare, and Lysandre. You will also encounter some new events and characters that are exclusive to the hack. You will never get bored or feel like you are playing the same game again.</p>
- <h2>How to use cheats in Pokemon Fire Red XY</h2>
- <p>If you want to spice up your gaming experience even more, you can use cheats in Pokemon Fire Red XY. Cheats are codes that you can enter in your emulator to modify certain aspects of the game, such as getting unlimited money, items, or rare candies, or catching any Pokemon you want. Cheats can make the game easier or harder, depending on how you use them. Here are some tips on how to use cheats in Pokemon Fire Red XY:</p>
- <h3>Benefits of using cheats</h3>
- <p>Some of the benefits of using cheats in Pokemon Fire Red XY are:</p>
- <ul>
- <li>You can save time and effort by getting what you need or want without having to grind or search for it.</li>
- <li>You can experiment with different Pokemon combinations and strategies by catching any Pokemon you want or changing their stats and moves.</li>
- <li>You can explore hidden areas and secrets that are normally inaccessible or hard to find.</li>
- <li>You can have more fun and excitement by creating your own challenges or scenarios with cheats.</li>
- </ul>
- <h3>Types of cheats available</h3>
- <p>Some of the types of cheats available for Pokemon Fire Red XY are:</p>
- <ul>
- <li>Master codes: These are codes that enable the use of other cheats. You need to enter them first before entering any other cheat code.</li>
- <li>Money codes: These are codes that give you unlimited money or change the amount of money you have.</li>
- <li>Item codes: These are codes that give you unlimited items or change the quantity of items you have.</li>
- <li>Rare candy codes: These are codes that give you unlimited rare candies or change the number of rare candies you have.</li>
- <li>Pokemon codes: These are codes that let you catch any Pokemon you want or change the Pokemon in your party or PC.</li>
- <li>Stat codes: These are codes that change the stats of your Pokemon, such as level, HP, attack, defense, speed, etc.</li>
- <li>Move codes: These are codes that change the moves of your Pokemon, such as type, power, accuracy, PP, etc.</li>
- <li>Miscellaneous codes: These are codes that affect other aspects of the game, such as time, weather, events, etc.</li>
- </ul>
- <h3>How to enter cheats in an emulator</h3>
- <p>To enter cheats in an emulator, you need to follow these steps:</p>
- <ol>
- <li>Open your GBA emulator and load your Pokemon Fire Red XY ROM file.</li>
- <li>Go to the menu bar and click on "Cheats" or "Options".</li>
- <li>Select "Cheat list" or "Cheat codes".</li>
- <li>Click on "Gameshark" or "Codebreaker".</li>
- <li>Type or paste the cheat code in the box. Make sure it is correct and compatible with your emulator.</li>
- <li>Click on "OK" or "Add".</li>
- <li>Repeat steps 4-6 for any other cheat code you want to use.</li>
- <li>Enjoy your game with cheats enabled.</li>
- </ol>
- <h2> <h2>Conclusion</h2>
- <p>Pokemon Fire Red XY is a fantastic hack of Pokemon Fire Red that offers a new twist on a classic game. It has many features that make it unique and enjoyable, such as enhanced graphics and gameplay, more Pokemon to catch and train, challenging difficulty and events, and cheats to customize your experience. If you are looking for a fun and exciting Pokemon game to play on your GBA emulator, you should definitely give Pokemon Fire Red XY a try. You won't regret it.</p>
- <p>To download Pokemon Fire Red XY, click on this link: . You will need a GBA emulator and a ROM file to play it. Follow the instructions in this article to install and run the game. Have fun and catch 'em all!</p>
- <h3>FAQs</h3>
- <p>Here are some frequently asked questions about Pokemon Fire Red XY:</p>
- <ul>
- <li>Q: Is Pokemon Fire Red XY an official game?</li>
- <li>A: No, Pokemon Fire Red XY is a fan-made hack of Pokemon Fire Red. It is not endorsed or affiliated with Nintendo or Game Freak.</li>
- <li>Q: How long is Pokemon Fire Red XY?</li>
- <li>A: Pokemon Fire Red XY has the same length as the original Pokemon Fire Red game, which is about 30-40 hours of gameplay. However, it may take longer depending on your playstyle and how much you explore.</li>
- <li>Q: Can I trade or battle with other players in Pokemon Fire Red XY?</li>
- <li>A: Yes, you can trade or battle with other players who have the same ROM file and emulator as you. You will need to use a link cable or a wireless adapter to connect your devices.</li>
- <li>Q: Can I play Pokemon Fire Red XY on my GBA console?</li>
- <li>A: No, you can only play Pokemon Fire Red XY on a GBA emulator. You will need a computer or a mobile device to run the emulator and the ROM file.</li>
- <li>Q: Are there any bugs or glitches in Pokemon Fire Red XY?</li>
- <li>A: Pokemon Fire Red XY is a stable and polished hack that has no major bugs or glitches. However, if you encounter any minor issues, you can report them to the hacker or the community.</li>
- </ul></p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Enjoy the Ultimate Parking Simulation with Parking Master Multiplayer Mod APK.md DELETED
@@ -1,102 +0,0 @@
1
- <br />
2
- <br> - Benefits of downloading the mod apk version: Unlimited money, free purchase, unlocked cars and levels, etc. <br> - How to download and install the mod apk version: Step by step guide with screenshots <br> - Conclusion: Summarize the main points and invite the reader to try the game | | H2: Introduction | - Explain what Car Parking Master Multiplayer is: A realistic car parking and driving game with various modes and features <br> - Explain why you should download the mod apk version: To enjoy the game without any limitations or restrictions | | H3: Benefits of downloading the mod apk version | - List the benefits of downloading the mod apk version: <br> - Unlimited money, gems, and coins to buy and upgrade cars <br> - Free purchase of any items in the game store <br> - Unlocked cars and levels to access all the content in the game <br> - Multiplayer mode to compete with other players from around the world | | H4: Unlimited money, gems, and coins | - Explain how the mod apk version gives you unlimited money, gems, and coins to buy and upgrade cars <br> - Give some examples of cars that you can buy and upgrade with the mod apk version | | H4: Free purchase of any items | - Explain how the mod apk version allows you to purchase any items in the game store for free <br> - Give some examples of items that you can purchase for free with the mod apk version | | H4: Unlocked cars and levels | - Explain how the mod apk version unlocks all the cars and levels in the game <br> - Give some examples of cars and levels that you can access with the mod apk version | | H4: Multiplayer mode | - Explain how the mod apk version enables you to play multiplayer mode with other players from around the world <br> - Give some examples of multiplayer modes that you can play with the mod apk version | | H2: How to download and install the mod apk version | - Provide a step by step guide on how to download and install the mod apk version on your Android device <br> - Include screenshots for each step to make it easier for the reader to follow | | H3: Step 1: Download the mod apk file | - Provide a link to download the mod apk file from a reliable source (e.g. [1](https://m.happymod.com/real-car-parking-master-multiplayer-car-game-app-mod/com.SpektraGames.ParkingMasterMultiplayerCarGame/)) <br> - Explain how to download the file by clicking on the download button and waiting for it to finish | | H3: Step 2: Enable unknown sources | - Explain how to enable unknown sources on your Android device by going to settings > security > unknown sources and toggling it on <br> - Explain why this is necessary to install apps from sources other than Google Play Store | | H3: Step 3: Install the mod apk file | - Explain how to install the mod apk file by locating it in your downloads folder and tapping on it <br> - Explain how to follow the installation prompts and grant permissions if needed | | H3: Step 4: Launch the game and enjoy | - Explain how to launch the game by tapping on its icon on your home screen or app drawer <br> - Explain how to enjoy the game with all its features and benefits | | H2: Conclusion | - Summarize the main points of the article: What is Car Parking Master Multiplayer, why you should download the mod apk version, and how to do it <br> - Invite the reader to try the game and share their feedback or questions | Table 2: Article with HTML formatting <h1>How to Download Car Parking Master Multiplayer Mod APK</h1>
3
- <p>If you are a fan of car parking and driving games, you might have heard of Car Parking Master Multiplayer. This is a realistic car parking game that lets you test your driving skills in various modes and features. You can play car parking, drift, free drive, checkpoint, time trial, parkour, or multiplayer mode with more than 60 cars with interior views and real graphics. You can also customize your car with different options such as tires, spoilers, paint, suspensions, and more.</p>
4
- <h2>download car parking master multiplayer mod apk</h2><br /><p><b><b>Download File</b> --->>> <a href="https://jinyurl.com/2uNUut">https://jinyurl.com/2uNUut</a></b></p><br /><br />
5
- <p>However, if you want to enjoy this game without any limitations or restrictions, you might want to download Car Parking Master Multiplayer Mod APK. This is a modified version of the original game that gives you <p>some amazing benefits such as unlimited money, free purchase, unlocked cars and levels, and multiplayer mode. In this article, we will show you what are the benefits of downloading the mod apk version, and how to do it step by step with screenshots. So, let's get started!</p>
6
- <h2>Benefits of downloading the mod apk version</h2>
7
- <p>Downloading the mod apk version of Car Parking Master Multiplayer will give you several advantages over the original game. Here are some of them:</p>
8
- <h4>Unlimited money, gems, and coins</h4>
9
- <p>One of the benefits of downloading the mod apk version is that you will get unlimited money, gems, and coins to buy and upgrade cars in the game. You can use these resources to purchase any car you want, from sports cars to SUVs, from classic cars to supercars. You can also use them to upgrade your car's performance, such as speed, acceleration, handling, and braking. With the mod apk version, you can enjoy the game without worrying about running out of money or gems.</p>
10
- <p>Some examples of cars that you can buy and upgrade with the mod apk version are:</p>
11
- <ul>
12
- <li>Lamborghini Aventador: A powerful and luxurious supercar that can reach speeds of over 350 km/h.</li>
13
- <li>Ford Mustang: A legendary muscle car that has a distinctive sound and style.</li>
14
- <li>Toyota Supra: A popular sports car that is known for its high performance and customization options.</li>
15
- <li>Jeep Wrangler: A rugged and versatile SUV that can handle any terrain and weather.</li>
16
- </ul>
17
- <h4>Free purchase of any items</h4>
- <p>Another benefit of downloading the mod apk version is that you will be able to purchase any items in the game store for free. You can buy anything you want, from car parts to accessories, from stickers to paint jobs. You can also buy premium items that are normally only available with real money, such as VIP membership, special cars, or exclusive offers. With the mod apk version, you can customize your car however you like without spending a dime.</p>
- <p>Some examples of items that you can purchase for free with the mod apk version are:</p>
- <ul>
- <li>Tires: You can choose from different types of tires, such as racing tires, off-road tires, or drift tires.</li>
- <li>Spoilers: You can add spoilers to your car to improve its aerodynamics and stability.</li>
- <li>Paint: You can change the color of your car or apply different patterns and designs.</li>
- <li>Suspensions: You can adjust the height and stiffness of your car's suspensions to suit your driving style.</li>
- </ul>
- <h4>Unlocked cars and levels</h4>
- <p>A third benefit of downloading the mod apk version is that you will have access to all the cars and levels in the game. You will not have to wait for them to unlock or complete certain tasks or challenges to get them. You can play any car or level you want, from the easiest to the hardest, from the simplest to the most complex. You can also switch between different modes and features without any restrictions. With the mod apk version, you can explore all the content in the game without any limitations.</p>
- <p>Some examples of cars and levels that you can access with the mod apk version are:</p>
- <ul>
- <li>Parkour mode: A mode where you have to park your car on various platforms and obstacles without falling or crashing.</li>
- <li>Drift mode: A mode where you have to drift your car around corners and curves without losing control or speed.</li>
- <li>Checkpoint mode: A mode where you have to reach certain checkpoints within a time limit without hitting any obstacles or traffic.</li>
- <li>Free drive mode: A mode where you can drive your car freely around the city or countryside without any rules or objectives.</li>
- </ul>
- <h4>Multiplayer mode</h4>
- <p>A fourth benefit of downloading the mod apk version is that you will be able to play multiplayer mode with other players from around the world. You can join online rooms or create your own and invite your friends. You can compete with other players in different modes and features, such as car parking, drift, time trial, parkour, or free drive. You can also chat with other players and share your tips and tricks. With the mod apk version, you can have fun and challenge yourself with other players online.</p>
- <p>Some examples of multiplayer modes that you can play with the mod apk version are:</p>
- <ul>
- <li>Car parking mode: A mode where you have to park your car in a designated spot before your opponent does.</li>
- <li>Drift mode: A mode where you have to drift your car more than your opponent in a given time or distance.</li>
- <li>Time trial mode: A mode where and tap on it. You will see a pop-up window that asks you to confirm the installation. Tap on Install and wait for the installation to finish. You might see another pop-up window that asks you to grant permissions to the app. Tap on Accept or Allow to grant the permissions. You will see a message that says the app has been installed successfully.</p>
- <p><img src="https://i.imgur.com/6qXZ1cO.png" alt="Install mod apk file"></p>
- <h3>Step 4: Launch the game and enjoy</h3>
- <p>The final step is to launch the game and enjoy it with all its features and benefits. To do this, tap on the game's icon on your home screen or app drawer. You will see the game's logo and loading screen. Wait for the game to load and start. You will see the game's main menu with different options such as Play, Garage, Store, Settings, etc. Tap on Play and choose your preferred mode and level. You will see your car and the parking spot or track. Use the controls on the screen to drive and park your car. You will also see your money, gems, and coins on the top of the screen. You can use them to buy and upgrade cars or items in the game store. You can also switch to multiplayer mode and play with other players online.</p>
- <p><img src="https://i.imgur.com/0RQnJyM.png" alt="Launch the game"></p>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to download Car Parking Master Multiplayer Mod APK, a modified version of the original game that gives you unlimited money, free purchase, unlocked cars and levels, and multiplayer mode. We have also explained what are the benefits of downloading the mod apk version, and how to install it on your Android device step by step with screenshots. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.</p>
- <p>Now that you know how to download Car Parking Master Multiplayer Mod APK, why not give it a try and see for yourself how fun and challenging this game is? You can enjoy realistic car parking and driving with various modes and features, customize your car with different options, and compete with other players from around the world. Download Car Parking Master Multiplayer Mod APK today and have a blast!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Car Parking Master Multiplayer Mod APK:</p>
- <h4>Is Car Parking Master Multiplayer Mod APK safe to download and install?</h4>
- <p>Yes, Car Parking Master Multiplayer Mod APK is safe to download and install on your Android device. The mod apk file that we have provided in this article is from a reliable source that has verified and tested it for viruses and malware. However, you should always be careful when downloading and installing apps from unknown sources, as some of them might contain harmful or malicious code. Always use a trusted antivirus or security app to scan any files before opening them.</p>
- <h4>Is Car Parking Master Multiplayer Mod APK compatible with my device?</h4>
- <p>Car Parking Master Multiplayer Mod APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices might have different specifications or settings that might affect the performance or compatibility of the game. If you encounter any issues or errors while playing the game, you can try adjusting the graphics quality or resolution in the game settings, or clearing the cache or data of the game in your device settings.</p>
- <h4>How can I update Car Parking Master Multiplayer Mod APK?</h4>
- <p>Car Parking Master Multiplayer Mod APK is updated regularly by its developers to fix any bugs or glitches, add new features or content, or improve the gameplay or graphics. However, since this is a modified version of the original game, you might not be able to update it automatically through Google Play Store or other sources. To update Car Parking Master Multiplayer Mod APK, you will have to download and install the latest version of the mod apk file from the same source that you used before. You can also check this article for any updates or changes in the mod apk version.</p>
- <h4>Can I play Car Parking Master Multiplayer Mod APK offline?</h4>
- <p>Yes, you can play Car Parking Master Multiplayer Mod APK offline without an internet connection. You can play any mode or level in offline mode, except for multiplayer mode which requires an internet connection to connect with other players online. You can also access all the features and benefits of the mod apk version in offline mode, such as unlimited money, free purchase, unlocked cars and levels, etc.</p>
- <h4>Can I play Car Parking Master Multiplayer Mod APK with my friends?</h4>
- <p>Yes, you can play Car Parking Master Multiplayer Mod APK with your friends online. You can invite your friends to join your online room or join theirs. You can also chat with your friends and other players in the game. To play with your friends online, you will need an internet connection and a valid account in the game. You can create an account or log in with your Facebook or Google account in the game settings.</p> 401be4b1e0<br />
- <br />
- <br />
spaces/232labs/VToonify/vtoonify/model/raft/core/corr.py DELETED
@@ -1,91 +0,0 @@
- import torch
- import torch.nn.functional as F
- from model.raft.core.utils.utils import bilinear_sampler, coords_grid
- 
- try:
-     import alt_cuda_corr
- except ImportError:
-     # alt_cuda_corr is an optional compiled extension; fall through if it is absent
-     pass
- 
- 
- class CorrBlock:
-     def __init__(self, fmap1, fmap2, num_levels=4, radius=4):
-         self.num_levels = num_levels
-         self.radius = radius
-         self.corr_pyramid = []
- 
-         # all pairs correlation
-         corr = CorrBlock.corr(fmap1, fmap2)
- 
-         batch, h1, w1, dim, h2, w2 = corr.shape
-         corr = corr.reshape(batch*h1*w1, dim, h2, w2)
- 
-         self.corr_pyramid.append(corr)
-         for i in range(self.num_levels-1):
-             corr = F.avg_pool2d(corr, 2, stride=2)
-             self.corr_pyramid.append(corr)
- 
-     def __call__(self, coords):
-         r = self.radius
-         coords = coords.permute(0, 2, 3, 1)
-         batch, h1, w1, _ = coords.shape
- 
-         out_pyramid = []
-         for i in range(self.num_levels):
-             corr = self.corr_pyramid[i]
-             dx = torch.linspace(-r, r, 2*r+1, device=coords.device)
-             dy = torch.linspace(-r, r, 2*r+1, device=coords.device)
-             delta = torch.stack(torch.meshgrid(dy, dx), axis=-1)
- 
-             centroid_lvl = coords.reshape(batch*h1*w1, 1, 1, 2) / 2**i
-             delta_lvl = delta.view(1, 2*r+1, 2*r+1, 2)
-             coords_lvl = centroid_lvl + delta_lvl
- 
-             corr = bilinear_sampler(corr, coords_lvl)
-             corr = corr.view(batch, h1, w1, -1)
-             out_pyramid.append(corr)
- 
-         out = torch.cat(out_pyramid, dim=-1)
-         return out.permute(0, 3, 1, 2).contiguous().float()
- 
-     @staticmethod
-     def corr(fmap1, fmap2):
-         batch, dim, ht, wd = fmap1.shape
-         fmap1 = fmap1.view(batch, dim, ht*wd)
-         fmap2 = fmap2.view(batch, dim, ht*wd)
- 
-         corr = torch.matmul(fmap1.transpose(1,2), fmap2)
-         corr = corr.view(batch, ht, wd, 1, ht, wd)
-         return corr / torch.sqrt(torch.tensor(dim).float())
- 
- 
- class AlternateCorrBlock:
-     def __init__(self, fmap1, fmap2, num_levels=4, radius=4):
-         self.num_levels = num_levels
-         self.radius = radius
- 
-         self.pyramid = [(fmap1, fmap2)]
-         for i in range(self.num_levels):
-             fmap1 = F.avg_pool2d(fmap1, 2, stride=2)
-             fmap2 = F.avg_pool2d(fmap2, 2, stride=2)
-             self.pyramid.append((fmap1, fmap2))
- 
-     def __call__(self, coords):
-         coords = coords.permute(0, 2, 3, 1)
-         B, H, W, _ = coords.shape
-         dim = self.pyramid[0][0].shape[1]
- 
-         corr_list = []
-         for i in range(self.num_levels):
-             r = self.radius
-             fmap1_i = self.pyramid[0][0].permute(0, 2, 3, 1).contiguous()
-             fmap2_i = self.pyramid[i][1].permute(0, 2, 3, 1).contiguous()
- 
-             coords_i = (coords / 2**i).reshape(B, 1, H, W, 2).contiguous()
-             corr, = alt_cuda_corr.forward(fmap1_i, fmap2_i, coords_i, r)
-             corr_list.append(corr.squeeze(1))
- 
-         corr = torch.stack(corr_list, dim=1)
-         corr = corr.reshape(B, -1, H, W)
-         return corr / torch.sqrt(torch.tensor(dim).float())
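For orientation, a minimal sketch of driving CorrBlock with an identity flow; the feature shapes are illustrative and the import path is assumed from this repo's layout, not stated in the file:

import torch
from model.raft.core.corr import CorrBlock  # path assumed from the repo layout above

# Toy 1/8-resolution feature maps: batch=1, dim=256, H=46, W=62
fmap1 = torch.randn(1, 256, 46, 62)
fmap2 = torch.randn(1, 256, 46, 62)
block = CorrBlock(fmap1, fmap2, num_levels=4, radius=4)

# Identity flow: every pixel looks up its own location in fmap2
ys, xs = torch.meshgrid(torch.arange(46.0), torch.arange(62.0))
coords = torch.stack([xs, ys], dim=0).unsqueeze(0)  # (1, 2, H, W), (x, y) order

features = block(coords)
print(features.shape)  # (1, num_levels * (2*radius+1)**2, 46, 62) == (1, 324, 46, 62)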
spaces/3B-Group/ConvRe-Leaderboard/src/demo.py DELETED
@@ -1,83 +0,0 @@
- import os
- import random
- from threading import Thread
- from typing import Iterable
- 
- import torch
- from huggingface_hub import HfApi
- from datasets import load_dataset
- from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
- 
- 
- ground_truth = ""
- 
- TOKEN = os.environ.get("HF_TOKEN", None)
- 
- type2dataset = {
-     "re2text-easy": load_dataset('3B-Group/ConvRe', "en-re2text", token=TOKEN, split="prompt1"),
-     "re2text-hard": load_dataset('3B-Group/ConvRe', "en-re2text", token=TOKEN, split="prompt4"),
-     "text2re-easy": load_dataset('3B-Group/ConvRe', "en-text2re", token=TOKEN, split="prompt1"),
-     "text2re-hard": load_dataset('3B-Group/ConvRe', "en-text2re", token=TOKEN, split="prompt3")
- }
- 
- model_id = "meta-llama/Llama-2-7b-chat-hf"
- tokenizer = AutoTokenizer.from_pretrained(model_id, token=TOKEN)
- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, token=TOKEN, device_map="auto").eval()
- 
- 
- # model_id = "google/flan-t5-base"
- # tokenizer = T5Tokenizer.from_pretrained(model_id)
- # model = T5ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
- 
- # type2dataset = {}
- 
- 
- def generate(input_text, sys_prompt, temperature, max_new_tokens) -> Iterable[str]:
-     # Llama-2 chat template; yields the accumulated output as it streams in
-     sys_prompt = f'''[INST] <<SYS>>
- {sys_prompt}
- <</SYS>>
- 
- '''
-     input_str = sys_prompt + input_text + " [/INST]"
- 
-     input_ids = tokenizer(input_str, return_tensors="pt").to('cuda')
- 
-     streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
- 
-     generate_kwargs = dict(
-         input_ids,
-         streamer=streamer,
-         max_new_tokens=max_new_tokens,
-         do_sample=True,
-         temperature=float(temperature)
-     )
-     t = Thread(target=model.generate, kwargs=generate_kwargs)
-     t.start()
- 
-     # Pull the generated text from the streamer, and update the model output.
-     model_output = ""
-     for new_text in streamer:
-         model_output += new_text
-         yield model_output
-     return model_output
- 
- 
- def random_examples(dataset_key) -> str:
-     # target_dataset = type2dataset[f"{task.lower()}-{type.lower()}"]
-     target_dataset = type2dataset[dataset_key]
- 
-     idx = random.randint(0, len(target_dataset) - 1)
-     item = target_dataset[idx]
- 
-     global ground_truth
-     ground_truth = item['answer']
- 
-     return item['query']
- 
- 
- def return_ground_truth() -> str:
-     correct_answer = ground_truth
-     return correct_answer
- 
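A hedged sketch of consuming the streaming generator above; the prompt text is made up, and each yielded value is the full output so far rather than a delta:

for partial in generate(
        "Does (A, parent_of, B) entail (B, child_of, A)?",  # hypothetical query
        "You are a careful reasoner.",
        temperature=0.7,
        max_new_tokens=64):
    print(partial, end="\r")  # overwrite the line as the answer grows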
spaces/801artistry/RVC801/lib/globals/globals.py DELETED
@@ -1,5 +0,0 @@
- DoFormant: bool = False
- Quefrency: float = 8.0
- Timbre: float = 1.2
- 
- NotesOrHertz: bool = False
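These module-level flags are presumably mutated at runtime by the UI and read elsewhere; a sketch, where the import path merely mirrors the file's location and is an assumption:

from lib.globals import globals as rvc_globals

rvc_globals.DoFormant = True   # enable formant shifting
rvc_globals.Quefrency = 9.5    # consumers that read the module attribute see the update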
spaces/801artistry/RVC801/tools/infer/train-index-v2.py DELETED
@@ -1,79 +0,0 @@
- """
- Format: cid maps directly onto the index's built-in position; aid does not fit, so it is looked up via a dict. There are only ~50k entries anyway.
- """
- import os
- import traceback
- import logging
- 
- logger = logging.getLogger(__name__)
- 
- from multiprocessing import cpu_count
- 
- import faiss
- import numpy as np
- from sklearn.cluster import MiniBatchKMeans
- 
- # ########### If starting from raw features, save them first
- n_cpu = 0
- if n_cpu == 0:
-     n_cpu = cpu_count()
- inp_root = r"./logs/anz/3_feature768"
- npys = []
- listdir_res = list(os.listdir(inp_root))
- for name in sorted(listdir_res):
-     phone = np.load("%s/%s" % (inp_root, name))
-     npys.append(phone)
- big_npy = np.concatenate(npys, 0)
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- logger.debug(big_npy.shape)  # (6196072, 192)#fp32#4.43G
- if big_npy.shape[0] > 2e5:
-     # if(1):
-     info = "Trying kmeans: reducing %s samples to 10k centers." % big_npy.shape[0]
-     logger.info(info)
-     try:
-         big_npy = (
-             MiniBatchKMeans(
-                 n_clusters=10000,
-                 verbose=True,
-                 batch_size=256 * n_cpu,
-                 compute_labels=False,
-                 init="random",
-             )
-             .fit(big_npy)
-             .cluster_centers_
-         )
-     except Exception:
-         info = traceback.format_exc()
-         logger.warning(info)
- 
- np.save("tools/infer/big_src_feature_mi.npy", big_npy)
- 
- ################## train+add
- # big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy")
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- index = faiss.index_factory(768, "IVF%s,Flat" % n_ivf)  # mi
- logger.info("Training...")
- index_ivf = faiss.extract_index_ivf(index)  #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
-     index, "tools/infer/trained_IVF%s_Flat_baseline_src_feat_v2.index" % (n_ivf)
- )
- logger.info("Adding...")
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
-     index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
-     index, "tools/infer/added_IVF%s_Flat_mi_baseline_src_feat.index" % (n_ivf)
- )
- """
- Sizes (all FP32):
- big_src_feature 2.95G
- (3098036, 256)
- big_emb 4.43G
- (6196072, 192)
- big_emb is twice as large because the features are repeated and then pitch is appended
- 
- """
spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/base.py DELETED
@@ -1,34 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- 
- from abc import ABC, abstractmethod
- import typing as tp
- 
- import torch
- import torch.nn as nn
- 
- 
- FeatureMapType = tp.List[torch.Tensor]
- LogitsType = torch.Tensor
- MultiDiscriminatorOutputType = tp.Tuple[tp.List[LogitsType], tp.List[FeatureMapType]]
- 
- 
- class MultiDiscriminator(ABC, nn.Module):
-     """Base implementation for discriminators composed of sub-discriminators acting at different scales.
-     """
-     def __init__(self):
-         super().__init__()
- 
-     @abstractmethod
-     def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType:
-         ...
- 
-     @property
-     @abstractmethod
-     def num_discriminators(self) -> int:
-         """Number of discriminators.
-         """
-         ...
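A minimal concrete subclass to illustrate the contract (logits plus per-discriminator feature maps); the two-scale architecture here is invented for the sketch and is not from audiocraft:

import torch
import torch.nn as nn
from audiocraft.adversarial.discriminators.base import (  # path assumed from the file above
    MultiDiscriminator, MultiDiscriminatorOutputType)

class ToyMultiScaleDiscriminator(MultiDiscriminator):
    """Two identical stacks, one on the raw wave and one on a downsampled copy."""
    def __init__(self):
        super().__init__()
        self.discriminators = nn.ModuleList(
            [nn.Sequential(nn.Conv1d(1, 16, 15, padding=7), nn.LeakyReLU(0.2),
                           nn.Conv1d(16, 1, 3, padding=1))
             for _ in range(2)])
        self.pool = nn.AvgPool1d(4, stride=2, padding=2)

    def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType:
        logits, fmaps = [], []
        for i, disc in enumerate(self.discriminators):
            inp = x if i == 0 else self.pool(x)
            feat = disc[0](inp)                     # keep one intermediate feature map
            logits.append(disc[2](disc[1](feat)))   # per-scale logits
            fmaps.append([feat])
        return logits, fmaps

    @property
    def num_discriminators(self) -> int:
        return len(self.discriminators)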
spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/txt_processors/base_text_processor.py DELETED
@@ -1,47 +0,0 @@
- from data_gen.tts.data_gen_utils import is_sil_phoneme
- 
- REGISTERED_TEXT_PROCESSORS = {}
- 
- def register_txt_processors(name):
-     def _f(cls):
-         REGISTERED_TEXT_PROCESSORS[name] = cls
-         return cls
- 
-     return _f
- 
- 
- def get_txt_processor_cls(name):
-     return REGISTERED_TEXT_PROCESSORS.get(name, None)
- 
- 
- class BaseTxtProcessor:
-     @staticmethod
-     def sp_phonemes():
-         return ['|']
- 
-     @classmethod
-     def process(cls, txt, preprocess_args):
-         raise NotImplementedError
- 
-     @classmethod
-     def postprocess(cls, txt_struct, preprocess_args):
-         # remove sil phoneme in head and tail
-         while len(txt_struct) > 0 and is_sil_phoneme(txt_struct[0][0]):
-             txt_struct = txt_struct[1:]
-         while len(txt_struct) > 0 and is_sil_phoneme(txt_struct[-1][0]):
-             txt_struct = txt_struct[:-1]
-         if preprocess_args['with_phsep']:
-             txt_struct = cls.add_bdr(txt_struct)
-         if preprocess_args['add_eos_bos']:
-             txt_struct = [["<BOS>", ["<BOS>"]]] + txt_struct + [["<EOS>", ["<EOS>"]]]
-         return txt_struct
- 
-     @classmethod
-     def add_bdr(cls, txt_struct):
-         txt_struct_ = []
-         for i, ts in enumerate(txt_struct):
-             txt_struct_.append(ts)
-             if i != len(txt_struct) - 1 and \
-                     not is_sil_phoneme(txt_struct[i][0]) and not is_sil_phoneme(txt_struct[i + 1][0]):
-                 txt_struct_.append(['|', ['|']])
-         return txt_struct_
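A sketch of how a processor is registered and looked up through the decorator above; the 'en_words' name, the naive word/character tokenization, and the return convention of process are all assumptions for illustration:

@register_txt_processors('en_words')
class WordTxtProcessor(BaseTxtProcessor):
    @classmethod
    def process(cls, txt, preprocess_args):
        # one entry per word: [word, phoneme_list]; here each letter stands in for a phoneme
        txt_struct = [[w, list(w)] for w in txt.strip().split()]
        return cls.postprocess(txt_struct, preprocess_args), txt

cls_ = get_txt_processor_cls('en_words')
txt_struct, txt = cls_.process("hello world", {'with_phsep': True, 'add_eos_bos': True})
# txt_struct now starts with <BOS>, has '|' boundaries between words, and ends with <EOS>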
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/utils.py DELETED
@@ -1,189 +0,0 @@
- """Utils for monoDepth."""
- import sys
- import re
- import numpy as np
- import cv2
- import torch
- 
- 
- def read_pfm(path):
-     """Read pfm file.
- 
-     Args:
-         path (str): path to file
- 
-     Returns:
-         tuple: (data, scale)
-     """
-     with open(path, "rb") as file:
- 
-         color = None
-         width = None
-         height = None
-         scale = None
-         endian = None
- 
-         header = file.readline().rstrip()
-         if header.decode("ascii") == "PF":
-             color = True
-         elif header.decode("ascii") == "Pf":
-             color = False
-         else:
-             raise Exception("Not a PFM file: " + path)
- 
-         dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii"))
-         if dim_match:
-             width, height = list(map(int, dim_match.groups()))
-         else:
-             raise Exception("Malformed PFM header.")
- 
-         scale = float(file.readline().decode("ascii").rstrip())
-         if scale < 0:
-             # little-endian
-             endian = "<"
-             scale = -scale
-         else:
-             # big-endian
-             endian = ">"
- 
-         data = np.fromfile(file, endian + "f")
-         shape = (height, width, 3) if color else (height, width)
- 
-         data = np.reshape(data, shape)
-         data = np.flipud(data)
- 
-         return data, scale
- 
- 
- def write_pfm(path, image, scale=1):
-     """Write pfm file.
- 
-     Args:
-         path (str): path to file
-         image (array): data
-         scale (int, optional): Scale. Defaults to 1.
-     """
- 
-     with open(path, "wb") as file:
-         color = None
- 
-         if image.dtype.name != "float32":
-             raise Exception("Image dtype must be float32.")
- 
-         image = np.flipud(image)
- 
-         if len(image.shape) == 3 and image.shape[2] == 3:  # color image
-             color = True
-         elif (
-             len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1
-         ):  # greyscale
-             color = False
-         else:
-             raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
- 
-         # parenthesized so both branches are encoded, not just the "Pf\n" one
-         file.write(("PF\n" if color else "Pf\n").encode())
-         file.write("%d %d\n".encode() % (image.shape[1], image.shape[0]))
- 
-         endian = image.dtype.byteorder
- 
-         if endian == "<" or endian == "=" and sys.byteorder == "little":
-             scale = -scale
- 
-         file.write("%f\n".encode() % scale)
- 
-         image.tofile(file)
- 
- 
- def read_image(path):
-     """Read image and output RGB image (0-1).
- 
-     Args:
-         path (str): path to file
- 
-     Returns:
-         array: RGB image (0-1)
-     """
-     img = cv2.imread(path)
- 
-     if img.ndim == 2:
-         img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- 
-     img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
- 
-     return img
- 
- 
- def resize_image(img):
-     """Resize image and make it fit for network.
- 
-     Args:
-         img (array): image
- 
-     Returns:
-         tensor: data ready for network
-     """
-     height_orig = img.shape[0]
-     width_orig = img.shape[1]
- 
-     if width_orig > height_orig:
-         scale = width_orig / 384
-     else:
-         scale = height_orig / 384
- 
-     height = (np.ceil(height_orig / scale / 32) * 32).astype(int)
-     width = (np.ceil(width_orig / scale / 32) * 32).astype(int)
- 
-     img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
- 
-     img_resized = (
-         torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()
-     )
-     img_resized = img_resized.unsqueeze(0)
- 
-     return img_resized
- 
- 
- def resize_depth(depth, width, height):
-     """Resize depth map and bring to CPU (numpy).
- 
-     Args:
-         depth (tensor): depth
-         width (int): image width
-         height (int): image height
- 
-     Returns:
-         array: processed depth
-     """
-     depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
- 
-     depth_resized = cv2.resize(
-         depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
-     )
- 
-     return depth_resized
- 
- def write_depth(path, depth, bits=1):
-     """Write depth map to pfm and png file.
- 
-     Args:
-         path (str): filepath without extension
-         depth (array): depth
-     """
-     write_pfm(path + ".pfm", depth.astype(np.float32))
- 
-     depth_min = depth.min()
-     depth_max = depth.max()
- 
-     max_val = (2**(8*bits))-1
- 
-     if depth_max - depth_min > np.finfo("float").eps:
-         out = max_val * (depth - depth_min) / (depth_max - depth_min)
-     else:
-         out = np.zeros(depth.shape, dtype=depth.dtype)  # was depth.type, not an ndarray attribute
- 
-     if bits == 1:
-         cv2.imwrite(path + ".png", out.astype("uint8"))
-     elif bits == 2:
-         cv2.imwrite(path + ".png", out.astype("uint16"))
- 
-     return
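Taken together, these helpers form the usual MiDaS-style inference loop; a sketch, where `model` stands in for any depth network producing a 1x1xH'xW' prediction (an assumption, not something this file provides):

import torch

img = read_image("input.jpg")                        # HxWx3 RGB in [0, 1]
sample = resize_image(img)                           # 1x3xH'xW', sides multiples of 32
with torch.no_grad():
    prediction = model(sample)                       # hypothetical depth network
depth = resize_depth(prediction, img.shape[1], img.shape[0])
write_depth("out/depth", depth, bits=2)              # writes out/depth.pfm and 16-bit out/depth.png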
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb128_coslr-90e_in21k.py DELETED
@@ -1,11 +0,0 @@
- _base_ = [
-     '../_base_/models/resnet50.py', '../_base_/datasets/imagenet21k_bs128.py',
-     '../_base_/schedules/imagenet_bs1024_coslr.py',
-     '../_base_/default_runtime.py'
- ]
- 
- # model settings
- model = dict(head=dict(num_classes=21843))
- 
- # runtime settings
- train_cfg = dict(by_epoch=True, max_epochs=90)
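The resolved config can be inspected without launching training; a sketch using mmengine, where the relative path is an assumption about the working directory:

from mmengine.config import Config

cfg = Config.fromfile('configs/resnet/resnet50_8xb128_coslr-90e_in21k.py')
print(cfg.model.head.num_classes)   # 21843, the ImageNet-21k label count
print(cfg.train_cfg.max_epochs)     # 90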
spaces/Ababababababbababa/AraPoet/README.md DELETED
@@ -1,14 +0,0 @@
- ---
- title: AraPoet
- emoji: ✍️
- colorFrom: green
- colorTo: blue
- sdk: gradio
- sdk_version: 3.18.0
- app_file: app.py
- pinned: false
- license: gpl-3.0
- duplicated_from: bkhmsi/AraPoet
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/seanet.py DELETED
@@ -1,258 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- 
- import typing as tp
- 
- import numpy as np
- import torch.nn as nn
- 
- from .conv import StreamableConv1d, StreamableConvTranspose1d
- from .lstm import StreamableLSTM
- 
- 
- class SEANetResnetBlock(nn.Module):
-     """Residual block from SEANet model.
- 
-     Args:
-         dim (int): Dimension of the input/output.
-         kernel_sizes (list): List of kernel sizes for the convolutions.
-         dilations (list): List of dilations for the convolutions.
-         activation (str): Activation function.
-         activation_params (dict): Parameters to provide to the activation function.
-         norm (str): Normalization method.
-         norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
-         causal (bool): Whether to use fully causal convolution.
-         pad_mode (str): Padding mode for the convolutions.
-         compress (int): Reduced dimensionality in residual branches (from Demucs v3).
-         true_skip (bool): Whether to use true skip connection or a simple
-             (streamable) convolution as the skip connection.
-     """
-     def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
-                  activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
-                  norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
-                  pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
-         super().__init__()
-         assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
-         act = getattr(nn, activation)
-         hidden = dim // compress
-         block = []
-         for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
-             in_chs = dim if i == 0 else hidden
-             out_chs = dim if i == len(kernel_sizes) - 1 else hidden
-             block += [
-                 act(**activation_params),
-                 StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
-                                  norm=norm, norm_kwargs=norm_params,
-                                  causal=causal, pad_mode=pad_mode),
-             ]
-         self.block = nn.Sequential(*block)
-         self.shortcut: nn.Module
-         if true_skip:
-             self.shortcut = nn.Identity()
-         else:
-             self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
-                                              causal=causal, pad_mode=pad_mode)
- 
-     def forward(self, x):
-         return self.shortcut(x) + self.block(x)
- 
- 
- class SEANetEncoder(nn.Module):
-     """SEANet encoder.
- 
-     Args:
-         channels (int): Audio channels.
-         dimension (int): Intermediate representation dimension.
-         n_filters (int): Base width for the model.
-         n_residual_layers (int): Number of residual layers.
-         ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
-             upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
-             that must match the decoder order. We use the decoder order as some models may only employ the decoder.
-         activation (str): Activation function.
-         activation_params (dict): Parameters to provide to the activation function.
-         norm (str): Normalization method.
-         norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
-         kernel_size (int): Kernel size for the initial convolution.
-         last_kernel_size (int): Kernel size for the final convolution.
-         residual_kernel_size (int): Kernel size for the residual layers.
-         dilation_base (int): How much to increase the dilation with each layer.
-         causal (bool): Whether to use fully causal convolution.
-         pad_mode (str): Padding mode for the convolutions.
-         true_skip (bool): Whether to use true skip connection or a simple
-             (streamable) convolution as the skip connection in the residual network blocks.
-         compress (int): Reduced dimensionality in residual branches (from Demucs v3).
-         lstm (int): Number of LSTM layers at the end of the encoder.
-         disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
-             For the encoder, it corresponds to the N first blocks.
-     """
-     def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
-                  ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
-                  norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
-                  last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
-                  pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
-                  disable_norm_outer_blocks: int = 0):
-         super().__init__()
-         self.channels = channels
-         self.dimension = dimension
-         self.n_filters = n_filters
-         self.ratios = list(reversed(ratios))
-         del ratios
-         self.n_residual_layers = n_residual_layers
-         self.hop_length = np.prod(self.ratios)
-         self.n_blocks = len(self.ratios) + 2  # first and last conv + residual blocks
-         self.disable_norm_outer_blocks = disable_norm_outer_blocks
-         assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
-             "Number of blocks for which to disable norm is invalid. " \
-             "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
- 
-         act = getattr(nn, activation)
-         mult = 1
-         model: tp.List[nn.Module] = [
-             StreamableConv1d(channels, mult * n_filters, kernel_size,
-                              norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
-                              norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
-         ]
-         # Downsample to raw audio scale
-         for i, ratio in enumerate(self.ratios):
-             block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
-             # Add residual layers
-             for j in range(n_residual_layers):
-                 model += [
-                     SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
-                                       dilations=[dilation_base ** j, 1],
-                                       norm=block_norm, norm_params=norm_params,
-                                       activation=activation, activation_params=activation_params,
-                                       causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
- 
-             # Add downsampling layers
-             model += [
-                 act(**activation_params),
-                 StreamableConv1d(mult * n_filters, mult * n_filters * 2,
-                                  kernel_size=ratio * 2, stride=ratio,
-                                  norm=block_norm, norm_kwargs=norm_params,
-                                  causal=causal, pad_mode=pad_mode),
-             ]
-             mult *= 2
- 
-         if lstm:
-             model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
- 
-         model += [
-             act(**activation_params),
-             StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
-                              norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
-                              norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
-         ]
- 
-         self.model = nn.Sequential(*model)
- 
-     def forward(self, x):
-         return self.model(x)
- 
- 
- class SEANetDecoder(nn.Module):
-     """SEANet decoder.
- 
-     Args:
-         channels (int): Audio channels.
-         dimension (int): Intermediate representation dimension.
-         n_filters (int): Base width for the model.
-         n_residual_layers (int): Number of residual layers.
-         ratios (Sequence[int]): kernel size and stride ratios.
-         activation (str): Activation function.
-         activation_params (dict): Parameters to provide to the activation function.
-         final_activation (str): Final activation function after all convolutions.
-         final_activation_params (dict): Parameters to provide to the activation function.
-         norm (str): Normalization method.
-         norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
-         kernel_size (int): Kernel size for the initial convolution.
-         last_kernel_size (int): Kernel size for the final convolution.
-         residual_kernel_size (int): Kernel size for the residual layers.
-         dilation_base (int): How much to increase the dilation with each layer.
-         causal (bool): Whether to use fully causal convolution.
-         pad_mode (str): Padding mode for the convolutions.
-         true_skip (bool): Whether to use true skip connection or a simple
-             (streamable) convolution as the skip connection in the residual network blocks.
-         compress (int): Reduced dimensionality in residual branches (from Demucs v3).
-         lstm (int): Number of LSTM layers at the end of the decoder.
-         disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
-             For the decoder, it corresponds to the N last blocks.
-         trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
-             If equal to 1.0, it means that all the trimming is done at the right.
-     """
-     def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
-                  ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
-                  final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
-                  norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
-                  last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
-                  pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
-                  disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
-         super().__init__()
-         self.dimension = dimension
-         self.channels = channels
-         self.n_filters = n_filters
-         self.ratios = ratios
-         del ratios
-         self.n_residual_layers = n_residual_layers
-         self.hop_length = np.prod(self.ratios)
-         self.n_blocks = len(self.ratios) + 2  # first and last conv + residual blocks
-         self.disable_norm_outer_blocks = disable_norm_outer_blocks
-         assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
-             "Number of blocks for which to disable norm is invalid. " \
-             "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
- 
-         act = getattr(nn, activation)
-         mult = int(2 ** len(self.ratios))
-         model: tp.List[nn.Module] = [
-             StreamableConv1d(dimension, mult * n_filters, kernel_size,
-                              norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
-                              norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
-         ]
- 
-         if lstm:
-             model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
- 
-         # Upsample to raw audio scale
-         for i, ratio in enumerate(self.ratios):
-             block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
-             # Add upsampling layers
-             model += [
-                 act(**activation_params),
-                 StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
-                                           kernel_size=ratio * 2, stride=ratio,
-                                           norm=block_norm, norm_kwargs=norm_params,
-                                           causal=causal, trim_right_ratio=trim_right_ratio),
-             ]
-             # Add residual layers
-             for j in range(n_residual_layers):
-                 model += [
-                     SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
-                                       dilations=[dilation_base ** j, 1],
-                                       activation=activation, activation_params=activation_params,
-                                       norm=block_norm, norm_params=norm_params, causal=causal,
-                                       pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
- 
-             mult //= 2
- 
-         # Add final layers
-         model += [
-             act(**activation_params),
-             StreamableConv1d(n_filters, channels, last_kernel_size,
-                              norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
-                              norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
-         ]
-         # Add optional final activation to decoder (eg. tanh)
-         if final_activation is not None:
-             final_act = getattr(nn, final_activation)
-             final_activation_params = final_activation_params or {}
-             model += [
-                 final_act(**final_activation_params)
-             ]
-         self.model = nn.Sequential(*model)
- 
-     def forward(self, z):
-         y = self.model(z)
-         return y
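A round-trip sketch with the defaults above; the exact output lengths assume the streamable padding keeps sizes exact, which holds here because 32000 is divisible by the hop length:

import torch

encoder = SEANetEncoder(channels=1, dimension=128, n_filters=32, ratios=[8, 5, 4, 2])
decoder = SEANetDecoder(channels=1, dimension=128, n_filters=32, ratios=[8, 5, 4, 2])

wav = torch.randn(2, 1, 32000)   # two mono clips
latent = encoder(wav)            # (2, 128, 100): hop length = 8 * 5 * 4 * 2 = 320
recon = decoder(latent)          # (2, 1, 32000)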
spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-jamo.py DELETED
@@ -1,3 +0,0 @@
- from PyInstaller.utils.hooks import copy_metadata
- 
- datas = copy_metadata('jamo')
spaces/AchyuthGamer/OpenGPT/client/js/theme-toggler.js DELETED
@@ -1,22 +0,0 @@
- var switch_theme_toggler = document.getElementById("theme-toggler");
- 
- switch_theme_toggler.addEventListener("change", toggleTheme);
- 
- function setTheme(themeName) {
-     localStorage.setItem("theme", themeName);
-     document.documentElement.className = themeName;
- }
- 
- function toggleTheme() {
-     var currentTheme = localStorage.getItem("theme");
-     var newTheme = currentTheme === "theme-dark" ? "theme-light" : "theme-dark";
- 
-     setTheme(newTheme);
-     switch_theme_toggler.checked = newTheme === "theme-dark";
- }
- 
- (function () {
-     var currentTheme = localStorage.getItem("theme") || "theme-dark";
-     setTheme(currentTheme);
-     switch_theme_toggler.checked = currentTheme === "theme-dark";
- })();
spaces/Adapter/CoAdapter/app.py DELETED
@@ -1,264 +0,0 @@
- # demo inspired by https://huggingface.co/spaces/lambdalabs/image-mixer-demo
- import argparse
- import copy
- import gradio as gr
- import torch
- from functools import partial
- from itertools import chain
- from torch import autocast
- from pytorch_lightning import seed_everything
- 
- from basicsr.utils import tensor2img
- from ldm.inference_base import DEFAULT_NEGATIVE_PROMPT, diffusion_inference, get_adapters, get_sd_models
- from ldm.modules.extra_condition import api
- from ldm.modules.extra_condition.api import ExtraCondition, get_cond_model
- from ldm.modules.encoders.adapter import CoAdapterFuser
- import os
- from huggingface_hub import hf_hub_url
- import subprocess
- import shlex
- import cv2
- 
- torch.set_grad_enabled(False)
- 
- urls = {
-     'TencentARC/T2I-Adapter': [
-         'third-party-models/body_pose_model.pth', 'third-party-models/table5_pidinet.pth',
-         'models/coadapter-canny-sd15v1.pth',
-         'models/coadapter-color-sd15v1.pth',
-         'models/coadapter-sketch-sd15v1.pth',
-         'models/coadapter-style-sd15v1.pth',
-         'models/coadapter-depth-sd15v1.pth',
-         'models/coadapter-fuser-sd15v1.pth',
-     ],
-     'runwayml/stable-diffusion-v1-5': ['v1-5-pruned-emaonly.ckpt'],
-     'andite/anything-v4.0': ['anything-v4.5-pruned.ckpt', 'anything-v4.0.vae.pt'],
- }
- 
- if not os.path.exists('models'):
-     os.mkdir('models')
- for repo in urls:
-     files = urls[repo]
-     for file in files:
-         url = hf_hub_url(repo, file)
-         name_ckp = url.split('/')[-1]
-         save_path = os.path.join('models', name_ckp)
-         if not os.path.exists(save_path):
-             subprocess.run(shlex.split(f'wget {url} -O {save_path}'))
- 
- supported_cond = ['style', 'color', 'sketch', 'depth', 'canny']
- 
- # config
- parser = argparse.ArgumentParser()
- parser.add_argument(
-     '--sd_ckpt',
-     type=str,
-     default='models/v1-5-pruned-emaonly.ckpt',
-     help='path to checkpoint of stable diffusion model, both .ckpt and .safetensor are supported',
- )
- parser.add_argument(
-     '--vae_ckpt',
-     type=str,
-     default=None,
-     help='vae checkpoint, anime SD models usually have separate vae ckpt that need to be loaded',
- )
- global_opt = parser.parse_args()
- global_opt.config = 'configs/stable-diffusion/sd-v1-inference.yaml'
- for cond_name in supported_cond:
-     setattr(global_opt, f'{cond_name}_adapter_ckpt', f'models/coadapter-{cond_name}-sd15v1.pth')
- global_opt.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- global_opt.max_resolution = 512 * 512
- global_opt.sampler = 'ddim'
- global_opt.cond_weight = 1.0
- global_opt.C = 4
- global_opt.f = 8
- # TODO: expose style_cond_tau to users
- global_opt.style_cond_tau = 1.0
- 
- # stable-diffusion model
- sd_model, sampler = get_sd_models(global_opt)
- # adapters and models to processing condition inputs
- adapters = {}
- cond_models = {}
- 
- torch.cuda.empty_cache()
- 
- # fuser is indispensable
- coadapter_fuser = CoAdapterFuser(unet_channels=[320, 640, 1280, 1280], width=768, num_head=8, n_layes=3)
- coadapter_fuser.load_state_dict(torch.load('models/coadapter-fuser-sd15v1.pth'))
- coadapter_fuser = coadapter_fuser.to(global_opt.device)
- 
- 
- def run(*args):
-     with torch.inference_mode(), \
-             sd_model.ema_scope(), \
-             autocast('cuda'):
- 
-         inps = []
-         for i in range(0, len(args) - 8, len(supported_cond)):
-             inps.append(args[i:i + len(supported_cond)])
- 
-         opt = copy.deepcopy(global_opt)
-         opt.prompt, opt.neg_prompt, opt.scale, opt.n_samples, opt.seed, opt.steps, opt.resize_short_edge, opt.cond_tau \
-             = args[-8:]
- 
-         ims1 = []
-         ims2 = []
-         for idx, (b, im1, im2, cond_weight) in enumerate(zip(*inps)):
-             if idx > 0:
-                 if b != 'Nothing' and (im1 is not None or im2 is not None):
-                     if im1 is not None:
-                         h, w, _ = im1.shape
-                     else:
-                         h, w, _ = im2.shape
-                     # break
-         # resize all the images to the same size
-         for idx, (b, im1, im2, cond_weight) in enumerate(zip(*inps)):
-             if idx == 0:
-                 ims1.append(im1)
-                 ims2.append(im2)
-                 continue
-             if b != 'Nothing':
-                 if im1 is not None:
-                     im1 = cv2.resize(im1, (w, h), interpolation=cv2.INTER_CUBIC)
-                 if im2 is not None:
-                     im2 = cv2.resize(im2, (w, h), interpolation=cv2.INTER_CUBIC)
-             ims1.append(im1)
-             ims2.append(im2)
- 
-         conds = []
-         activated_conds = []
-         for idx, (b, im1, im2, cond_weight) in enumerate(zip(*inps)):
-             cond_name = supported_cond[idx]
-             if b == 'Nothing':
-                 if cond_name in adapters:
-                     adapters[cond_name]['model'] = adapters[cond_name]['model'].cpu()
-             else:
-                 activated_conds.append(cond_name)
-                 if cond_name in adapters:
-                     adapters[cond_name]['model'] = adapters[cond_name]['model'].to(opt.device)
-                 else:
-                     adapters[cond_name] = get_adapters(opt, getattr(ExtraCondition, cond_name))
-                 adapters[cond_name]['cond_weight'] = cond_weight
- 
-                 process_cond_module = getattr(api, f'get_cond_{cond_name}')
- 
-                 if b == 'Image':
-                     if cond_name not in cond_models:
-                         cond_models[cond_name] = get_cond_model(opt, getattr(ExtraCondition, cond_name))
-                     conds.append(process_cond_module(opt, ims1[idx], 'image', cond_models[cond_name]))
-                 else:
-                     conds.append(process_cond_module(opt, ims2[idx], cond_name, None))
- 
-         features = dict()
-         for idx, cond_name in enumerate(activated_conds):
-             cur_feats = adapters[cond_name]['model'](conds[idx])
-             if isinstance(cur_feats, list):
-                 for i in range(len(cur_feats)):
-                     cur_feats[i] *= adapters[cond_name]['cond_weight']
-             else:
-                 cur_feats *= adapters[cond_name]['cond_weight']
-             features[cond_name] = cur_feats
- 
-         adapter_features, append_to_context = coadapter_fuser(features)
- 
-         output_conds = []
-         for cond in conds:
-             output_conds.append(tensor2img(cond, rgb2bgr=False))
- 
-         ims = []
-         seed_everything(opt.seed)
-         for _ in range(opt.n_samples):
-             result = diffusion_inference(opt, sd_model, sampler, adapter_features, append_to_context)
-             ims.append(tensor2img(result, rgb2bgr=False))
- 
-         # Clear GPU memory cache so less likely to OOM
-         torch.cuda.empty_cache()
-         return ims, output_conds
- 
- 
- def change_visible(im1, im2, val):
-     outputs = {}
-     if val == "Image":
-         outputs[im1] = gr.update(visible=True)
-         outputs[im2] = gr.update(visible=False)
-     elif val == "Nothing":
-         outputs[im1] = gr.update(visible=False)
-         outputs[im2] = gr.update(visible=False)
-     else:
-         outputs[im1] = gr.update(visible=False)
-         outputs[im2] = gr.update(visible=True)
-     return outputs
- 
- 
- DESCRIPTION = '# [CoAdapter (Composable Adapter)](https://github.com/TencentARC/T2I-Adapter)'
- 
- DESCRIPTION += f'<p>Gradio demo for **CoAdapter**: [[GitHub]](https://github.com/TencentARC/T2I-Adapter), [[Details]](https://github.com/TencentARC/T2I-Adapter/blob/main/docs/coadapter.md). If CoAdapter is helpful, please help to ⭐ the [Github Repo](https://github.com/TencentARC/T2I-Adapter) and recommend it to your friends 😊 </p>'
- 
- DESCRIPTION += f'<p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. <a href="https://huggingface.co/spaces/Adapter/T2I-Adapter?duplicate=true"><img style="display: inline; margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space" /></a></p>'
- # with gr.Blocks(title="CoAdapter", css=".gr-box {border-color: #8136e2}") as demo:
- with gr.Blocks(css='style.css') as demo:
-     gr.Markdown(DESCRIPTION)
- 
-     btns = []
-     ims1 = []
-     ims2 = []
-     cond_weights = []
- 
-     with gr.Row():
-         for cond_name in supported_cond:
-             with gr.Box():
-                 with gr.Column():
-                     if cond_name == 'style':
-                         btn1 = gr.Radio(
-                             choices=["Image", "Nothing"],
-                             label=f"Input type for {cond_name}",
-                             interactive=True,
-                             value="Nothing",
-                         )
-                     else:
-                         btn1 = gr.Radio(
-                             choices=["Image", cond_name, "Nothing"],
-                             label=f"Input type for {cond_name}",
-                             interactive=True,
-                             value="Nothing",
-                         )
-                     im1 = gr.Image(source='upload', label="Image", interactive=True, visible=False, type="numpy")
-                     im2 = gr.Image(source='upload', label=cond_name, interactive=True, visible=False, type="numpy")
-                     cond_weight = gr.Slider(
-                         label="Condition weight", minimum=0, maximum=5, step=0.05, value=1, interactive=True)
- 
-                     fn = partial(change_visible, im1, im2)
-                     btn1.change(fn=fn, inputs=[btn1], outputs=[im1, im2], queue=False)
- 
-                     btns.append(btn1)
-                     ims1.append(im1)
-                     ims2.append(im2)
-                     cond_weights.append(cond_weight)
- 
-     with gr.Column():
-         prompt = gr.Textbox(label="Prompt")
-         neg_prompt = gr.Textbox(label="Negative Prompt", value=DEFAULT_NEGATIVE_PROMPT)
-         scale = gr.Slider(label="Guidance Scale (Classifier free guidance)", value=7.5, minimum=1, maximum=20, step=0.1)
-         n_samples = gr.Slider(label="Num samples", value=1, minimum=1, maximum=1, step=1)
-         seed = gr.Slider(label="Seed", value=42, minimum=0, maximum=10000, step=1)
-         steps = gr.Slider(label="Steps", value=50, minimum=10, maximum=100, step=1)
-         resize_short_edge = gr.Slider(label="Image resolution", value=512, minimum=320, maximum=1024, step=1)
-         cond_tau = gr.Slider(
-             label="timestamp parameter that determines until which step the adapter is applied",
-             value=1.0,
-             minimum=0.1,
-             maximum=1.0,
-             step=0.05)
- 
-     with gr.Row():
-         submit = gr.Button("Generate")
-     output = gr.Gallery().style(grid=2, height='auto')
-     cond = gr.Gallery().style(grid=2, height='auto')
- 
-     inps = list(chain(btns, ims1, ims2, cond_weights))
-     inps.extend([prompt, neg_prompt, scale, n_samples, seed, steps, resize_short_edge, cond_tau])
-     submit.click(fn=run, inputs=inps, outputs=[output, cond])
- # demo.launch()
- demo.queue().launch(debug=True, server_name='0.0.0.0')
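The flat signature of run() is easiest to see spelled out; a sketch of the argument layout that Gradio assembles (all values are placeholders, and the call only works inside the app where the models are loaded):

import numpy as np

sketch_map = np.zeros((512, 512, 3), dtype=np.uint8)  # hypothetical pre-drawn sketch

# 5 radio choices, 5 reference images, 5 condition maps, 5 weights, then 8 options:
args = (
    ['Nothing', 'Nothing', 'sketch', 'Nothing', 'Nothing']  # one radio value per supported_cond
    + [None] * 5                                            # "Image" inputs (numpy arrays or None)
    + [None, None, sketch_map, None, None]                  # pre-extracted condition maps
    + [1.0] * 5                                             # per-condition weights
    + ['a cozy cabin', DEFAULT_NEGATIVE_PROMPT, 7.5, 1, 42, 50, 512, 1.0]
)
images, conds = run(*args)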
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/Sizer.d.ts DELETED
@@ -1,203 +0,0 @@
- // import * as Phaser from 'phaser';
- import BaseSizer from '../basesizer/BaseSizer.js';
- 
- export default Sizer;
- 
- declare namespace Sizer {
- 
-     type OrientationTypes = 0 | 1 | 'x' | 'y' | 'h' | 'v' | 'horizontal' | 'vertical' | 'left-to-right' | 'top-to-bottom';
- 
-     type AlignTypes = number | 'center' | 'left' | 'right' | 'top' | 'bottom' |
-         'left-top' | 'left-center' | 'left-bottom' |
-         'center-top' | 'center-center' | 'center-bottom' |
-         'right-top' | 'right-center' | 'right-bottom';
- 
-     type PaddingTypes = number |
-         {
-             left?: number,
-             right?: number,
-             top?: number,
-             bottom?: number
-         }
- 
-     interface IConfig extends BaseSizer.IConfig {
-         x?: number,
-         y?: number,
-         width?: number,
-         height?: number,
-         orientation?: OrientationTypes,
-         rtl?: boolean,
-         space?: {
-             left?: number, right?: number, top?: number, bottom?: number,
- 
-             item?: number,
-         },
-     }
- }
- 
- declare class Sizer extends BaseSizer {
- 
-     sizerChildren: Phaser.GameObjects.GameObject[];
- 
-     constructor(
-         scene: Phaser.Scene,
-         config?: Sizer.IConfig
-     );
- 
-     constructor(
-         scene: Phaser.Scene,
-         x: number, y: number,
-         config?: Sizer.IConfig
-     );
- 
-     constructor(
-         scene: Phaser.Scene,
-         x: number, y: number,
-         width: number, height: number,
-         config?: Sizer.IConfig
-     );
- 
-     constructor(
-         scene: Phaser.Scene,
-         x: number, y: number,
-         width: number, height: number,
-         orientation?: Sizer.OrientationTypes,
-         config?: Sizer.IConfig
-     );
- 
-     setOrientation(
-         orientation?: Sizer.OrientationTypes
-     ): this;
-     orientation: number;
- 
-     setRTL(enable?: boolean): this;
- 
-     setItemSpacing(value: number): this;
- 
-     add(
-         gameObject: Phaser.GameObjects.GameObject,
-         config?: {
-             proportion?: number,
-             align?: Sizer.AlignTypes,
-             padding?: Sizer.PaddingTypes,
-             expand?: boolean,
-             key?: string,
-             index?: number,
-             minWidth?: number,
-             minHeight?: number,
-             fitRatio?: number,
-         }
-     ): this;
- 
-     add(
-         gameObject: Phaser.GameObjects.GameObject,
-         proportion?: number,
-         align?: Sizer.AlignTypes,
-         padding?: Sizer.PaddingTypes,
-         expand?: boolean,
-         key?: string,
-         index?: number,
-         minWidth?: number,
-         minHeight?: number,
-         fitRatio?: number,
-     ): this;
- 
-     insert(
-         index: number,
-         gameObject: Phaser.GameObjects.GameObject,
-         config?: {
-             proportion?: number,
-             align?: Sizer.AlignTypes,
-             padding?: Sizer.PaddingTypes,
-             expand?: boolean,
-             key?: string,
-             minWidth?: number,
-             minHeight?: number
-         }
-     ): this;
- 
-     insert(
-         index: number,
-         gameObject: Phaser.GameObjects.GameObject,
-         proportion?: number,
-         align?: Sizer.AlignTypes,
-         padding?: Sizer.PaddingTypes,
-         expand?: boolean,
-         key?: string
-     ): this;
- 
-     insertAtPosition(
-         x: number,
-         y: number,
-         gameObject: Phaser.GameObjects.GameObject,
-         config?: {
-             proportion?: number,
-             align?: Sizer.AlignTypes,
-             padding?: Sizer.PaddingTypes,
-             expand?: boolean,
-             key?: string,
-             minWidth?: number,
-             minHeight?: number
-         }
-     ): this;
- 
-     insertAtPosition(
-         x: number,
-         y: number,
-         gameObject: Phaser.GameObjects.GameObject,
-         proportion?: number,
-         align?: Sizer.AlignTypes,
-         padding?: Sizer.PaddingTypes,
-         expand?: boolean,
-         key?: string
-     ): this;
- 
-     addSpace(
-         proportion?: number
-     ): this;
- 
-     insertSpace(
-         index?: number,
-         proportion?: number
-     ): this;
- 
-     remove(
-         gameObject: Phaser.GameObjects.GameObject,
-         destroyChild?: boolean
-     ): this;
- 
-     removeAll(
-         destroyChild?: boolean
-     ): this;
- 
-     clear(
-         destroyChild?: boolean
-     ): this;
- 
-     getChildAlign(
-         gameObject: Phaser.GameObjects.GameObject
-     ): Sizer.AlignTypes;
- 
-     setChildAlign(
-         gameObject: Phaser.GameObjects.GameObject,
-         align: Sizer.AlignTypes
-     ): this;
- 
-     getChildProportion(
-         gameObject: Phaser.GameObjects.GameObject
-     ): number;
- 
-     setChildProportion(
-         gameObject: Phaser.GameObjects.GameObject,
-         proportion: number
-     ): this;
- 
-     getChildExpand(
-         gameObject: Phaser.GameObjects.GameObject
-     ): boolean;
- 
-     setChildExpand(
-         gameObject: Phaser.GameObjects.GameObject,
-         expand: boolean
-     ): this;
- }
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/swipe/Swipe.js DELETED
@@ -1,2 +0,0 @@
1
- import { Swipe } from '../../../plugins/gestures.js';
2
- export default Swipe;
 
 
 
spaces/AlexWang/lama/models/ade20k/segm_lib/nn/__init__.py DELETED
@@ -1,2 +0,0 @@
1
- from .modules import *
2
- from .parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to
 
 
 
spaces/Alfasign/Midjourney_Prompt/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Midjourney Prompt
3
- emoji: 🐢
4
- colorFrom: blue
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.36.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r34.py DELETED
@@ -1,26 +0,0 @@
1
- from easydict import EasyDict as edict
2
-
3
- # make training faster
4
- # our RAM is 256G
5
- # mount -t tmpfs -o size=140G tmpfs /train_tmp
6
-
7
- config = edict()
8
- config.loss = "cosface"
9
- config.network = "r34"
10
- config.resume = False
11
- config.output = None
12
- config.embedding_size = 512
13
- config.sample_rate = 1.0
14
- config.fp16 = True
15
- config.momentum = 0.9
16
- config.weight_decay = 5e-4
17
- config.batch_size = 128
18
- config.lr = 0.1 # batch size is 512
19
-
20
- config.rec = "/train_tmp/glint360k"
21
- config.num_classes = 360232
22
- config.num_image = 17091657
23
- config.num_epoch = 20
24
- config.warmup_epoch = -1
25
- config.decay_epoch = [8, 12, 15, 18]
26
- config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
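
This config module follows the arcface_torch convention: each file under `configs/` is a plain Python module exporting a single EasyDict named `config`, so a training script can select a configuration by module name. The loader below is a hedged sketch of that pattern, not the repo's actual entry point; the `load_config` helper is hypothetical.

```python
# Hypothetical loader sketch for EasyDict-style config modules.
import importlib

def load_config(name: str):
    """Import configs/<name>.py and return the `config` EasyDict it exports."""
    module = importlib.import_module(f"configs.{name}")
    return module.config

cfg = load_config("glint360k_r34")
print(cfg.network, cfg.batch_size, cfg.num_classes)  # -> r34 128 360232
```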
 
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/train.py DELETED
@@ -1,73 +0,0 @@
1
- import os
2
-
3
- import torch
4
- from lightning_fabric import seed_everything
5
- import pytorch_lightning as pl
6
- from pytorch_lightning.loggers.wandb import WandbLogger
7
- import datetime
8
- import wandb
9
-
10
- from src.callback import CALLBACK_REGISTRY
11
-
12
- from src.loop.feature_training_loop import FeatureTrainingLoop
13
- from src.loop.style_training_loop import StyleTrainingLoop
14
- from src.model import MODEL_REGISTRY
15
- from src.utils.opt import Opts
16
- from src.utils.renderer import OctreeRender_trilinear_fast
17
-
18
-
19
- def train(config):
20
- model = MODEL_REGISTRY.get(config["model"]["name"])(config)
21
- epoch = config["trainer"]["n_iters"]
22
-
23
- time_str = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
24
- run_name = f"{config['global']['name']}-{time_str}"
25
-
26
- wandb_logger = WandbLogger(
27
- project=config["global"]["project_name"],
28
- name=run_name,
29
- save_dir=config["global"]["save_dir"],
30
- entity=config["global"]["username"],
31
- )
32
- wandb_logger.watch((model))
33
- wandb_logger.experiment.config.update(config)
34
-
35
- callbacks = [
36
- CALLBACK_REGISTRY.get(mcfg["name"])(**mcfg["params"])
37
- for mcfg in config["callbacks"]
38
- ]
39
-
40
- trainer = pl.Trainer(
41
- default_root_dir="src",
42
- check_val_every_n_epoch=config["trainer"]["evaluate_interval"],
43
- log_every_n_steps=config["trainer"]["log_interval"],
44
- enable_checkpointing=True,
45
- accelerator="gpu" if torch.cuda.is_available() else "auto",
46
- devices=-1,
47
- sync_batchnorm=True if torch.cuda.is_available() else False,
48
- precision=16 if config["trainer"]["use_fp16"] else 32,
49
- fast_dev_run=config["trainer"]["debug"],
50
- logger=wandb_logger,
51
- callbacks=callbacks,
52
- num_sanity_val_steps=-1, # Sanity full validation required for visualization callbacks
53
- deterministic=False,
54
- auto_lr_find=True,
55
- )
56
-
57
- print("Trainer: ", trainer)
58
- if cfg["model"]["type"] == "feature":
59
- trainer.fit_loop = FeatureTrainingLoop(epoch=epoch, cfg=config, renderer=OctreeRender_trilinear_fast)
60
- elif cfg["model"]["type"] == "style":
61
- trainer.fit_loop = StyleTrainingLoop(epoch=epoch, cfg=config, renderer=OctreeRender_trilinear_fast)
62
- else:
63
- raise NotImplementedError
64
-
65
- trainer.fit(model, ckpt_path=config["global"]["resume"])
66
- return os.path.join(config["global"]["save_dir"],
67
- config["global"]["project_name"], wandb.run.id, "checkpoints")
68
-
69
-
70
- if __name__ == "__main__":
71
- cfg = Opts(cfg="configs/style_baseline.yml").parse_args()
72
- seed_everything(seed=cfg["global"]["SEED"])
73
- train(cfg)
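
The script resolves models and callbacks by name through `MODEL_REGISTRY` and `CALLBACK_REGISTRY`, so the YAML config can pick implementations without importing them directly. The sketch below shows the usual shape of such a registry, assuming the repo's version behaves similarly; this `Registry` class is illustrative, not the actual `src/` implementation.

```python
# Hedged sketch of a name-to-class registry like MODEL_REGISTRY.
class Registry:
    def __init__(self) -> None:
        self._entries: dict[str, type] = {}

    def register(self, cls: type) -> type:
        # Usable as a decorator: @MODEL_REGISTRY.register
        self._entries[cls.__name__] = cls
        return cls

    def get(self, name: str) -> type:
        try:
            return self._entries[name]
        except KeyError:
            raise KeyError(f"{name!r} is not registered") from None

MODEL_REGISTRY = Registry()

@MODEL_REGISTRY.register
class StyleModel:
    def __init__(self, config):
        self.config = config

# Mirrors train.py's lookup: MODEL_REGISTRY.get(config["model"]["name"])(config)
model = MODEL_REGISTRY.get("StyleModel")({"model": {"name": "StyleModel"}})
```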
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/self_attention_guidance.md DELETED
@@ -1,35 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Self-Attention Guidance
14
-
15
- [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al.
16
-
17
- The abstract from the paper is:
18
-
19
- *Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.*
20
-
21
- You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).
22
-
23
- <Tip>
24
-
25
- Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
26
-
27
- </Tip>
28
-
29
- ## StableDiffusionSAGPipeline
30
- [[autodoc]] StableDiffusionSAGPipeline
31
- - __call__
32
- - all
33
-
34
- ## StableDiffusionPipelineOutput
35
- [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
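
For context, a typical invocation of the pipeline documented above looks like the snippet below. `sag_scale` is the argument this pipeline adds on top of the usual text-to-image call (`0.0` disables self-attention guidance); the checkpoint id is illustrative.

```python
# Usage sketch for StableDiffusionSAGPipeline; the checkpoint is illustrative.
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    guidance_scale=7.5,  # classifier-free guidance, as in other SD pipelines
    sag_scale=0.75,      # strength of self-attention guidance
).images[0]
image.save("astronaut_sag.png")
```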
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py DELETED
@@ -1,900 +0,0 @@
1
- # Copyright 2023 The HuggingFace Team. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- import inspect
16
- import warnings
17
- from typing import Any, Callable, Dict, List, Optional, Tuple, Union
18
-
19
- import torch
20
- from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
21
- from transformers.models.clip.modeling_clip import CLIPTextModelOutput
22
-
23
- from ...image_processor import VaeImageProcessor
24
- from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
25
- from ...models import AutoencoderKL, PriorTransformer, UNet2DConditionModel
26
- from ...models.embeddings import get_timestep_embedding
27
- from ...schedulers import KarrasDiffusionSchedulers
28
- from ...utils import is_accelerate_available, is_accelerate_version, logging, randn_tensor, replace_example_docstring
29
- from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
30
- from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
31
-
32
-
33
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
34
-
35
- EXAMPLE_DOC_STRING = """
36
- Examples:
37
- ```py
38
- >>> import torch
39
- >>> from diffusers import StableUnCLIPPipeline
40
-
41
- >>> pipe = StableUnCLIPPipeline.from_pretrained(
42
- ... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16
43
- ... ) # TODO update model path
44
- >>> pipe = pipe.to("cuda")
45
-
46
- >>> prompt = "a photo of an astronaut riding a horse on mars"
47
- >>> images = pipe(prompt).images
48
- >>> images[0].save("astronaut_horse.png")
49
- ```
50
- """
51
-
52
-
53
- class StableUnCLIPPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
54
- """
55
- Pipeline for text-to-image generation using stable unCLIP.
56
-
57
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
58
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
59
-
60
- Args:
61
- prior_tokenizer ([`CLIPTokenizer`]):
62
- A [`CLIPTokenizer`].
63
- prior_text_encoder ([`CLIPTextModelWithProjection`]):
64
- Frozen [`CLIPTextModelWithProjection`] text-encoder.
65
- prior ([`PriorTransformer`]):
66
- The canonical unCLIP prior to approximate the image embedding from the text embedding.
67
- prior_scheduler ([`KarrasDiffusionSchedulers`]):
68
- Scheduler used in the prior denoising process.
69
- image_normalizer ([`StableUnCLIPImageNormalizer`]):
70
- Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image
71
- embeddings after the noise has been applied.
72
- image_noising_scheduler ([`KarrasDiffusionSchedulers`]):
73
- Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined
74
- by the `noise_level`.
75
- tokenizer ([`CLIPTokenizer`]):
76
- A [`CLIPTokenizer`].
77
- text_encoder ([`CLIPTextModel`]):
78
- Frozen [`CLIPTextModel`] text-encoder.
79
- unet ([`UNet2DConditionModel`]):
80
- A [`UNet2DConditionModel`] to denoise the encoded image latents.
81
- scheduler ([`KarrasDiffusionSchedulers`]):
82
- A scheduler to be used in combination with `unet` to denoise the encoded image latents.
83
- vae ([`AutoencoderKL`]):
84
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
85
- """
86
-
87
- _exclude_from_cpu_offload = ["prior", "image_normalizer"]
88
-
89
- # prior components
90
- prior_tokenizer: CLIPTokenizer
91
- prior_text_encoder: CLIPTextModelWithProjection
92
- prior: PriorTransformer
93
- prior_scheduler: KarrasDiffusionSchedulers
94
-
95
- # image noising components
96
- image_normalizer: StableUnCLIPImageNormalizer
97
- image_noising_scheduler: KarrasDiffusionSchedulers
98
-
99
- # regular denoising components
100
- tokenizer: CLIPTokenizer
101
- text_encoder: CLIPTextModel
102
- unet: UNet2DConditionModel
103
- scheduler: KarrasDiffusionSchedulers
104
-
105
- vae: AutoencoderKL
106
-
107
- def __init__(
108
- self,
109
- # prior components
110
- prior_tokenizer: CLIPTokenizer,
111
- prior_text_encoder: CLIPTextModelWithProjection,
112
- prior: PriorTransformer,
113
- prior_scheduler: KarrasDiffusionSchedulers,
114
- # image noising components
115
- image_normalizer: StableUnCLIPImageNormalizer,
116
- image_noising_scheduler: KarrasDiffusionSchedulers,
117
- # regular denoising components
118
- tokenizer: CLIPTokenizer,
119
- text_encoder: CLIPTextModelWithProjection,
120
- unet: UNet2DConditionModel,
121
- scheduler: KarrasDiffusionSchedulers,
122
- # vae
123
- vae: AutoencoderKL,
124
- ):
125
- super().__init__()
126
-
127
- self.register_modules(
128
- prior_tokenizer=prior_tokenizer,
129
- prior_text_encoder=prior_text_encoder,
130
- prior=prior,
131
- prior_scheduler=prior_scheduler,
132
- image_normalizer=image_normalizer,
133
- image_noising_scheduler=image_noising_scheduler,
134
- tokenizer=tokenizer,
135
- text_encoder=text_encoder,
136
- unet=unet,
137
- scheduler=scheduler,
138
- vae=vae,
139
- )
140
-
141
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
142
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
143
-
144
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
145
- def enable_vae_slicing(self):
146
- r"""
147
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
148
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
149
- """
150
- self.vae.enable_slicing()
151
-
152
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
153
- def disable_vae_slicing(self):
154
- r"""
155
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
156
- computing decoding in one step.
157
- """
158
- self.vae.disable_slicing()
159
-
160
- def enable_model_cpu_offload(self, gpu_id=0):
161
- r"""
162
- Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
163
- time to the GPU when its `forward` method is called, and the model remains on the GPU until the next model runs.
164
- Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
165
- iterative execution of the `unet`.
166
- """
167
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
168
- from accelerate import cpu_offload_with_hook
169
- else:
170
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
171
-
172
- device = torch.device(f"cuda:{gpu_id}")
173
-
174
- if self.device.type != "cpu":
175
- self.to("cpu", silence_dtype_warnings=True)
176
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
177
-
178
- hook = None
179
- for cpu_offloaded_model in [self.text_encoder, self.prior_text_encoder, self.unet, self.vae]:
180
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
181
-
182
- # We'll offload the last model manually.
183
- self.final_offload_hook = hook
184
-
185
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt with _encode_prompt->_encode_prior_prompt, tokenizer->prior_tokenizer, text_encoder->prior_text_encoder
186
- def _encode_prior_prompt(
187
- self,
188
- prompt,
189
- device,
190
- num_images_per_prompt,
191
- do_classifier_free_guidance,
192
- text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None,
193
- text_attention_mask: Optional[torch.Tensor] = None,
194
- ):
195
- if text_model_output is None:
196
- batch_size = len(prompt) if isinstance(prompt, list) else 1
197
- # get prompt text embeddings
198
- text_inputs = self.prior_tokenizer(
199
- prompt,
200
- padding="max_length",
201
- max_length=self.prior_tokenizer.model_max_length,
202
- truncation=True,
203
- return_tensors="pt",
204
- )
205
- text_input_ids = text_inputs.input_ids
206
- text_mask = text_inputs.attention_mask.bool().to(device)
207
-
208
- untruncated_ids = self.prior_tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
209
-
210
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
211
- text_input_ids, untruncated_ids
212
- ):
213
- removed_text = self.prior_tokenizer.batch_decode(
214
- untruncated_ids[:, self.prior_tokenizer.model_max_length - 1 : -1]
215
- )
216
- logger.warning(
217
- "The following part of your input was truncated because CLIP can only handle sequences up to"
218
- f" {self.prior_tokenizer.model_max_length} tokens: {removed_text}"
219
- )
220
- text_input_ids = text_input_ids[:, : self.prior_tokenizer.model_max_length]
221
-
222
- prior_text_encoder_output = self.prior_text_encoder(text_input_ids.to(device))
223
-
224
- prompt_embeds = prior_text_encoder_output.text_embeds
225
- prior_text_encoder_hidden_states = prior_text_encoder_output.last_hidden_state
226
-
227
- else:
228
- batch_size = text_model_output[0].shape[0]
229
- prompt_embeds, prior_text_encoder_hidden_states = text_model_output[0], text_model_output[1]
230
- text_mask = text_attention_mask
231
-
232
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
233
- prior_text_encoder_hidden_states = prior_text_encoder_hidden_states.repeat_interleave(
234
- num_images_per_prompt, dim=0
235
- )
236
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
237
-
238
- if do_classifier_free_guidance:
239
- uncond_tokens = [""] * batch_size
240
-
241
- uncond_input = self.prior_tokenizer(
242
- uncond_tokens,
243
- padding="max_length",
244
- max_length=self.prior_tokenizer.model_max_length,
245
- truncation=True,
246
- return_tensors="pt",
247
- )
248
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
249
- negative_prompt_embeds_prior_text_encoder_output = self.prior_text_encoder(
250
- uncond_input.input_ids.to(device)
251
- )
252
-
253
- negative_prompt_embeds = negative_prompt_embeds_prior_text_encoder_output.text_embeds
254
- uncond_prior_text_encoder_hidden_states = (
255
- negative_prompt_embeds_prior_text_encoder_output.last_hidden_state
256
- )
257
-
258
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
259
-
260
- seq_len = negative_prompt_embeds.shape[1]
261
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
262
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
263
-
264
- seq_len = uncond_prior_text_encoder_hidden_states.shape[1]
265
- uncond_prior_text_encoder_hidden_states = uncond_prior_text_encoder_hidden_states.repeat(
266
- 1, num_images_per_prompt, 1
267
- )
268
- uncond_prior_text_encoder_hidden_states = uncond_prior_text_encoder_hidden_states.view(
269
- batch_size * num_images_per_prompt, seq_len, -1
270
- )
271
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
272
-
273
- # done duplicates
274
-
275
- # For classifier free guidance, we need to do two forward passes.
276
- # Here we concatenate the unconditional and text embeddings into a single batch
277
- # to avoid doing two forward passes
278
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
279
- prior_text_encoder_hidden_states = torch.cat(
280
- [uncond_prior_text_encoder_hidden_states, prior_text_encoder_hidden_states]
281
- )
282
-
283
- text_mask = torch.cat([uncond_text_mask, text_mask])
284
-
285
- return prompt_embeds, prior_text_encoder_hidden_states, text_mask
286
-
287
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
288
- def _encode_prompt(
289
- self,
290
- prompt,
291
- device,
292
- num_images_per_prompt,
293
- do_classifier_free_guidance,
294
- negative_prompt=None,
295
- prompt_embeds: Optional[torch.FloatTensor] = None,
296
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
297
- lora_scale: Optional[float] = None,
298
- ):
299
- r"""
300
- Encodes the prompt into text encoder hidden states.
301
-
302
- Args:
303
- prompt (`str` or `List[str]`, *optional*):
304
- prompt to be encoded
305
- device: (`torch.device`):
306
- torch device
307
- num_images_per_prompt (`int`):
308
- number of images that should be generated per prompt
309
- do_classifier_free_guidance (`bool`):
310
- whether to use classifier free guidance or not
311
- negative_prompt (`str` or `List[str]`, *optional*):
312
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
313
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
314
- less than `1`).
315
- prompt_embeds (`torch.FloatTensor`, *optional*):
316
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
317
- provided, text embeddings will be generated from `prompt` input argument.
318
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
319
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
320
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
321
- argument.
322
- lora_scale (`float`, *optional*):
323
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
324
- """
325
- # set lora scale so that monkey patched LoRA
326
- # function of text encoder can correctly access it
327
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
328
- self._lora_scale = lora_scale
329
-
330
- if prompt is not None and isinstance(prompt, str):
331
- batch_size = 1
332
- elif prompt is not None and isinstance(prompt, list):
333
- batch_size = len(prompt)
334
- else:
335
- batch_size = prompt_embeds.shape[0]
336
-
337
- if prompt_embeds is None:
338
- # textual inversion: process multi-vector tokens if necessary
339
- if isinstance(self, TextualInversionLoaderMixin):
340
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
341
-
342
- text_inputs = self.tokenizer(
343
- prompt,
344
- padding="max_length",
345
- max_length=self.tokenizer.model_max_length,
346
- truncation=True,
347
- return_tensors="pt",
348
- )
349
- text_input_ids = text_inputs.input_ids
350
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
351
-
352
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
353
- text_input_ids, untruncated_ids
354
- ):
355
- removed_text = self.tokenizer.batch_decode(
356
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
357
- )
358
- logger.warning(
359
- "The following part of your input was truncated because CLIP can only handle sequences up to"
360
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
361
- )
362
-
363
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
364
- attention_mask = text_inputs.attention_mask.to(device)
365
- else:
366
- attention_mask = None
367
-
368
- prompt_embeds = self.text_encoder(
369
- text_input_ids.to(device),
370
- attention_mask=attention_mask,
371
- )
372
- prompt_embeds = prompt_embeds[0]
373
-
374
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
375
-
376
- bs_embed, seq_len, _ = prompt_embeds.shape
377
- # duplicate text embeddings for each generation per prompt, using mps friendly method
378
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
379
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
380
-
381
- # get unconditional embeddings for classifier free guidance
382
- if do_classifier_free_guidance and negative_prompt_embeds is None:
383
- uncond_tokens: List[str]
384
- if negative_prompt is None:
385
- uncond_tokens = [""] * batch_size
386
- elif prompt is not None and type(prompt) is not type(negative_prompt):
387
- raise TypeError(
388
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
389
- f" {type(prompt)}."
390
- )
391
- elif isinstance(negative_prompt, str):
392
- uncond_tokens = [negative_prompt]
393
- elif batch_size != len(negative_prompt):
394
- raise ValueError(
395
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
396
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
397
- " the batch size of `prompt`."
398
- )
399
- else:
400
- uncond_tokens = negative_prompt
401
-
402
- # textual inversion: process multi-vector tokens if necessary
403
- if isinstance(self, TextualInversionLoaderMixin):
404
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
405
-
406
- max_length = prompt_embeds.shape[1]
407
- uncond_input = self.tokenizer(
408
- uncond_tokens,
409
- padding="max_length",
410
- max_length=max_length,
411
- truncation=True,
412
- return_tensors="pt",
413
- )
414
-
415
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
416
- attention_mask = uncond_input.attention_mask.to(device)
417
- else:
418
- attention_mask = None
419
-
420
- negative_prompt_embeds = self.text_encoder(
421
- uncond_input.input_ids.to(device),
422
- attention_mask=attention_mask,
423
- )
424
- negative_prompt_embeds = negative_prompt_embeds[0]
425
-
426
- if do_classifier_free_guidance:
427
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
428
- seq_len = negative_prompt_embeds.shape[1]
429
-
430
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
431
-
432
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
433
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
434
-
435
- # For classifier free guidance, we need to do two forward passes.
436
- # Here we concatenate the unconditional and text embeddings into a single batch
437
- # to avoid doing two forward passes
438
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
439
-
440
- return prompt_embeds
441
-
442
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
443
- def decode_latents(self, latents):
444
- warnings.warn(
445
- "The decode_latents method is deprecated and will be removed in a future version. Please"
446
- " use VaeImageProcessor instead",
447
- FutureWarning,
448
- )
449
- latents = 1 / self.vae.config.scaling_factor * latents
450
- image = self.vae.decode(latents, return_dict=False)[0]
451
- image = (image / 2 + 0.5).clamp(0, 1)
452
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
453
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
454
- return image
455
-
456
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs with prepare_extra_step_kwargs->prepare_prior_extra_step_kwargs, scheduler->prior_scheduler
457
- def prepare_prior_extra_step_kwargs(self, generator, eta):
458
- # prepare extra kwargs for the prior_scheduler step, since not all prior_schedulers have the same signature
459
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other prior_schedulers.
460
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
461
- # and should be between [0, 1]
462
-
463
- accepts_eta = "eta" in set(inspect.signature(self.prior_scheduler.step).parameters.keys())
464
- extra_step_kwargs = {}
465
- if accepts_eta:
466
- extra_step_kwargs["eta"] = eta
467
-
468
- # check if the prior_scheduler accepts generator
469
- accepts_generator = "generator" in set(inspect.signature(self.prior_scheduler.step).parameters.keys())
470
- if accepts_generator:
471
- extra_step_kwargs["generator"] = generator
472
- return extra_step_kwargs
473
-
474
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
475
- def prepare_extra_step_kwargs(self, generator, eta):
476
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
477
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
478
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
479
- # and should be between [0, 1]
480
-
481
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
482
- extra_step_kwargs = {}
483
- if accepts_eta:
484
- extra_step_kwargs["eta"] = eta
485
-
486
- # check if the scheduler accepts generator
487
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
488
- if accepts_generator:
489
- extra_step_kwargs["generator"] = generator
490
- return extra_step_kwargs
491
-
492
- def check_inputs(
493
- self,
494
- prompt,
495
- height,
496
- width,
497
- callback_steps,
498
- noise_level,
499
- negative_prompt=None,
500
- prompt_embeds=None,
501
- negative_prompt_embeds=None,
502
- ):
503
- if height % 8 != 0 or width % 8 != 0:
504
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
505
-
506
- if (callback_steps is None) or (
507
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
508
- ):
509
- raise ValueError(
510
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
511
- f" {type(callback_steps)}."
512
- )
513
-
514
- if prompt is not None and prompt_embeds is not None:
515
- raise ValueError(
516
- "Provide either `prompt` or `prompt_embeds`. Please make sure to define only one of the two."
517
- )
518
-
519
- if prompt is None and prompt_embeds is None:
520
- raise ValueError(
521
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
522
- )
523
-
524
- if prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
525
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
526
-
527
- if negative_prompt is not None and negative_prompt_embeds is not None:
528
- raise ValueError(
529
- "Provide either `negative_prompt` or `negative_prompt_embeds`. Cannot leave both `negative_prompt` and `negative_prompt_embeds` undefined."
530
- )
531
-
532
- if prompt is not None and negative_prompt is not None:
533
- if type(prompt) is not type(negative_prompt):
534
- raise TypeError(
535
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
536
- f" {type(prompt)}."
537
- )
538
-
539
- if prompt_embeds is not None and negative_prompt_embeds is not None:
540
- if prompt_embeds.shape != negative_prompt_embeds.shape:
541
- raise ValueError(
542
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
543
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
544
- f" {negative_prompt_embeds.shape}."
545
- )
546
-
547
- if noise_level < 0 or noise_level >= self.image_noising_scheduler.config.num_train_timesteps:
548
- raise ValueError(
549
- f"`noise_level` must be between 0 and {self.image_noising_scheduler.config.num_train_timesteps - 1}, inclusive."
550
- )
551
-
552
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
553
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
554
- if latents is None:
555
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
556
- else:
557
- if latents.shape != shape:
558
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
559
- latents = latents.to(device)
560
-
561
- latents = latents * scheduler.init_noise_sigma
562
- return latents
563
-
564
- def noise_image_embeddings(
565
- self,
566
- image_embeds: torch.Tensor,
567
- noise_level: int,
568
- noise: Optional[torch.FloatTensor] = None,
569
- generator: Optional[torch.Generator] = None,
570
- ):
571
- """
572
- Add noise to the image embeddings. The amount of noise is controlled by a `noise_level` input. A higher
573
- `noise_level` increases the variance in the final un-noised images.
574
-
575
- The noise is applied in two ways:
576
- 1. A noise schedule is applied directly to the embeddings.
577
- 2. A vector of sinusoidal time embeddings is appended to the output.
578
-
579
- In both cases, the amount of noise is controlled by the same `noise_level`.
580
-
581
- The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.
582
- """
583
- if noise is None:
584
- noise = randn_tensor(
585
- image_embeds.shape, generator=generator, device=image_embeds.device, dtype=image_embeds.dtype
586
- )
587
-
588
- noise_level = torch.tensor([noise_level] * image_embeds.shape[0], device=image_embeds.device)
589
-
590
- self.image_normalizer.to(image_embeds.device)
591
- image_embeds = self.image_normalizer.scale(image_embeds)
592
-
593
- image_embeds = self.image_noising_scheduler.add_noise(image_embeds, timesteps=noise_level, noise=noise)
594
-
595
- image_embeds = self.image_normalizer.unscale(image_embeds)
596
-
597
- noise_level = get_timestep_embedding(
598
- timesteps=noise_level, embedding_dim=image_embeds.shape[-1], flip_sin_to_cos=True, downscale_freq_shift=0
599
- )
600
-
601
- # `get_timestep_embedding` does not contain any weights and will always return f32 tensors,
602
- # but we might actually be running in fp16. so we need to cast here.
603
- # there might be better ways to encapsulate this.
604
- noise_level = noise_level.to(image_embeds.dtype)
605
-
606
- image_embeds = torch.cat((image_embeds, noise_level), 1)
607
-
608
- return image_embeds
609
-
610
- @torch.no_grad()
611
- @replace_example_docstring(EXAMPLE_DOC_STRING)
612
- def __call__(
613
- self,
614
- # regular denoising process args
615
- prompt: Optional[Union[str, List[str]]] = None,
616
- height: Optional[int] = None,
617
- width: Optional[int] = None,
618
- num_inference_steps: int = 20,
619
- guidance_scale: float = 10.0,
620
- negative_prompt: Optional[Union[str, List[str]]] = None,
621
- num_images_per_prompt: Optional[int] = 1,
622
- eta: float = 0.0,
623
- generator: Optional[torch.Generator] = None,
624
- latents: Optional[torch.FloatTensor] = None,
625
- prompt_embeds: Optional[torch.FloatTensor] = None,
626
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
627
- output_type: Optional[str] = "pil",
628
- return_dict: bool = True,
629
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
630
- callback_steps: int = 1,
631
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
632
- noise_level: int = 0,
633
- # prior args
634
- prior_num_inference_steps: int = 25,
635
- prior_guidance_scale: float = 4.0,
636
- prior_latents: Optional[torch.FloatTensor] = None,
637
- ):
638
- """
639
- The call function to the pipeline for generation.
640
-
641
- Args:
642
- prompt (`str` or `List[str]`, *optional*):
643
- The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
644
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
645
- The height in pixels of the generated image.
646
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
647
- The width in pixels of the generated image.
648
- num_inference_steps (`int`, *optional*, defaults to 20):
649
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
650
- expense of slower inference.
651
- guidance_scale (`float`, *optional*, defaults to 10.0):
652
- A higher guidance scale value encourages the model to generate images closely linked to the text
653
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
654
- negative_prompt (`str` or `List[str]`, *optional*):
655
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
656
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
657
- num_images_per_prompt (`int`, *optional*, defaults to 1):
658
- The number of images to generate per prompt.
659
- eta (`float`, *optional*, defaults to 0.0):
660
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
661
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
662
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
663
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
664
- generation deterministic.
665
- latents (`torch.FloatTensor`, *optional*):
666
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
667
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
668
- tensor is generated by sampling using the supplied random `generator`.
669
- prompt_embeds (`torch.FloatTensor`, *optional*):
670
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
671
- provided, text embeddings are generated from the `prompt` input argument.
672
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
673
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
674
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
675
- output_type (`str`, *optional*, defaults to `"pil"`):
676
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
677
- return_dict (`bool`, *optional*, defaults to `True`):
678
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
679
- callback (`Callable`, *optional*):
680
- A function that calls every `callback_steps` steps during inference. The function is called with the
681
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
682
- callback_steps (`int`, *optional*, defaults to 1):
683
- The frequency at which the `callback` function is called. If not specified, the callback is called at
684
- every step.
685
- cross_attention_kwargs (`dict`, *optional*):
686
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
687
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
688
- noise_level (`int`, *optional*, defaults to `0`):
689
- The amount of noise to add to the image embeddings. A higher `noise_level` increases the variance in
690
- the final un-noised images. See [`StableUnCLIPPipeline.noise_image_embeddings`] for more details.
691
- prior_num_inference_steps (`int`, *optional*, defaults to 25):
692
- The number of denoising steps in the prior denoising process. More denoising steps usually lead to a
693
- higher quality image at the expense of slower inference.
694
- prior_guidance_scale (`float`, *optional*, defaults to 4.0):
695
- A higher guidance scale value encourages the model to generate images closely linked to the text
696
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
697
- prior_latents (`torch.FloatTensor`, *optional*):
698
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
699
- embedding generation in the prior denoising process. Can be used to tweak the same generation with
700
- different prompts. If not provided, a latents tensor is generated by sampling using the supplied random
701
- `generator`.
702
-
703
- Examples:
704
-
705
- Returns:
706
- [`~pipelines.ImagePipelineOutput`] or `tuple`:
707
- [`~pipeline_utils.ImagePipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning
708
- a tuple, the first element is a list with the generated images.
709
- """
710
- # 0. Default height and width to unet
711
- height = height or self.unet.config.sample_size * self.vae_scale_factor
712
- width = width or self.unet.config.sample_size * self.vae_scale_factor
713
-
714
- # 1. Check inputs. Raise error if not correct
715
- self.check_inputs(
716
- prompt=prompt,
717
- height=height,
718
- width=width,
719
- callback_steps=callback_steps,
720
- noise_level=noise_level,
721
- negative_prompt=negative_prompt,
722
- prompt_embeds=prompt_embeds,
723
- negative_prompt_embeds=negative_prompt_embeds,
724
- )
725
-
726
- # 2. Define call parameters
727
- if prompt is not None and isinstance(prompt, str):
728
- batch_size = 1
729
- elif prompt is not None and isinstance(prompt, list):
730
- batch_size = len(prompt)
731
- else:
732
- batch_size = prompt_embeds.shape[0]
733
-
734
- batch_size = batch_size * num_images_per_prompt
735
-
736
- device = self._execution_device
737
-
738
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
739
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
740
- # corresponds to doing no classifier free guidance.
741
- prior_do_classifier_free_guidance = prior_guidance_scale > 1.0
742
-
743
- # 3. Encode input prompt
744
- prior_prompt_embeds, prior_text_encoder_hidden_states, prior_text_mask = self._encode_prior_prompt(
745
- prompt=prompt,
746
- device=device,
747
- num_images_per_prompt=num_images_per_prompt,
748
- do_classifier_free_guidance=prior_do_classifier_free_guidance,
749
- )
750
-
751
- # 4. Prepare prior timesteps
752
- self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device)
753
- prior_timesteps_tensor = self.prior_scheduler.timesteps
754
-
755
- # 5. Prepare prior latent variables
756
- embedding_dim = self.prior.config.embedding_dim
757
- prior_latents = self.prepare_latents(
758
- (batch_size, embedding_dim),
759
- prior_prompt_embeds.dtype,
760
- device,
761
- generator,
762
- prior_latents,
763
- self.prior_scheduler,
764
- )
765
-
766
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
767
- prior_extra_step_kwargs = self.prepare_prior_extra_step_kwargs(generator, eta)
768
-
769
- # 7. Prior denoising loop
770
- for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
771
- # expand the latents if we are doing classifier free guidance
772
- latent_model_input = torch.cat([prior_latents] * 2) if prior_do_classifier_free_guidance else prior_latents
773
- latent_model_input = self.prior_scheduler.scale_model_input(latent_model_input, t)
774
-
775
- predicted_image_embedding = self.prior(
776
- latent_model_input,
777
- timestep=t,
778
- proj_embedding=prior_prompt_embeds,
779
- encoder_hidden_states=prior_text_encoder_hidden_states,
780
- attention_mask=prior_text_mask,
781
- ).predicted_image_embedding
782
-
783
- if prior_do_classifier_free_guidance:
784
- predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
785
- predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * (
786
- predicted_image_embedding_text - predicted_image_embedding_uncond
787
- )
788
-
789
- prior_latents = self.prior_scheduler.step(
790
- predicted_image_embedding,
791
- timestep=t,
792
- sample=prior_latents,
793
- **prior_extra_step_kwargs,
794
- return_dict=False,
795
- )[0]
796
-
797
- if callback is not None and i % callback_steps == 0:
798
- callback(i, t, prior_latents)
799
-
800
- prior_latents = self.prior.post_process_latents(prior_latents)
801
-
802
- image_embeds = prior_latents
803
-
804
- # done prior
805
-
806
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
807
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
808
- # corresponds to doing no classifier free guidance.
809
- do_classifier_free_guidance = guidance_scale > 1.0
810
-
811
- # 8. Encode input prompt
812
- text_encoder_lora_scale = (
813
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
814
- )
815
- prompt_embeds = self._encode_prompt(
816
- prompt=prompt,
817
- device=device,
818
- num_images_per_prompt=num_images_per_prompt,
819
- do_classifier_free_guidance=do_classifier_free_guidance,
820
- negative_prompt=negative_prompt,
821
- prompt_embeds=prompt_embeds,
822
- negative_prompt_embeds=negative_prompt_embeds,
823
- lora_scale=text_encoder_lora_scale,
824
- )
825
-
826
- # 9. Prepare image embeddings
827
- image_embeds = self.noise_image_embeddings(
828
- image_embeds=image_embeds,
829
- noise_level=noise_level,
830
- generator=generator,
831
- )
832
-
833
- if do_classifier_free_guidance:
834
- negative_prompt_embeds = torch.zeros_like(image_embeds)
835
-
836
- # For classifier free guidance, we need to do two forward passes.
837
- # Here we concatenate the unconditional and text embeddings into a single batch
838
- # to avoid doing two forward passes
839
- image_embeds = torch.cat([negative_prompt_embeds, image_embeds])
840
-
841
- # 10. Prepare timesteps
842
- self.scheduler.set_timesteps(num_inference_steps, device=device)
843
- timesteps = self.scheduler.timesteps
844
-
845
- # 11. Prepare latent variables
846
- num_channels_latents = self.unet.config.in_channels
847
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
848
- latents = self.prepare_latents(
849
- shape=shape,
850
- dtype=prompt_embeds.dtype,
851
- device=device,
852
- generator=generator,
853
- latents=latents,
854
- scheduler=self.scheduler,
855
- )
856
-
857
- # 12. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
858
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
859
-
860
- # 13. Denoising loop
861
- for i, t in enumerate(self.progress_bar(timesteps)):
862
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
863
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
864
-
865
- # predict the noise residual
866
- noise_pred = self.unet(
867
- latent_model_input,
868
- t,
869
- encoder_hidden_states=prompt_embeds,
870
- class_labels=image_embeds,
871
- cross_attention_kwargs=cross_attention_kwargs,
872
- return_dict=False,
873
- )[0]
874
-
875
- # perform guidance
876
- if do_classifier_free_guidance:
877
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
878
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
879
-
880
- # compute the previous noisy sample x_t -> x_t-1
881
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
882
-
883
- if callback is not None and i % callback_steps == 0:
884
- callback(i, t, latents)
885
-
886
- if not output_type == "latent":
887
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
888
- else:
889
- image = latents
890
-
891
- image = self.image_processor.postprocess(image, output_type=output_type)
892
-
893
- # Offload last model to CPU
894
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
895
- self.final_offload_hook.offload()
896
-
897
- if not return_dict:
898
- return (image,)
899
-
900
- return ImagePipelineOutput(images=image)
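
Both denoising loops in this pipeline (the prior loop and the image loop) apply the same classifier-free guidance step: the conditional and unconditional branches are batched together, split, and extrapolated by the guidance scale. A self-contained illustration of just that step:

```python
# Classifier-free guidance combine step, as used in both loops above.
import torch

def apply_cfg(pred: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    """`pred` stacks [unconditional, conditional] along the batch dimension."""
    pred_uncond, pred_text = pred.chunk(2)
    return pred_uncond + guidance_scale * (pred_text - pred_uncond)

noise_pred = torch.randn(2, 4, 64, 64)  # stand-in for a batched UNet output
guided = apply_cfg(noise_pred, guidance_scale=10.0)
assert guided.shape == (1, 4, 64, 64)
```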
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py DELETED
@@ -1,706 +0,0 @@
1
- import inspect
2
- import warnings
3
- from typing import Callable, List, Optional, Union
4
-
5
- import numpy as np
6
- import torch
7
- from packaging import version
8
- from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
9
-
10
- from ...configuration_utils import FrozenDict
11
- from ...models import AutoencoderKL, UNet2DConditionModel
12
- from ...schedulers import KarrasDiffusionSchedulers
13
- from ...utils import deprecate, logging, randn_tensor
14
- from ..pipeline_utils import DiffusionPipeline
15
- from . import StableDiffusionSafePipelineOutput
16
- from .safety_checker import SafeStableDiffusionSafetyChecker
17
-
18
-
19
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
20
-
21
-
22
- class StableDiffusionPipelineSafe(DiffusionPipeline):
23
- r"""
24
- Pipeline based on the [`StableDiffusionPipeline`] for text-to-image generation using Safe Latent Diffusion.
25
-
26
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
27
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
28
-
29
- Args:
30
- vae ([`AutoencoderKL`]):
31
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
32
- text_encoder ([`~transformers.CLIPTextModel`]):
33
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
34
- tokenizer ([`~transformers.CLIPTokenizer`]):
35
- A `CLIPTokenizer` to tokenize text.
36
- unet ([`UNet2DConditionModel`]):
37
- A `UNet2DConditionModel` to denoise the encoded image latents.
38
- scheduler ([`SchedulerMixin`]):
39
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
40
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
41
- safety_checker ([`StableDiffusionSafetyChecker`]):
42
- Classification module that estimates whether generated images could be considered offensive or harmful.
43
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
44
- about a model's potential harms.
45
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
46
- A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
47
- """
48
-
49
- _optional_components = ["safety_checker", "feature_extractor"]
50
-
51
- def __init__(
52
- self,
53
- vae: AutoencoderKL,
54
- text_encoder: CLIPTextModel,
55
- tokenizer: CLIPTokenizer,
56
- unet: UNet2DConditionModel,
57
- scheduler: KarrasDiffusionSchedulers,
58
- safety_checker: SafeStableDiffusionSafetyChecker,
59
- feature_extractor: CLIPImageProcessor,
60
- requires_safety_checker: bool = True,
61
- ):
62
- super().__init__()
63
- safety_concept: Optional[str] = (
64
- "an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity,"
65
- " bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child"
66
- " abuse, brutality, cruelty"
67
- )
68
-
69
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
70
- deprecation_message = (
71
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
72
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
73
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
74
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
75
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
76
- " file"
77
- )
78
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
79
- new_config = dict(scheduler.config)
80
- new_config["steps_offset"] = 1
81
- scheduler._internal_dict = FrozenDict(new_config)
82
-
83
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
84
- deprecation_message = (
85
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
86
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
87
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
88
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
89
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
90
- )
91
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
92
- new_config = dict(scheduler.config)
93
- new_config["clip_sample"] = False
94
- scheduler._internal_dict = FrozenDict(new_config)
95
-
96
- if safety_checker is None and requires_safety_checker:
97
- logger.warning(
98
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
99
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
100
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
101
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
102
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
103
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
104
- )
105
-
106
- if safety_checker is not None and feature_extractor is None:
107
- raise ValueError(
108
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
109
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
110
- )
111
-
112
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
113
- version.parse(unet.config._diffusers_version).base_version
114
- ) < version.parse("0.9.0.dev0")
115
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
116
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
117
- deprecation_message = (
118
- "The configuration file of the unet has set the default `sample_size` to smaller than"
119
- " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the"
120
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
121
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
122
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
123
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
124
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
125
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
126
- " the `unet/config.json` file"
127
- )
128
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
129
- new_config = dict(unet.config)
130
- new_config["sample_size"] = 64
131
- unet._internal_dict = FrozenDict(new_config)
132
-
-         self.register_modules(
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             unet=unet,
-             scheduler=scheduler,
-             safety_checker=safety_checker,
-             feature_extractor=feature_extractor,
-         )
-         self._safety_text_concept = safety_concept
-         self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-         self.register_to_config(requires_safety_checker=requires_safety_checker)
-
-     @property
-     def safety_concept(self):
-         r"""
-         Getter method for the safety concept used with SLD.
-
-         Returns:
-             `str`: The text describing the safety concept.
-         """
-         return self._safety_text_concept
-
-     @safety_concept.setter
-     def safety_concept(self, concept):
-         r"""
-         Setter method for the safety concept used with SLD.
-
-         Args:
-             concept (`str`):
-                 The text of the new safety concept.
-         """
-         self._safety_text_concept = concept
-
-     def _encode_prompt(
-         self,
-         prompt,
-         device,
-         num_images_per_prompt,
-         do_classifier_free_guidance,
-         negative_prompt,
-         enable_safety_guidance,
-     ):
-         r"""
-         Encodes the prompt into text encoder hidden states.
-
-         Args:
-             prompt (`str` or `List[str]`):
-                 Prompt to be encoded.
-             device (`torch.device`):
-                 Torch device.
-             num_images_per_prompt (`int`):
-                 Number of images that should be generated per prompt.
-             do_classifier_free_guidance (`bool`):
-                 Whether to use classifier-free guidance or not.
-             negative_prompt (`str` or `List[str]`):
-                 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e.,
-                 ignored if `guidance_scale` is less than `1`).
-         """
-         batch_size = len(prompt) if isinstance(prompt, list) else 1
-
-         text_inputs = self.tokenizer(
-             prompt,
-             padding="max_length",
-             max_length=self.tokenizer.model_max_length,
-             truncation=True,
-             return_tensors="pt",
-         )
-         text_input_ids = text_inputs.input_ids
-         untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
-
-         if not torch.equal(text_input_ids, untruncated_ids):
-             removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
-             logger.warning(
-                 "The following part of your input was truncated because CLIP can only handle sequences up to"
-                 f" {self.tokenizer.model_max_length} tokens: {removed_text}"
-             )
-
-         if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-             attention_mask = text_inputs.attention_mask.to(device)
-         else:
-             attention_mask = None
-
-         prompt_embeds = self.text_encoder(
-             text_input_ids.to(device),
-             attention_mask=attention_mask,
-         )
-         prompt_embeds = prompt_embeds[0]
-
-         # duplicate text embeddings for each generation per prompt, using mps friendly method
-         bs_embed, seq_len, _ = prompt_embeds.shape
-         prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
-         prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
-         # get unconditional embeddings for classifier free guidance
-         if do_classifier_free_guidance:
-             uncond_tokens: List[str]
-             if negative_prompt is None:
-                 uncond_tokens = [""] * batch_size
-             elif type(prompt) is not type(negative_prompt):
-                 raise TypeError(
-                     f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
-                     f" {type(prompt)}."
-                 )
-             elif isinstance(negative_prompt, str):
-                 uncond_tokens = [negative_prompt]
-             elif batch_size != len(negative_prompt):
-                 raise ValueError(
-                     f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
-                     f" {prompt} has batch size {batch_size}. Please make sure that the passed `negative_prompt`"
-                     " matches the batch size of `prompt`."
-                 )
-             else:
-                 uncond_tokens = negative_prompt
-
-             max_length = text_input_ids.shape[-1]
-             uncond_input = self.tokenizer(
-                 uncond_tokens,
-                 padding="max_length",
-                 max_length=max_length,
-                 truncation=True,
-                 return_tensors="pt",
-             )
-
-             if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-                 attention_mask = uncond_input.attention_mask.to(device)
-             else:
-                 attention_mask = None
-
-             negative_prompt_embeds = self.text_encoder(
-                 uncond_input.input_ids.to(device),
-                 attention_mask=attention_mask,
-             )
-             negative_prompt_embeds = negative_prompt_embeds[0]
-
-             # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-             seq_len = negative_prompt_embeds.shape[1]
-             negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
-             negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
-             # Encode the safety concept text
-             if enable_safety_guidance:
-                 safety_concept_input = self.tokenizer(
-                     [self._safety_text_concept],
-                     padding="max_length",
-                     max_length=max_length,
-                     truncation=True,
-                     return_tensors="pt",
-                 )
-                 safety_embeddings = self.text_encoder(safety_concept_input.input_ids.to(self.device))[0]
-
-                 # duplicate safety embeddings for each generation per prompt, using mps friendly method
-                 seq_len = safety_embeddings.shape[1]
-                 safety_embeddings = safety_embeddings.repeat(batch_size, num_images_per_prompt, 1)
-                 safety_embeddings = safety_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
-                 # For classifier free guidance + sld, we need to do three forward passes.
-                 # Here we concatenate the unconditional, text, and safety embeddings into a single batch
-                 # to avoid doing three forward passes
-                 prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds, safety_embeddings])
-
-             else:
-                 # For classifier free guidance, we need to do two forward passes.
-                 # Here we concatenate the unconditional and text embeddings into a single batch
-                 # to avoid doing two forward passes
-                 prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
-         return prompt_embeds
-
-     def run_safety_checker(self, image, device, dtype, enable_safety_guidance):
-         if self.safety_checker is not None:
-             images = image.copy()
-             safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
-             image, has_nsfw_concept = self.safety_checker(
-                 images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
-             )
-             # one slot per image in the batch (the shape previously hardcoded a batch size of 2)
-             flagged_images = np.zeros(image.shape)
-             if any(has_nsfw_concept):
-                 logger.warning(
-                     "Potential NSFW content was detected in one or more images. A black image will be returned"
-                     " instead."
-                     f" {'You may look at these images in the `unsafe_images` variable of the output at your own discretion.' if enable_safety_guidance else 'Try again with a different prompt and/or seed.'}"
-                 )
-                 for idx, has_nsfw_concept in enumerate(has_nsfw_concept):
-                     if has_nsfw_concept:
-                         flagged_images[idx] = images[idx]
-                         image[idx] = np.zeros(image[idx].shape)  # black image
-         else:
-             has_nsfw_concept = None
-             flagged_images = None
-         return image, has_nsfw_concept, flagged_images
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
-     def decode_latents(self, latents):
-         warnings.warn(
-             "The decode_latents method is deprecated and will be removed in a future version. Please"
-             " use VaeImageProcessor instead",
-             FutureWarning,
-         )
-         latents = 1 / self.vae.config.scaling_factor * latents
-         image = self.vae.decode(latents, return_dict=False)[0]
-         image = (image / 2 + 0.5).clamp(0, 1)
-         # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
-         image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-         return image
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
-     def prepare_extra_step_kwargs(self, generator, eta):
-         # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-         # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-         # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-         # and should be between [0, 1]
-
-         accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         extra_step_kwargs = {}
-         if accepts_eta:
-             extra_step_kwargs["eta"] = eta
-
-         # check if the scheduler accepts generator
-         accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         if accepts_generator:
-             extra_step_kwargs["generator"] = generator
-         return extra_step_kwargs
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
-     def check_inputs(
-         self,
-         prompt,
-         height,
-         width,
-         callback_steps,
-         negative_prompt=None,
-         prompt_embeds=None,
-         negative_prompt_embeds=None,
-     ):
-         if height % 8 != 0 or width % 8 != 0:
-             raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
-         if (callback_steps is None) or (
-             callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
-         ):
-             raise ValueError(
-                 f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
-                 f" {type(callback_steps)}."
-             )
-
-         if prompt is not None and prompt_embeds is not None:
-             raise ValueError(
-                 f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
-                 " only forward one of the two."
-             )
-         elif prompt is None and prompt_embeds is None:
-             raise ValueError(
-                 "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
-             )
-         elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
-             raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
-         if negative_prompt is not None and negative_prompt_embeds is not None:
-             raise ValueError(
-                 f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
-                 f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
-             )
-
-         if prompt_embeds is not None and negative_prompt_embeds is not None:
-             if prompt_embeds.shape != negative_prompt_embeds.shape:
-                 raise ValueError(
-                     "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
-                     f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
-                     f" {negative_prompt_embeds.shape}."
-                 )
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
-     def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
-         shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
-         if isinstance(generator, list) and len(generator) != batch_size:
-             raise ValueError(
-                 f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                 f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-             )
-
-         if latents is None:
-             latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-         else:
-             latents = latents.to(device)
-
-         # scale the initial noise by the standard deviation required by the scheduler
-         latents = latents * self.scheduler.init_noise_sigma
-         return latents
-
-     def perform_safety_guidance(
-         self,
-         enable_safety_guidance,
-         safety_momentum,
-         noise_guidance,
-         noise_pred_out,
-         i,
-         sld_guidance_scale,
-         sld_warmup_steps,
-         sld_threshold,
-         sld_momentum_scale,
-         sld_mom_beta,
-     ):
-         # Perform SLD guidance
-         if enable_safety_guidance:
-             if safety_momentum is None:
-                 safety_momentum = torch.zeros_like(noise_guidance)
-             # noise_pred_out is ordered [uncond, text, safety], matching the embedding
-             # concatenation in _encode_prompt (the original unpacked these swapped)
-             noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1]
-             noise_pred_safety_concept = noise_pred_out[2]
-
-             # Equation 6
-             scale = torch.clamp(torch.abs(noise_pred_text - noise_pred_safety_concept) * sld_guidance_scale, max=1.0)
-
-             # Equation 6
-             safety_concept_scale = torch.where(
-                 (noise_pred_text - noise_pred_safety_concept) >= sld_threshold, torch.zeros_like(scale), scale
-             )
-
-             # Equation 4
-             noise_guidance_safety = torch.mul((noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale)
-
-             # Equation 7
-             noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum
-
-             # Equation 8
-             safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety
-
-             if i >= sld_warmup_steps:  # Warmup
-                 # Equation 3
-                 noise_guidance = noise_guidance - noise_guidance_safety
-         return noise_guidance, safety_momentum
-
-     @torch.no_grad()
-     def __call__(
-         self,
-         prompt: Union[str, List[str]],
-         height: Optional[int] = None,
-         width: Optional[int] = None,
-         num_inference_steps: int = 50,
-         guidance_scale: float = 7.5,
-         negative_prompt: Optional[Union[str, List[str]]] = None,
-         num_images_per_prompt: Optional[int] = 1,
-         eta: float = 0.0,
-         generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-         latents: Optional[torch.FloatTensor] = None,
-         output_type: Optional[str] = "pil",
-         return_dict: bool = True,
-         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-         callback_steps: int = 1,
-         sld_guidance_scale: Optional[float] = 1000,
-         sld_warmup_steps: Optional[int] = 10,
-         sld_threshold: Optional[float] = 0.01,
-         sld_momentum_scale: Optional[float] = 0.3,
-         sld_mom_beta: Optional[float] = 0.4,
-     ):
-         r"""
-         The call function to the pipeline for generation.
-
-         Args:
-             prompt (`str` or `List[str]`):
-                 The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
-             height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                 The height in pixels of the generated image.
-             width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                 The width in pixels of the generated image.
-             num_inference_steps (`int`, *optional*, defaults to 50):
-                 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                 expense of slower inference.
-             guidance_scale (`float`, *optional*, defaults to 7.5):
-                 A higher guidance scale value encourages the model to generate images closely linked to the text
-                 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
-             negative_prompt (`str` or `List[str]`, *optional*):
-                 The prompt or prompts to guide what to not include in image generation. If not defined, you need to
-                 pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
-             num_images_per_prompt (`int`, *optional*, defaults to 1):
-                 The number of images to generate per prompt.
-             eta (`float`, *optional*, defaults to 0.0):
-                 Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
-                 to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
-             generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                 A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                 generation deterministic.
-             latents (`torch.FloatTensor`, *optional*):
-                 Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
-                 generation. Can be used to tweak the same generation with different prompts. If not provided, a
-                 latents tensor is generated by sampling using the supplied random `generator`.
-             output_type (`str`, *optional*, defaults to `"pil"`):
-                 The output format of the generated image. Choose between `PIL.Image` or `np.array`.
-             return_dict (`bool`, *optional*, defaults to `True`):
-                 Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
-                 plain tuple.
-             callback (`Callable`, *optional*):
-                 A function that is called every `callback_steps` steps during inference. The function is called with
-                 the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-             callback_steps (`int`, *optional*, defaults to 1):
-                 The frequency at which the `callback` function is called. If not specified, the callback is called at
-                 every step.
-             sld_guidance_scale (`float`, *optional*, defaults to 1000):
-                 If `sld_guidance_scale < 1`, safety guidance is disabled.
-             sld_warmup_steps (`int`, *optional*, defaults to 10):
-                 Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than
-                 `sld_warmup_steps`.
-             sld_threshold (`float`, *optional*, defaults to 0.01):
-                 Threshold that separates the hyperplane between appropriate and inappropriate images.
-             sld_momentum_scale (`float`, *optional*, defaults to 0.3):
-                 Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0,
-                 momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than
-                 `sld_warmup_steps`.
-             sld_mom_beta (`float`, *optional*, defaults to 0.4):
-                 Defines how safety guidance momentum builds up. `sld_mom_beta` indicates how much of the previous
-                 momentum is kept. Momentum is built up during warmup for diffusion steps smaller than
-                 `sld_warmup_steps`.
-
-         Returns:
-             [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-                 If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
-                 otherwise a `tuple` is returned where the first element is a list with the generated images and the
-                 second element is a list of `bool`s indicating whether the corresponding generated image contains
-                 "not-safe-for-work" (nsfw) content.
-
-         Examples:
-
-         ```py
-         import torch
-         from diffusers import StableDiffusionPipelineSafe
-         from diffusers.pipelines.stable_diffusion_safe import SafetyConfig
-
-         pipeline = StableDiffusionPipelineSafe.from_pretrained(
-             "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
-         )
-         prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
-         image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0]
-         ```
-         """
-         # 0. Default height and width to unet
-         height = height or self.unet.config.sample_size * self.vae_scale_factor
-         width = width or self.unet.config.sample_size * self.vae_scale_factor
-
-         # 1. Check inputs. Raise error if not correct
-         self.check_inputs(prompt, height, width, callback_steps)
-
-         # 2. Define call parameters
-         batch_size = 1 if isinstance(prompt, str) else len(prompt)
-         device = self._execution_device
-
-         # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
-         # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-         # corresponds to doing no classifier free guidance.
-         do_classifier_free_guidance = guidance_scale > 1.0
-
-         enable_safety_guidance = sld_guidance_scale > 1.0 and do_classifier_free_guidance
-         if not enable_safety_guidance:
-             warnings.warn("Safety guidance is disabled!")
-
-         # 3. Encode input prompt
-         prompt_embeds = self._encode_prompt(
-             prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, enable_safety_guidance
-         )
-
-         # 4. Prepare timesteps
-         self.scheduler.set_timesteps(num_inference_steps, device=device)
-         timesteps = self.scheduler.timesteps
-
-         # 5. Prepare latent variables
-         num_channels_latents = self.unet.config.in_channels
-         latents = self.prepare_latents(
-             batch_size * num_images_per_prompt,
-             num_channels_latents,
-             height,
-             width,
-             prompt_embeds.dtype,
-             device,
-             generator,
-             latents,
-         )
-
-         # 6. Prepare extra step kwargs.
-         extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
-         safety_momentum = None
-
-         # 7. Denoising loop
-         num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
-         with self.progress_bar(total=num_inference_steps) as progress_bar:
-             for i, t in enumerate(timesteps):
-                 # expand the latents if we are doing classifier free guidance
-                 latent_model_input = (
-                     torch.cat([latents] * (3 if enable_safety_guidance else 2))
-                     if do_classifier_free_guidance
-                     else latents
-                 )
-                 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
-                 # predict the noise residual
-                 noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample
-
-                 # perform guidance
-                 if do_classifier_free_guidance:
-                     noise_pred_out = noise_pred.chunk(3 if enable_safety_guidance else 2)
-                     noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1]
-
-                     # default classifier free guidance
-                     noise_guidance = noise_pred_text - noise_pred_uncond
-
-                     # Perform SLD guidance
-                     if enable_safety_guidance:
-                         if safety_momentum is None:
-                             safety_momentum = torch.zeros_like(noise_guidance)
-                         noise_pred_safety_concept = noise_pred_out[2]
-
-                         # Equation 6
-                         scale = torch.clamp(
-                             torch.abs(noise_pred_text - noise_pred_safety_concept) * sld_guidance_scale, max=1.0
-                         )
-
-                         # Equation 6
-                         safety_concept_scale = torch.where(
-                             (noise_pred_text - noise_pred_safety_concept) >= sld_threshold,
-                             torch.zeros_like(scale),
-                             scale,
-                         )
-
-                         # Equation 4
-                         noise_guidance_safety = torch.mul(
-                             (noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale
-                         )
-
-                         # Equation 7
-                         noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum
-
-                         # Equation 8
-                         safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety
-
-                         if i >= sld_warmup_steps:  # Warmup
-                             # Equation 3
-                             noise_guidance = noise_guidance - noise_guidance_safety
-
-                     noise_pred = noise_pred_uncond + guidance_scale * noise_guidance
-
-                 # compute the previous noisy sample x_t -> x_t-1
-                 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
-                 # call the callback, if provided
-                 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
-                     progress_bar.update()
-                     if callback is not None and i % callback_steps == 0:
-                         callback(i, t, latents)
-
-         # 8. Post-processing
-         image = self.decode_latents(latents)
-
-         # 9. Run safety checker
-         image, has_nsfw_concept, flagged_images = self.run_safety_checker(
-             image, device, prompt_embeds.dtype, enable_safety_guidance
-         )
-
-         # 10. Convert to PIL
-         if output_type == "pil":
-             image = self.numpy_to_pil(image)
-             if flagged_images is not None:
-                 flagged_images = self.numpy_to_pil(flagged_images)
-
-         if not return_dict:
-             return (
-                 image,
-                 has_nsfw_concept,
-                 self._safety_text_concept if enable_safety_guidance else None,
-                 flagged_images,
-             )
-
-         return StableDiffusionSafePipelineOutput(
-             images=image,
-             nsfw_content_detected=has_nsfw_concept,
-             applied_safety_concept=self._safety_text_concept if enable_safety_guidance else None,
-             unsafe_images=flagged_images,
-         )
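
A standalone sketch of how this deleted pipeline was typically driven. It follows the example embedded in the docstring above; the checkpoint name and `SafetyConfig` presets come from there, while the explicit SLD keyword values shown are illustrative defaults, not canonical settings.

```python
# Minimal usage sketch for StableDiffusionPipelineSafe (assumes a diffusers
# release that still ships this pipeline and access to the AIML-TUDA checkpoint).
import torch
from diffusers import StableDiffusionPipelineSafe

pipe = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

# The SLD knobs map to the equations referenced in the comments above.
out = pipe(
    prompt="portrait photo of an astronaut",
    sld_guidance_scale=1000,  # strength of the safety direction (Eq. 6)
    sld_warmup_steps=10,      # guidance only applied after this step (Eq. 3)
    sld_momentum_scale=0.3,   # momentum added each step (Eq. 7)
    sld_mom_beta=0.4,         # momentum decay (Eq. 8)
)
image = out.images[0]
flagged = out.unsafe_images  # black-image replacements land here, if any
```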
spaces/Andy1621/uniformer_image_detection/configs/ghm/README.md DELETED
@@ -1,23 +0,0 @@
- # Gradient Harmonized Single-stage Detector
-
- ## Introduction
-
- [ALGORITHM]
-
- ```
- @inproceedings{li2019gradient,
-     title={Gradient Harmonized Single-stage Detector},
-     author={Li, Buyu and Liu, Yu and Wang, Xiaogang},
-     booktitle={AAAI Conference on Artificial Intelligence},
-     year={2019}
- }
- ```
-
- ## Results and Models
-
- | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
- | R-50-FPN | pytorch | 1x | 4.0 | 3.3 | 37.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ghm/retinanet_ghm_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_r50_fpn_1x_coco/retinanet_ghm_r50_fpn_1x_coco_20200130-a437fda3.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_r50_fpn_1x_coco/retinanet_ghm_r50_fpn_1x_coco_20200130_004213.log.json) |
- | R-101-FPN | pytorch | 1x | 6.0 | 4.4 | 39.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ghm/retinanet_ghm_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_r101_fpn_1x_coco/retinanet_ghm_r101_fpn_1x_coco_20200130-c148ee8f.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_r101_fpn_1x_coco/retinanet_ghm_r101_fpn_1x_coco_20200130_145259.log.json) |
- | X-101-32x4d-FPN | pytorch | 1x | 7.2 | 5.1 | 40.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco/retinanet_ghm_x101_32x4d_fpn_1x_coco_20200131-e4333bd0.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco/retinanet_ghm_x101_32x4d_fpn_1x_coco_20200131_113653.log.json) |
- | X-101-64x4d-FPN | pytorch | 1x | 10.3 | 5.2 | 41.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ghm/retinanet_ghm_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_x101_64x4d_fpn_1x_coco/retinanet_ghm_x101_64x4d_fpn_1x_coco_20200131-dd381cef.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_x101_64x4d_fpn_1x_coco/retinanet_ghm_x101_64x4d_fpn_1x_coco_20200131_113723.log.json) |
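
A hedged sketch of how a checkpoint from the table above was typically exercised. It uses the mmdet 2.x high-level API (`init_detector`/`inference_detector` exist with these signatures in that series); the config path comes from the table, and the local filenames are placeholders.

```python
# Run a GHM RetinaNet checkpoint from the table above (mmdet 2.x API).
from mmdet.apis import init_detector, inference_detector

config_file = "configs/ghm/retinanet_ghm_r50_fpn_1x_coco.py"
checkpoint_file = "retinanet_ghm_r50_fpn_1x_coco_20200130-a437fda3.pth"  # from the table

model = init_detector(config_file, checkpoint_file, device="cuda:0")
result = inference_detector(model, "demo.jpg")  # list of per-class bbox arrays
```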
spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/cross_entropy_loss.py DELETED
@@ -1,214 +0,0 @@
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
-
- from ..builder import LOSSES
- from .utils import weight_reduce_loss
-
-
- def cross_entropy(pred,
-                   label,
-                   weight=None,
-                   reduction='mean',
-                   avg_factor=None,
-                   class_weight=None):
-     """Calculate the CrossEntropy loss.
-
-     Args:
-         pred (torch.Tensor): The prediction with shape (N, C), C is the number
-             of classes.
-         label (torch.Tensor): The learning label of the prediction.
-         weight (torch.Tensor, optional): Sample-wise loss weight.
-         reduction (str, optional): The method used to reduce the loss.
-         avg_factor (int, optional): Average factor that is used to average
-             the loss. Defaults to None.
-         class_weight (list[float], optional): The weight for each class.
-
-     Returns:
-         torch.Tensor: The calculated loss
-     """
-     # element-wise losses
-     loss = F.cross_entropy(pred, label, weight=class_weight, reduction='none')
-
-     # apply weights and do the reduction
-     if weight is not None:
-         weight = weight.float()
-     loss = weight_reduce_loss(
-         loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
-
-     return loss
-
-
- def _expand_onehot_labels(labels, label_weights, label_channels):
-     bin_labels = labels.new_full((labels.size(0), label_channels), 0)
-     inds = torch.nonzero(
-         (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze()
-     if inds.numel() > 0:
-         bin_labels[inds, labels[inds]] = 1
-
-     if label_weights is None:
-         bin_label_weights = None
-     else:
-         bin_label_weights = label_weights.view(-1, 1).expand(
-             label_weights.size(0), label_channels)
-
-     return bin_labels, bin_label_weights
-
-
- def binary_cross_entropy(pred,
-                          label,
-                          weight=None,
-                          reduction='mean',
-                          avg_factor=None,
-                          class_weight=None):
-     """Calculate the binary CrossEntropy loss.
-
-     Args:
-         pred (torch.Tensor): The prediction with shape (N, 1).
-         label (torch.Tensor): The learning label of the prediction.
-         weight (torch.Tensor, optional): Sample-wise loss weight.
-         reduction (str, optional): The method used to reduce the loss.
-             Options are "none", "mean" and "sum".
-         avg_factor (int, optional): Average factor that is used to average
-             the loss. Defaults to None.
-         class_weight (list[float], optional): The weight for each class.
-
-     Returns:
-         torch.Tensor: The calculated loss
-     """
-     if pred.dim() != label.dim():
-         label, weight = _expand_onehot_labels(label, weight, pred.size(-1))
-
-     # weighted element-wise losses
-     if weight is not None:
-         weight = weight.float()
-     loss = F.binary_cross_entropy_with_logits(
-         pred, label.float(), pos_weight=class_weight, reduction='none')
-     # do the reduction for the weighted loss
-     loss = weight_reduce_loss(
-         loss, weight, reduction=reduction, avg_factor=avg_factor)
-
-     return loss
-
-
- def mask_cross_entropy(pred,
-                        target,
-                        label,
-                        reduction='mean',
-                        avg_factor=None,
-                        class_weight=None):
-     """Calculate the CrossEntropy loss for masks.
-
-     Args:
-         pred (torch.Tensor): The prediction with shape (N, C, *), C is the
-             number of classes. The trailing * indicates arbitrary shape.
-         target (torch.Tensor): The learning label of the prediction.
-         label (torch.Tensor): ``label`` indicates the class label of the
-             mask's corresponding object. It is used to select the mask of
-             the class the object belongs to when the mask prediction is not
-             class-agnostic.
-         reduction (str, optional): The method used to reduce the loss.
-             Options are "none", "mean" and "sum".
-         avg_factor (int, optional): Average factor that is used to average
-             the loss. Defaults to None.
-         class_weight (list[float], optional): The weight for each class.
-
-     Returns:
-         torch.Tensor: The calculated loss
-
-     Example:
-         >>> N, C = 3, 11
-         >>> H, W = 2, 2
-         >>> pred = torch.randn(N, C, H, W) * 1000
-         >>> target = torch.rand(N, H, W)
-         >>> label = torch.randint(0, C, size=(N,))
-         >>> reduction = 'mean'
-         >>> avg_factor = None
-         >>> class_weights = None
-         >>> loss = mask_cross_entropy(pred, target, label, reduction,
-         >>>                           avg_factor, class_weights)
-         >>> assert loss.shape == (1,)
-     """
-     # TODO: handle these two reserved arguments
-     assert reduction == 'mean' and avg_factor is None
-     num_rois = pred.size()[0]
-     inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device)
-     pred_slice = pred[inds, label].squeeze(1)
-     return F.binary_cross_entropy_with_logits(
-         pred_slice, target, weight=class_weight, reduction='mean')[None]
-
-
- @LOSSES.register_module()
- class CrossEntropyLoss(nn.Module):
-
-     def __init__(self,
-                  use_sigmoid=False,
-                  use_mask=False,
-                  reduction='mean',
-                  class_weight=None,
-                  loss_weight=1.0):
-         """CrossEntropyLoss.
-
-         Args:
-             use_sigmoid (bool, optional): Whether the prediction uses sigmoid
-                 instead of softmax. Defaults to False.
-             use_mask (bool, optional): Whether to use mask cross entropy loss.
-                 Defaults to False.
-             reduction (str, optional): The method used to reduce the loss.
-                 Options are "none", "mean" and "sum". Defaults to 'mean'.
-             class_weight (list[float], optional): Weight of each class.
-                 Defaults to None.
-             loss_weight (float, optional): Weight of the loss. Defaults to 1.0.
-         """
-         super(CrossEntropyLoss, self).__init__()
-         assert (use_sigmoid is False) or (use_mask is False)
-         self.use_sigmoid = use_sigmoid
-         self.use_mask = use_mask
-         self.reduction = reduction
-         self.loss_weight = loss_weight
-         self.class_weight = class_weight
-
-         if self.use_sigmoid:
-             self.cls_criterion = binary_cross_entropy
-         elif self.use_mask:
-             self.cls_criterion = mask_cross_entropy
-         else:
-             self.cls_criterion = cross_entropy
-
-     def forward(self,
-                 cls_score,
-                 label,
-                 weight=None,
-                 avg_factor=None,
-                 reduction_override=None,
-                 **kwargs):
-         """Forward function.
-
-         Args:
-             cls_score (torch.Tensor): The prediction.
-             label (torch.Tensor): The learning label of the prediction.
-             weight (torch.Tensor, optional): Sample-wise loss weight.
-             avg_factor (int, optional): Average factor that is used to average
-                 the loss. Defaults to None.
-             reduction_override (str, optional): The method used to override
-                 the configured reduction. Options are "none", "mean" and "sum".
-         Returns:
-             torch.Tensor: The calculated loss
-         """
-         assert reduction_override in (None, 'none', 'mean', 'sum')
-         reduction = (
-             reduction_override if reduction_override else self.reduction)
-         if self.class_weight is not None:
-             class_weight = cls_score.new_tensor(
-                 self.class_weight, device=cls_score.device)
-         else:
-             class_weight = None
-         loss_cls = self.loss_weight * self.cls_criterion(
-             cls_score,
-             label,
-             weight,
-             class_weight=class_weight,
-             reduction=reduction,
-             avg_factor=avg_factor,
-             **kwargs)
-         return loss_cls
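
A quick sanity check of the registered loss above; shapes follow the docstrings, and the import path is assumed from this file's location inside `mmdet/models/losses/`.

```python
# Exercise CrossEntropyLoss with (N, C) logits and (N,) class indices.
import torch
from mmdet.models.losses import CrossEntropyLoss

loss_fn = CrossEntropyLoss(use_sigmoid=False, loss_weight=1.0)
pred = torch.randn(4, 10)           # (N, C) logits
label = torch.randint(0, 10, (4,))  # (N,) class indices
loss = loss_fn(pred, label)
print(loss)  # scalar, mean-reduced by default
```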
spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py DELETED
@@ -1,12 +0,0 @@
- _base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py'
- model = dict(
-     pretrained='mmcls://mobilenet_v2',
-     backbone=dict(
-         _delete_=True,
-         type='MobileNetV2',
-         widen_factor=1.,
-         strides=(1, 2, 2, 1, 1, 1, 1),
-         dilations=(1, 1, 1, 2, 2, 4, 4),
-         out_indices=(1, 2, 4, 6)),
-     decode_head=dict(in_channels=320, c1_in_channels=24),
-     auxiliary_head=dict(in_channels=96))
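
How a config like this was typically consumed, as a hedged sketch against the mmseg 0.x API (`init_segmentor`/`inference_segmentor` exist in that series); the checkpoint path is a placeholder.

```python
# Build and run the MobileNetV2 DeepLabV3+ model defined by the config above.
from mmseg.apis import init_segmentor, inference_segmentor

cfg = "configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py"
model = init_segmentor(cfg, "checkpoint.pth", device="cuda:0")
seg_map = inference_segmentor(model, "scene.jpg")[0]  # HxW array of ADE20K class ids
```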
spaces/AndySAnker/DeepStruc/README.md DELETED
@@ -1,23 +0,0 @@
- ---
- title: DeepStruc App
- emoji: 🦀
- colorFrom: green
- colorTo: blue
- sdk: streamlit
- sdk_version: 1.10.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- python_version: 3.8
- ---
-
-
- This is an app for using DeepStruc, presented in ["DeepStruc: towards structure solution from pair distribution function data using deep generative models"](https://pubs.rsc.org/en/content/articlelanding/2023/dd/d2dd00086e).
-
- ```
- @article{kjaer2022deepstruc,
-     title={DeepStruc: Towards structure solution from pair distribution function data using deep generative models},
-     author={Kjær, Emil Thyge Skanning and Anker, Andy Sode and Weng, Marcus Nørgaard and Billinge, Simon J. L. and Selvan, Raghavendra and Jensen, Kirsten Marie Ørnsbjerg},
-     year={2022}
- }
- ```
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/contour_expand.py DELETED
@@ -1,49 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- import numpy as np
- import torch
-
- from ..utils import ext_loader
-
- ext_module = ext_loader.load_ext('_ext', ['contour_expand'])
-
-
- def contour_expand(kernel_mask, internal_kernel_label, min_kernel_area,
-                    kernel_num):
-     """Expand kernel contours so that foreground pixels are assigned into
-     instances.
-
-     Arguments:
-         kernel_mask (np.array or Tensor): The instance kernel mask with
-             size hxw.
-         internal_kernel_label (np.array or Tensor): The instance internal
-             kernel label with size hxw.
-         min_kernel_area (int): The minimum kernel area.
-         kernel_num (int): The instance kernel number.
-
-     Returns:
-         label (list): The instance index map with size hxw.
-     """
-     assert isinstance(kernel_mask, (torch.Tensor, np.ndarray))
-     assert isinstance(internal_kernel_label, (torch.Tensor, np.ndarray))
-     assert isinstance(min_kernel_area, int)
-     assert isinstance(kernel_num, int)
-
-     if isinstance(kernel_mask, np.ndarray):
-         kernel_mask = torch.from_numpy(kernel_mask)
-     if isinstance(internal_kernel_label, np.ndarray):
-         internal_kernel_label = torch.from_numpy(internal_kernel_label)
-
-     if torch.__version__ == 'parrots':
-         if kernel_mask.shape[0] == 0 or internal_kernel_label.shape[0] == 0:
-             label = []
-         else:
-             label = ext_module.contour_expand(
-                 kernel_mask,
-                 internal_kernel_label,
-                 min_kernel_area=min_kernel_area,
-                 kernel_num=kernel_num)
-             label = label.tolist()
-     else:
-         label = ext_module.contour_expand(kernel_mask, internal_kernel_label,
-                                           min_kernel_area, kernel_num)
-     return label
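
A hedged driver for `contour_expand`. It requires mmcv built with the `_ext` ops (here vendored under `annotator.uniformer.mmcv.ops`); the toy mask, instance ids, and the `kernel_num` convention (number of labels including background) are assumptions for illustration.

```python
# Grow a shrunken instance "kernel" back out to its full foreground blob.
import numpy as np
from mmcv.ops import contour_expand  # vendored as annotator.uniformer.mmcv.ops here

H = W = 8
kernel_mask = np.zeros((H, W), dtype=np.uint8)
kernel_mask[2:6, 2:6] = 1                  # one foreground blob
internal_kernel_label = np.zeros((H, W), dtype=np.int32)
internal_kernel_label[3:5, 3:5] = 1        # its kernel, instance id 1

label = contour_expand(kernel_mask, internal_kernel_label,
                       min_kernel_area=0, kernel_num=2)
print(np.asarray(label))  # per-pixel instance indices over the blob
```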
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/distributions/distributions.py DELETED
@@ -1,92 +0,0 @@
- import torch
- import numpy as np
-
-
- class AbstractDistribution:
-     def sample(self):
-         raise NotImplementedError()
-
-     def mode(self):
-         raise NotImplementedError()
-
-
- class DiracDistribution(AbstractDistribution):
-     def __init__(self, value):
-         self.value = value
-
-     def sample(self):
-         return self.value
-
-     def mode(self):
-         return self.value
-
-
- class DiagonalGaussianDistribution(object):
-     def __init__(self, parameters, deterministic=False):
-         self.parameters = parameters
-         self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
-         self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
-         self.deterministic = deterministic
-         self.std = torch.exp(0.5 * self.logvar)
-         self.var = torch.exp(self.logvar)
-         if self.deterministic:
-             self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
-
-     def sample(self):
-         x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
-         return x
-
-     def kl(self, other=None):
-         if self.deterministic:
-             return torch.Tensor([0.])
-         else:
-             if other is None:
-                 return 0.5 * torch.sum(torch.pow(self.mean, 2)
-                                        + self.var - 1.0 - self.logvar,
-                                        dim=[1, 2, 3])
-             else:
-                 return 0.5 * torch.sum(
-                     torch.pow(self.mean - other.mean, 2) / other.var
-                     + self.var / other.var - 1.0 - self.logvar + other.logvar,
-                     dim=[1, 2, 3])
-
-     def nll(self, sample, dims=[1, 2, 3]):
-         if self.deterministic:
-             return torch.Tensor([0.])
-         logtwopi = np.log(2.0 * np.pi)
-         return 0.5 * torch.sum(
-             logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
-             dim=dims)
-
-     def mode(self):
-         return self.mean
-
-
- def normal_kl(mean1, logvar1, mean2, logvar2):
-     """
-     source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
-     Compute the KL divergence between two gaussians.
-     Shapes are automatically broadcasted, so batches can be compared to
-     scalars, among other use cases.
-     """
-     tensor = None
-     for obj in (mean1, logvar1, mean2, logvar2):
-         if isinstance(obj, torch.Tensor):
-             tensor = obj
-             break
-     assert tensor is not None, "at least one argument must be a Tensor"
-
-     # Force variances to be Tensors. Broadcasting helps convert scalars to
-     # Tensors, but it does not work for torch.exp().
-     logvar1, logvar2 = [
-         x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
-         for x in (logvar1, logvar2)
-     ]
-
-     return 0.5 * (
-         -1.0
-         + logvar2
-         - logvar1
-         + torch.exp(logvar1 - logvar2)
-         + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
-     )
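
A round-trip sanity check for `DiagonalGaussianDistribution` above: the constructor packs mean and log-variance along `dim=1`, so an 8-channel parameter tensor yields a 4-channel Gaussian.

```python
# Shapes follow torch.chunk(parameters, 2, dim=1) in the constructor above.
import torch

params = torch.randn(2, 8, 4, 4)            # -> mean, logvar each (2, 4, 4, 4)
dist = DiagonalGaussianDistribution(params)
z = dist.sample()                            # reparameterized draw
kl = dist.kl()                               # KL to a standard normal, per batch element
print(z.shape, kl.shape)                     # torch.Size([2, 4, 4, 4]) torch.Size([2])
```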
spaces/Anthony-Ml/covid_predictor/utils.py DELETED
@@ -1,100 +0,0 @@
- import numpy as np
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- import matplotlib.cm
- from PIL import Image
-
- # Adapted from: https://colab.research.google.com/github/kevinzakka/clip_playground/blob/main/CLIP_GradCAM_Visualization.ipynb
- class Hook:
-     """Attaches to a module and records its activations and gradients."""
-
-     def __init__(self, module: nn.Module):
-         self.data = None
-         self.hook = module.register_forward_hook(self.save_grad)
-
-     def save_grad(self, module, input, output):
-         self.data = output
-         output.requires_grad_(True)
-         output.retain_grad()
-
-     def __enter__(self):
-         return self
-
-     def __exit__(self, exc_type, exc_value, exc_traceback):
-         self.hook.remove()
-
-     @property
-     def activation(self) -> torch.Tensor:
-         return self.data
-
-     @property
-     def gradient(self) -> torch.Tensor:
-         return self.data.grad
-
-
- # Reference: https://arxiv.org/abs/1610.02391
- def gradCAM(
-     model: nn.Module,
-     input: torch.Tensor,
-     target: torch.Tensor,
-     layer: nn.Module
- ) -> torch.Tensor:
-     # Zero out any gradients at the input.
-     if input.grad is not None:
-         input.grad.data.zero_()
-
-     # Disable gradient settings.
-     requires_grad = {}
-     for name, param in model.named_parameters():
-         requires_grad[name] = param.requires_grad
-         param.requires_grad_(False)
-
-     # Attach a hook to the model at the desired layer.
-     assert isinstance(layer, nn.Module)
-     with Hook(layer) as hook:
-         # Do a forward and backward pass.
-         output = model(input)
-         output.backward(target)
-
-         grad = hook.gradient.float()
-         act = hook.activation.float()
-
-         # Global average pool gradient across spatial dimension
-         # to obtain importance weights.
-         alpha = grad.mean(dim=(2, 3), keepdim=True)
-         # Weighted combination of activation maps over channel
-         # dimension.
-         gradcam = torch.sum(act * alpha, dim=1, keepdim=True)
-         # We only want neurons with positive influence so we
-         # clamp any negative ones.
-         gradcam = torch.clamp(gradcam, min=0)
-
-     # Resize gradcam to input resolution.
-     gradcam = F.interpolate(
-         gradcam,
-         input.shape[2:],
-         mode='bicubic',
-         align_corners=False)
-
-     # Restore gradient settings.
-     for name, param in model.named_parameters():
-         param.requires_grad_(requires_grad[name])
-
-     return gradcam
-
-
- # Modified from: https://github.com/salesforce/ALBEF/blob/main/visualization.ipynb
- def getAttMap(img, attn_map):
-     # Normalize attention map
-     attn_map = attn_map - attn_map.min()
-     if attn_map.max() > 0:
-         attn_map = attn_map / attn_map.max()
-
-     H = matplotlib.cm.jet(attn_map)
-     H = (H * 255).astype(np.uint8)[:, :, :3]
-     img_heatmap = Image.fromarray(H)
-     img_heatmap = img_heatmap.resize((256, 256))
-
-     return Image.blend(
-         img.resize((256, 256)), img_heatmap, 0.4)
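
A hedged end-to-end sketch reusing the `gradCAM`/`getAttMap` helpers above on a torchvision ResNet-50; the image path and target class index are placeholders, not anything from this repo.

```python
# Grad-CAM over the last conv stage of a ResNet-50, visualized with getAttMap.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()
img = Image.open("input.jpg").convert("RGB")
x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(img).unsqueeze(0)

target = torch.zeros(1, 1000)
target[0, 281] = 1  # one-hot backprop target, e.g. the "tabby cat" logit

cam = gradCAM(model, x, target, layer=model.layer4)  # (1, 1, 224, 224)
overlay = getAttMap(img, cam.squeeze().detach().numpy())
overlay.save("gradcam.png")
```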
spaces/Anustup/NS_AI_LABS/app-network.py DELETED
@@ -1,3 +0,0 @@
- # Run the app with no audio file restrictions, and make it available on the network
- from app import create_ui
- create_ui(-1, server_name="0.0.0.0")
spaces/ArtGAN/Segment-Anything-Video/demo.py DELETED
@@ -1,110 +0,0 @@
- from metaseg import (
-     SahiAutoSegmentation,
-     SegAutoMaskPredictor,
-     SegManualMaskPredictor,
-     sahi_sliced_predict,
- )
-
- # For image
-
-
- def automask_image_app(
-     image_path, model_type, points_per_side, points_per_batch, min_area
- ):
-     SegAutoMaskPredictor().image_predict(
-         source=image_path,
-         model_type=model_type,  # vit_l, vit_h, vit_b
-         points_per_side=points_per_side,
-         points_per_batch=points_per_batch,
-         min_area=min_area,
-         output_path="output.png",
-         show=False,
-         save=True,
-     )
-     return "output.png"
-
-
- # For video
-
-
- def automask_video_app(
-     video_path, model_type, points_per_side, points_per_batch, min_area
- ):
-     SegAutoMaskPredictor().video_predict(
-         source=video_path,
-         model_type=model_type,  # vit_l, vit_h, vit_b
-         points_per_side=points_per_side,
-         points_per_batch=points_per_batch,
-         min_area=min_area,
-         output_path="output.mp4",
-     )
-     return "output.mp4"
-
-
- # For manual box and point selection
-
-
- def manual_app(
-     image_path,
-     model_type,
-     input_point,
-     input_label,
-     input_box,
-     multimask_output,
-     random_color,
- ):
-     SegManualMaskPredictor().image_predict(
-         source=image_path,
-         model_type=model_type,  # vit_l, vit_h, vit_b
-         input_point=input_point,
-         input_label=input_label,
-         input_box=input_box,
-         multimask_output=multimask_output,
-         random_color=random_color,
-         output_path="output.png",
-         show=False,
-         save=True,
-     )
-     return "output.png"
-
-
- # For sahi sliced prediction
-
-
- def sahi_autoseg_app(
-     image_path,
-     sam_model_type,
-     detection_model_type,
-     detection_model_path,
-     conf_th,
-     image_size,
-     slice_height,
-     slice_width,
-     overlap_height_ratio,
-     overlap_width_ratio,
- ):
-     boxes = sahi_sliced_predict(
-         image_path=image_path,
-         # yolov8, detectron2, mmdetection, torchvision
-         detection_model_type=detection_model_type,
-         detection_model_path=detection_model_path,
-         conf_th=conf_th,
-         image_size=image_size,
-         slice_height=slice_height,
-         slice_width=slice_width,
-         overlap_height_ratio=overlap_height_ratio,
-         overlap_width_ratio=overlap_width_ratio,
-     )
-
-     SahiAutoSegmentation().image_predict(
-         source=image_path,
-         model_type=sam_model_type,
-         input_box=boxes,
-         multimask_output=False,
-         random_color=False,
-         show=False,
-         save=True,
-     )
-
-     return "output.png"
spaces/Artrajz/vits-simple-api/vits/text/japanese.py DELETED
@@ -1,169 +0,0 @@
1
- import os
2
- import re
3
- from unidecode import unidecode
4
- import pyopenjtalk
-
- from config import ABS_PATH
- from utils.download import download_and_verify
-
- URLS = [
-     "https://github.com/r9y9/open_jtalk/releases/download/v1.11.1/open_jtalk_dic_utf_8-1.11.tar.gz",
-     "https://ghproxy.com/https://github.com/r9y9/open_jtalk/releases/download/v1.11.1/open_jtalk_dic_utf_8-1.11.tar.gz",
- ]
- install_path = os.path.dirname(pyopenjtalk.__file__)
- dict_path = os.path.join(install_path, "open_jtalk_dic_utf_8-1.11", "char.bin")
- TARGET_PATH = os.path.join(ABS_PATH, "open_jtalk_dic_utf_8-1.11.tar.gz")
- EXTRACT_DESTINATION = os.path.join(install_path, "")
- EXPECTED_MD5 = None
-
- if not os.path.exists(dict_path):
-     success, message = download_and_verify(URLS, TARGET_PATH, EXPECTED_MD5, EXTRACT_DESTINATION)
-
- # Regular expression matching Japanese without punctuation marks:
- _japanese_characters = re.compile(
-     r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
- # Regular expression matching non-Japanese characters or punctuation marks:
- _japanese_marks = re.compile(
-     r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
- # List of (symbol, Japanese) pairs for marks:
- _symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('%', 'パーセント')
- ]]
-
- # List of (romaji, ipa) pairs for marks:
- _romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('ts', 'ʦ'),
-     ('u', 'ɯ'),
-     ('j', 'ʥ'),
-     ('y', 'j'),
-     ('ni', 'n^i'),
-     ('nj', 'n^'),
-     ('hi', 'çi'),
-     ('hj', 'ç'),
-     ('f', 'ɸ'),
-     ('I', 'i*'),
-     ('U', 'ɯ*'),
-     ('r', 'ɾ')
- ]]
-
- # List of (romaji, ipa2) pairs for marks:
- _romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('u', 'ɯ'),
-     ('ʧ', 'tʃ'),
-     ('j', 'dʑ'),
-     ('y', 'j'),
-     ('ni', 'n^i'),
-     ('nj', 'n^'),
-     ('hi', 'çi'),
-     ('hj', 'ç'),
-     ('f', 'ɸ'),
-     ('I', 'i*'),
-     ('U', 'ɯ*'),
-     ('r', 'ɾ')
- ]]
-
- # List of (consonant, sokuon) pairs:
- _real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
-     (r'Q([↑↓]*[kg])', r'k#\1'),
-     (r'Q([↑↓]*[tdjʧ])', r't#\1'),
-     (r'Q([↑↓]*[sʃ])', r's\1'),
-     (r'Q([↑↓]*[pb])', r'p#\1')
- ]]
-
- # List of (consonant, hatsuon) pairs:
- _real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
-     (r'N([↑↓]*[pbm])', r'm\1'),
-     (r'N([↑↓]*[ʧʥj])', r'n^\1'),
-     (r'N([↑↓]*[tdn])', r'n\1'),
-     (r'N([↑↓]*[kg])', r'ŋ\1')
- ]]
-
-
- def symbols_to_japanese(text):
-     for regex, replacement in _symbols_to_japanese:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def japanese_to_romaji_with_accent(text):
-     '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
-     text = symbols_to_japanese(text)
-     sentences = re.split(_japanese_marks, text)
-     marks = re.findall(_japanese_marks, text)
-     text = ''
-     for i, sentence in enumerate(sentences):
-         if re.match(_japanese_characters, sentence):
-             if text != '':
-                 text += ' '
-             labels = pyopenjtalk.extract_fullcontext(sentence)
-             for n, label in enumerate(labels):
-                 phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
-                 if phoneme not in ['sil', 'pau']:
-                     text += phoneme.replace('ch', 'ʧ').replace('sh', 'ʃ').replace('cl', 'Q')
-                 else:
-                     continue
-                 # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
-                 a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
-                 a2 = int(re.search(r"\+(\d+)\+", label).group(1))
-                 a3 = int(re.search(r"\+(\d+)/", label).group(1))
-                 if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
-                     a2_next = -1
-                 else:
-                     a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
-                 # Accent phrase boundary
-                 if a3 == 1 and a2_next == 1:
-                     text += ' '
-                 # Falling
-                 elif a1 == 0 and a2_next == a2 + 1:
-                     text += '↓'
-                 # Rising
-                 elif a2 == 1 and a2_next == 2:
-                     text += '↑'
-         if i < len(marks):
-             text += unidecode(marks[i]).replace(' ', '')
-     return text
-
-
- def get_real_sokuon(text):
-     for regex, replacement in _real_sokuon:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def get_real_hatsuon(text):
-     for regex, replacement in _real_hatsuon:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def japanese_to_ipa(text):
-     text = japanese_to_romaji_with_accent(text).replace('...', '…')
-     text = re.sub(r'([aiueo])\1+', lambda x: x.group(0)[0] + 'ː' * (len(x.group(0)) - 1), text)
-     text = get_real_sokuon(text)
-     text = get_real_hatsuon(text)
-     for regex, replacement in _romaji_to_ipa:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def japanese_to_ipa2(text):
-     text = japanese_to_romaji_with_accent(text).replace('...', '…')
-     text = get_real_sokuon(text)
-     text = get_real_hatsuon(text)
-     for regex, replacement in _romaji_to_ipa2:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def japanese_to_ipa3(text):
-     text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
-         'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
-     text = re.sub(r'([aiɯeo])\1+', lambda x: x.group(0)[0] + 'ː' * (len(x.group(0)) - 1), text)
-     text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
-     return text
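For orientation, a minimal usage sketch of the converters defined above. It is hedged: the `text.japanese` import path and the sample sentence are assumptions, and it presumes the Open JTalk dictionary download at the top of the file has completed.

import importlib

# Hypothetical import path for the module shown in the diff above.
japanese = importlib.import_module("text.japanese")

sample = "こんにちは、世界。"  # arbitrary sample sentence
romaji = japanese.japanese_to_romaji_with_accent(sample)  # romaji with ↑/↓ pitch-accent marks
ipa = japanese.japanese_to_ipa(sample)                    # adds vowel length, then _romaji_to_ipa mapping
ipa3 = japanese.japanese_to_ipa3(sample)                  # stricter IPA variant (ȵ, ɕ, aspiration ʰ)
print(romaji, ipa, ipa3)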
spaces/Ash2219/AIchatbot/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: AIchatbot
- emoji: 💻
- colorFrom: pink
- colorTo: blue
- sdk: gradio
- sdk_version: 3.39.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py DELETED
@@ -1,163 +0,0 @@
- import configparser
- import logging
- import os
- from typing import List, Optional, Tuple
-
- from pip._internal.exceptions import BadCommand, InstallationError
- from pip._internal.utils.misc import HiddenText, display_path
- from pip._internal.utils.subprocess import make_command
- from pip._internal.utils.urls import path_to_url
- from pip._internal.vcs.versioncontrol import (
-     RevOptions,
-     VersionControl,
-     find_path_to_project_root_from_repo_root,
-     vcs,
- )
-
- logger = logging.getLogger(__name__)
-
-
- class Mercurial(VersionControl):
-     name = "hg"
-     dirname = ".hg"
-     repo_name = "clone"
-     schemes = (
-         "hg+file",
-         "hg+http",
-         "hg+https",
-         "hg+ssh",
-         "hg+static-http",
-     )
-
-     @staticmethod
-     def get_base_rev_args(rev: str) -> List[str]:
-         return [rev]
-
-     def fetch_new(
-         self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
-     ) -> None:
-         rev_display = rev_options.to_display()
-         logger.info(
-             "Cloning hg %s%s to %s",
-             url,
-             rev_display,
-             display_path(dest),
-         )
-         if verbosity <= 0:
-             flags: Tuple[str, ...] = ("--quiet",)
-         elif verbosity == 1:
-             flags = ()
-         elif verbosity == 2:
-             flags = ("--verbose",)
-         else:
-             flags = ("--verbose", "--debug")
-         self.run_command(make_command("clone", "--noupdate", *flags, url, dest))
-         self.run_command(
-             make_command("update", *flags, rev_options.to_args()),
-             cwd=dest,
-         )
-
-     def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
-         repo_config = os.path.join(dest, self.dirname, "hgrc")
-         config = configparser.RawConfigParser()
-         try:
-             config.read(repo_config)
-             config.set("paths", "default", url.secret)
-             with open(repo_config, "w") as config_file:
-                 config.write(config_file)
-         except (OSError, configparser.NoSectionError) as exc:
-             logger.warning("Could not switch Mercurial repository to %s: %s", url, exc)
-         else:
-             cmd_args = make_command("update", "-q", rev_options.to_args())
-             self.run_command(cmd_args, cwd=dest)
-
-     def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
-         self.run_command(["pull", "-q"], cwd=dest)
-         cmd_args = make_command("update", "-q", rev_options.to_args())
-         self.run_command(cmd_args, cwd=dest)
-
-     @classmethod
-     def get_remote_url(cls, location: str) -> str:
-         url = cls.run_command(
-             ["showconfig", "paths.default"],
-             show_stdout=False,
-             stdout_only=True,
-             cwd=location,
-         ).strip()
-         if cls._is_local_repository(url):
-             url = path_to_url(url)
-         return url.strip()
-
-     @classmethod
-     def get_revision(cls, location: str) -> str:
-         """
-         Return the repository-local changeset revision number, as an integer.
-         """
-         current_revision = cls.run_command(
-             ["parents", "--template={rev}"],
-             show_stdout=False,
-             stdout_only=True,
-             cwd=location,
-         ).strip()
-         return current_revision
-
-     @classmethod
-     def get_requirement_revision(cls, location: str) -> str:
-         """
-         Return the changeset identification hash, as a 40-character
-         hexadecimal string
-         """
-         current_rev_hash = cls.run_command(
-             ["parents", "--template={node}"],
-             show_stdout=False,
-             stdout_only=True,
-             cwd=location,
-         ).strip()
-         return current_rev_hash
-
-     @classmethod
-     def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
-         """Always assume the versions don't match"""
-         return False
-
-     @classmethod
-     def get_subdirectory(cls, location: str) -> Optional[str]:
-         """
-         Return the path to Python project root, relative to the repo root.
-         Return None if the project root is in the repo root.
-         """
-         # find the repo root
-         repo_root = cls.run_command(
-             ["root"], show_stdout=False, stdout_only=True, cwd=location
-         ).strip()
-         if not os.path.isabs(repo_root):
-             repo_root = os.path.abspath(os.path.join(location, repo_root))
-         return find_path_to_project_root_from_repo_root(location, repo_root)
-
-     @classmethod
-     def get_repository_root(cls, location: str) -> Optional[str]:
-         loc = super().get_repository_root(location)
-         if loc:
-             return loc
-         try:
-             r = cls.run_command(
-                 ["root"],
-                 cwd=location,
-                 show_stdout=False,
-                 stdout_only=True,
-                 on_returncode="raise",
-                 log_failed_cmd=False,
-             )
-         except BadCommand:
-             logger.debug(
-                 "could not determine if %s is under hg control "
-                 "because hg is not available",
-                 location,
-             )
-             return None
-         except InstallationError:
-             return None
-         return os.path.normpath(r.rstrip("\r\n"))
-
-
- vcs.register(Mercurial)
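For orientation, a minimal sketch of how this backend is reached once registered. The `get_backend` lookup is an assumption about pip's internal `VcsSupport` registry shown above, and pip internals are not a stable public API:

from pip._internal.vcs.versioncontrol import vcs

# Assumed lookup by backend name; returns the registered Mercurial instance.
backend = vcs.get_backend("hg")
if backend is not None:
    print(backend.name)     # "hg"
    print(backend.schemes)  # ("hg+file", "hg+http", ...)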
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_fileno.py DELETED
@@ -1,24 +0,0 @@
- from __future__ import annotations
-
- from typing import IO, Callable
-
-
- def get_fileno(file_like: IO[str]) -> int | None:
-     """Get fileno() from a file, accounting for poorly implemented file-like objects.
-
-     Args:
-         file_like (IO): A file-like object.
-
-     Returns:
-         int | None: The result of fileno if available, or None if operation failed.
-     """
-     fileno: Callable[[], int] | None = getattr(file_like, "fileno", None)
-     if fileno is not None:
-         try:
-             return fileno()
-         except Exception:
-             # `fileno` is documented as potentially raising an OSError.
-             # Alas, from the issues, there are so many poorly implemented file-like objects,
-             # that `fileno()` can raise just about anything.
-             return None
-     return None
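For orientation, a short standard-library-only sketch of the helper above (assuming `get_fileno` has been imported from this module):

import io
import sys

print(get_fileno(sys.stdout))     # typically 1 when stdout is a real terminal
print(get_fileno(io.StringIO()))  # None: StringIO.fileno() raises, which the helper swallows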
spaces/Awesimo/jojogan/op/upfirdn2d.cpp DELETED
@@ -1,23 +0,0 @@
- #include <torch/extension.h>
-
-
- torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
-                            int up_x, int up_y, int down_x, int down_y,
-                            int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
- #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
- #define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
- #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
- torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
-                         int up_x, int up_y, int down_x, int down_y,
-                         int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
-     CHECK_CUDA(input);
-     CHECK_CUDA(kernel);
-
-     return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
- }
-
- PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-     m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
- }
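For orientation, a hedged sketch of how StyleGAN-style repos typically JIT-compile and call this binding from Python. The companion `upfirdn2d_kernel.cu` filename and the (N*C, H, W, 1) input layout are assumptions drawn from common upfirdn2d wrappers, not from this diff:

import torch
from torch.utils.cpp_extension import load

# JIT-compile the C++ binding together with its assumed CUDA kernel source.
ext = load(name="upfirdn2d", sources=["op/upfirdn2d.cpp", "op/upfirdn2d_kernel.cu"])

x = torch.randn(1, 3, 64, 64, device="cuda")
k = torch.ones(4, 4, device="cuda") / 16.0  # simple box filter kernel
# Common wrappers flatten NCHW input to (N*C, H, W, 1) before the raw op:
y = ext.upfirdn2d(x.reshape(-1, 64, 64, 1), k, 2, 2, 1, 1, 1, 1, 1, 1)  # 2x upsample, pad 1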
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/__init__.py DELETED
@@ -1,14 +0,0 @@
- from .modeling.meta_arch.centernet_detector import CenterNetDetector
- from .modeling.dense_heads.centernet import CenterNet
- from .modeling.roi_heads.custom_roi_heads import CustomROIHeads, CustomCascadeROIHeads
-
- from .modeling.backbone.fpn_p5 import build_p67_resnet_fpn_backbone
- from .modeling.backbone.dla import build_dla_backbone
- from .modeling.backbone.dlafpn import build_dla_fpn3_backbone
- from .modeling.backbone.bifpn import build_resnet_bifpn_backbone
- from .modeling.backbone.bifpn_fcos import build_fcos_resnet_bifpn_backbone
- from .modeling.backbone.res2net import build_p67_res2net_fpn_backbone
-
- from .data.datasets.objects365 import categories_v1
- from .data.datasets.coco import _PREDEFINED_SPLITS_COCO
- from .data.datasets import nuimages
spaces/Banbri/zcvzcv/src/components/ui/use-toast.ts DELETED
@@ -1,192 +0,0 @@
- // Inspired by react-hot-toast library
- import * as React from "react"
-
- import type {
-   ToastActionElement,
-   ToastProps,
- } from "@/components/ui/toast"
-
- const TOAST_LIMIT = 1
- const TOAST_REMOVE_DELAY = 1000000
-
- type ToasterToast = ToastProps & {
-   id: string
-   title?: React.ReactNode
-   description?: React.ReactNode
-   action?: ToastActionElement
- }
-
- const actionTypes = {
-   ADD_TOAST: "ADD_TOAST",
-   UPDATE_TOAST: "UPDATE_TOAST",
-   DISMISS_TOAST: "DISMISS_TOAST",
-   REMOVE_TOAST: "REMOVE_TOAST",
- } as const
-
- let count = 0
-
- function genId() {
-   count = (count + 1) % Number.MAX_VALUE
-   return count.toString()
- }
-
- type ActionType = typeof actionTypes
-
- type Action =
-   | {
-       type: ActionType["ADD_TOAST"]
-       toast: ToasterToast
-     }
-   | {
-       type: ActionType["UPDATE_TOAST"]
-       toast: Partial<ToasterToast>
-     }
-   | {
-       type: ActionType["DISMISS_TOAST"]
-       toastId?: ToasterToast["id"]
-     }
-   | {
-       type: ActionType["REMOVE_TOAST"]
-       toastId?: ToasterToast["id"]
-     }
-
- interface State {
-   toasts: ToasterToast[]
- }
-
- const toastTimeouts = new Map<string, ReturnType<typeof setTimeout>>()
-
- const addToRemoveQueue = (toastId: string) => {
-   if (toastTimeouts.has(toastId)) {
-     return
-   }
-
-   const timeout = setTimeout(() => {
-     toastTimeouts.delete(toastId)
-     dispatch({
-       type: "REMOVE_TOAST",
-       toastId: toastId,
-     })
-   }, TOAST_REMOVE_DELAY)
-
-   toastTimeouts.set(toastId, timeout)
- }
-
- export const reducer = (state: State, action: Action): State => {
-   switch (action.type) {
-     case "ADD_TOAST":
-       return {
-         ...state,
-         toasts: [action.toast, ...state.toasts].slice(0, TOAST_LIMIT),
-       }
-
-     case "UPDATE_TOAST":
-       return {
-         ...state,
-         toasts: state.toasts.map((t) =>
-           t.id === action.toast.id ? { ...t, ...action.toast } : t
-         ),
-       }
-
-     case "DISMISS_TOAST": {
-       const { toastId } = action
-
-       // ! Side effects ! - This could be extracted into a dismissToast() action,
-       // but I'll keep it here for simplicity
-       if (toastId) {
-         addToRemoveQueue(toastId)
-       } else {
-         state.toasts.forEach((toast) => {
-           addToRemoveQueue(toast.id)
-         })
-       }
-
-       return {
-         ...state,
-         toasts: state.toasts.map((t) =>
-           t.id === toastId || toastId === undefined
-             ? {
-                 ...t,
-                 open: false,
-               }
-             : t
-         ),
-       }
-     }
-     case "REMOVE_TOAST":
-       if (action.toastId === undefined) {
-         return {
-           ...state,
-           toasts: [],
-         }
-       }
-       return {
-         ...state,
-         toasts: state.toasts.filter((t) => t.id !== action.toastId),
-       }
-   }
- }
-
- const listeners: Array<(state: State) => void> = []
-
- let memoryState: State = { toasts: [] }
-
- function dispatch(action: Action) {
-   memoryState = reducer(memoryState, action)
-   listeners.forEach((listener) => {
-     listener(memoryState)
-   })
- }
-
- type Toast = Omit<ToasterToast, "id">
-
- function toast({ ...props }: Toast) {
-   const id = genId()
-
-   const update = (props: ToasterToast) =>
-     dispatch({
-       type: "UPDATE_TOAST",
-       toast: { ...props, id },
-     })
-   const dismiss = () => dispatch({ type: "DISMISS_TOAST", toastId: id })
-
-   dispatch({
-     type: "ADD_TOAST",
-     toast: {
-       ...props,
-       id,
-       open: true,
-       onOpenChange: (open) => {
-         if (!open) dismiss()
-       },
-     },
-   })
-
-   return {
-     id: id,
-     dismiss,
-     update,
-   }
- }
-
- function useToast() {
-   const [state, setState] = React.useState<State>(memoryState)
-
-   React.useEffect(() => {
-     listeners.push(setState)
-     return () => {
-       const index = listeners.indexOf(setState)
-       if (index > -1) {
-         listeners.splice(index, 1)
-       }
-     }
-   }, [state])
-
-   return {
-     ...state,
-     toast,
-     dismiss: (toastId?: string) => dispatch({ type: "DISMISS_TOAST", toastId }),
-   }
- }
-
- export { useToast, toast }
spaces/Benson/text-generation/Examples/Apksum Choque De Clanes.md DELETED
@@ -1,98 +0,0 @@
-
- <h1>Clash of Clans Android Mod APK: Everything You Need to Know</h1>
- <p>If you are a fan of strategy games, you have probably heard of Clash of Clans, one of the most popular mobile games in the world. But did you know there is a way to enjoy the game with unlimited resources, gems, and features? In this article we will tell you everything you need to know about the Clash of Clans Android Mod APK: how to download and install it, and how to play it like a pro.</p>
- <h2>What is Clash of Clans?</h2>
- <h3>A brief introduction to the game</h3>
- <p>Clash of Clans is a freemium strategy game developed by Supercell, a Finnish company that also created other hit games such as Hay Day, Boom Beach, and Brawl Stars. The game was released in 2012 for iOS and in 2013 for Android devices. Since then it has gained millions of players and fans around the world.</p>
- <h2>apksum clash of clans</h2><br /><p><b><b>DOWNLOAD</b> >>> <a href="https://bltlly.com/2v6LpS">https://bltlly.com/2v6LpS</a></b></p><br /><br />
- <p>The game is set in a fantasy world where you can build your own village, train your troops, join a clan, and fight other players in epic battles. You can also explore the map, collect resources, upgrade your buildings and troops, and take part in various events and challenges.</p>
- <h3>The main features of Clash of Clans</h3>
- <p>Some of the main features that make Clash of Clans so addictive and fun are:</p>
- <ul>
- <li>The variety of troops and spells you can use in different combinations and strategies.</li>
- <li>The clan system, which lets you join forces with other players, chat with them, donate and receive troops, and compete in clan wars and clan games.</li>
- <li>The builder base, which lets you create a second village with different buildings and troops and face other players in battles.</li>
- <li>The season pass, which gives you access to exclusive rewards, perks, and challenges every month.</li>
- <li>The regular updates that bring new content, features, and improvements to the game.</li>
- </ul>
- <h2>What is the Clash of Clans Android Mod APK?</h2>
-
- <p>As you can see, Clash of Clans is a very engaging and enjoyable game, but it also has some limitations and drawbacks. For example, you have to wait long periods of time to upgrade your buildings and troops, or spend real money to buy gems and speed up the process. You also have to deal with the risk of losing your resources and trophies when you are attacked by other players.</p>
- <p>That is why some players prefer to use the Clash of Clans Android Mod APK, a modified version of the original game that gives you access to unlimited resources, gems, and features. With the mod apk, you can:</p>
- <ul>
- <li>Build your village without restrictions or costs.</li>
- <li>Train your troops instantly and fill your army camps with whatever troops you want.</li>
- <li>Upgrade your buildings and troops to the maximum level in seconds.</li>
- <li>Unlock all the heroes and their abilities without spending any gems.</li>
- <li>Use any spells and siege machines in your attacks.</li>
- <li>Join any clan or create your own without any requirements.</li>
- <li>Play offline or online without ads or interruptions.</li>
- </ul>
- <h3>The risks of using the mod apk</h3>
- <p>However, using the Clash of Clans Android Mod APK also comes with some risks and drawbacks you should be aware of before downloading it. Some of them are:</p>
- <ul>
- <li>The mod apk is not official or authorized by Supercell, the developer of Clash of Clans. This means it may contain viruses, malware, or spyware that can damage your device or steal your personal information.</li>
- <li>The mod apk is not compatible with the original game, so you cannot play with players who use the official version. You can only play with other players who use the same mod apk as you.</li>
-
- <li>The mod apk is against the terms of service and fair play policy of Clash of Clans. This means that if you are caught using it, you may face serious consequences such as account suspension, account deletion, or legal action.</li>
- </ul>
- <h2>How to download and install the Clash of Clans Android Mod APK?</h2>
- <h3>The steps to follow</h3>
- <p>If you still want to try the Clash of Clans Android Mod APK despite the risks, here are the steps to follow to download and install it on your device:</p>
- <ol>
- <li>First, uninstall the original Clash of Clans game from your device if you have it installed. The mod apk will overwrite the original game and cause conflicts otherwise.</li>
- <li>Second, enable the unknown sources option on your device, which lets you install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on.</li>
- <li>Third, find a reliable and safe source to download the Clash of Clans Android Mod APK file. Many websites claim to offer the mod apk, but some of them may be fake or malicious. You can use a search engine or a trusted review site to find a reliable source.</li>
- <li>Fourth, download the Clash of Clans Android Mod APK file from the source you have chosen. Make sure you have enough storage space on your device and a stable internet connection. You can use a browser or a download manager to download the file.</li>
- <li>Fifth, locate the downloaded Clash of Clans Android Mod APK file on your device and tap it to start the installation. Follow the on-screen instructions and wait for the installation to complete.</li>
- <li>Sixth, launch the Clash of Clans Android Mod APK game from your device and enjoy playing with unlimited resources, gems, and features.</li>
- </ol>
- <h3>The precautions to take</h3>
-
- <ul>
- <li>Make sure you have a backup of your original Clash of Clans game data and your device data before uninstalling the original game. This will help you restore your progress and settings in case something goes wrong.</li>
- <li>Make sure you have an antivirus or anti-malware app on your device and scan the Clash of Clans Android Mod APK file before installing it. This will help you detect and remove any harmful elements hidden in the file.</li>
- <li>Make sure you do not use your real or main account to play the Clash of Clans Android Mod APK. This will help you avoid being banned or losing your account. You can create a new or throwaway account to play with the mod apk.</li>
- <li>Make sure you do not use any personal or sensitive information while playing the Clash of Clans Android Mod APK. This will help you avoid being hacked or scammed by anyone who might access your information through the mod apk.</li>
- </ul>
- <h2>How do you play the Clash of Clans Android Mod APK?</h2>
- <h3>Tips and tricks for beginners</h3>
- <p>If you are new to the Clash of Clans Android Mod APK, here are some tips and tricks to help you get started and enjoy the game:</p>
- <p></p>
- <ul>
- <li>Explore the different modes and features available in the mod apk. You can play in single-player mode, multiplayer mode, clan mode, builder base mode, season pass mode, and more.</li>
- <li>Experiment with different troop and spell combinations and strategies in your attacks. You can use any troops and spells you want without limitations or costs.</li>
- <li>Upgrade your buildings and troops as much as possible to increase your power and performance. You can upgrade anything in seconds without resources or gems.</li>
-
- <li>Play offline or online depending on your preference and availability. You can play without an internet connection, or online with other players who use the same mod apk as you.</li>
- </ul>
- <h3>The best strategies for advanced players</h3>
- <p>If you are already familiar with the Clash of Clans Android Mod APK, here are some strategies to help you improve your skills and challenge yourself:</p>
- <ul>
- <li>Try to complete all the achievements and quests available in the mod apk. You can earn extra rewards and bonuses for completing them.</li>
- <li>Try to climb the leaderboards and trophy rankings available in the mod apk. You can compete with other players who use the same mod apk and see how you compare to them.</li>
- <li>Try to test your limits and skills by attacking the strongest bases and players you can find in the mod apk. You can learn from their strategies and tactics and improve your own.</li>
- <li>Try to customize your village and troops according to your style and preference. You can change the appearance, name, and behavior of your buildings and troops in the mod apk.</li>
- <li>Try to have fun and enjoy the game without stress or pressure. You can play at your own pace and level without worries or consequences.</li>
- </ul>
- <h2>Conclusion</h2>
- <h3>A summary of the main points</h3>
- <p>In conclusion, the Clash of Clans Android Mod APK is a modified version of the original Clash of Clans game that gives you access to unlimited resources, gems, and features. It can be a great way to enjoy the game with more freedom and flexibility, but it also carries risks and drawbacks you should be aware of before downloading it. If you decide to use it, you should follow the steps and precautions provided in this article, and also learn some tips and tricks that will help you play it like a pro.</p>
- <h3>A call to action for readers</h3>
-
- <p>If you have any questions or comments about the Clash of Clans Android Mod APK, you can leave a comment below or contact us through our website. We would love to hear from you and help you with any problem or issue you may have. Thank you for reading this article, and we hope you found it useful and informative.</p>
- <h2>FAQs</h2>
- <p>Here are some of the most frequently asked questions about the Clash of Clans Android Mod APK:</p>
- <ol>
- <li><b>Is the Clash of Clans Android Mod APK free?</b></li>
- <p>Yes, the Clash of Clans Android Mod APK is free to download and use. You do not need to pay anything to access its features and benefits.</p>
- <li><b>Is the Clash of Clans Android Mod APK safe?</b></li>
- <p>No, the Clash of Clans Android Mod APK is not safe to use. It may contain viruses, malware, or spyware that can damage your device or steal your personal information. It can also get your account banned or deleted by Supercell, the developer of Clash of Clans.</p>
- <li><b>Is the Clash of Clans Android Mod APK legal?</b></li>
- <p>No, the Clash of Clans Android Mod APK is not legal to use. It is against the terms of service and fair play policy of Clash of Clans. It may also violate the intellectual property rights of Supercell, the developer of Clash of Clans.</p>
- <li><b>Can I play the Clash of Clans Android Mod APK with my friends who use the original game?</b></li>
- <p>No, you cannot play the Clash of Clans Android Mod APK with friends who use the original game. You can only play with other players who use the same mod apk as you.</p>
- <li><b>Can I go back to the original game after using the Clash of Clans Android Mod APK?</b></li>
-
- </ol></p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Chess Cnvcs Apk.md DELETED
@@ -1,71 +0,0 @@
-
- <h1>CNVCS Chess APK: A Well-Designed Chess App with Powerful Features</h1>
- <p>If you are a chess lover, you might be looking for a good chess app that can challenge your skills and provide hours of fun. There are many chess apps available on the Google Play Store, but not all of them deserve your time and attention. Some of them are too easy, some are too hard, some are too boring, and some are too buggy.</p>
- <p>But don't worry, we have found a chess app that can meet your needs and expectations. It is called CNVCS Chess APK, and it is one of the best chess apps you can download for your Android device. In this article, we will tell you what CNVCS Chess APK is, why you should download it, what its features are, and how to download and install it. So, let's get started!</p>
- <h2>chess cnvcs apk</h2><br /><p><b><b>Download File</b> &rArr;&rArr;&rArr; <a href="https://bltlly.com/2v6Mvc">https://bltlly.com/2v6Mvc</a></b></p><br /><br />
- <h2>Introduction</h2>
- <h3>What is CNVCS Chess APK?</h3>
- <p>CNVCS Chess APK is a well-designed chess app with plenty of powerful features. It is developed by cnvcs.com, a company specializing in creating board games for mobile devices. CNVCS Chess APK is not just a simple chess app but a complete chess platform that offers various modes, options, and resources to improve your chess skills and enjoy the game.</p>
- <h3>Why should you download CNVCS Chess APK?</h3>
- <p>There are many reasons why you should download CNVCS Chess APK for your Android device. Here are some of them:</p>
- <ul>
- <li>It is free! You don't have to pay anything to download and use this app.</li>
- <li>It is compatible with Android 4.0 and above. You don't need a high-end device to run this app smoothly.</li>
- <li>It has a user-friendly interface. You can easily navigate the app and access its features.</li>
- <li>It has high-quality graphics. You can enjoy realistic and beautiful chessboards and piece sets.</li>
- <li>It has low battery consumption. You don't have to worry about draining your battery while playing this app.</li>
-
- </ul>
- <h2>Features of CNVCS Chess APK</h2>
- <p>CNVCS Chess APK has many features that make it stand out from other chess apps. Here are some of them:</p>
- <h3>Play as white or black, against the computer or a friend</h3>
- <p>You can choose to play as the white or black pieces, and you can also choose your opponent. You can play against the computer, which has 10 difficulty levels, from beginner to grandmaster. You can also play against a friend, either online or offline. You can use Bluetooth or Wi-Fi to connect with your friend's device and play a LAN game.</p>
- <h3>10 difficulty levels, from beginner to grandmaster</h3>
- <p>You can adjust the challenge level to your skill. You can choose from 10 difficulty levels, from beginner to grandmaster. The computer will play with different styles and strategies depending on the level you choose. You can also see the computer's evaluation of the position and the best move for both sides.</p>
- <h3>Includes over 38000 chess puzzles, divided into 13 collections</h3>
- <p>If you want to practice your chess skills and solve some challenging problems, you can try the chess puzzle mode. This mode includes over 38000 chess puzzles, divided into 13 collections, such as checkmate, endgame, tactics, openings, and more. You can choose the difficulty and theme of the puzzles, and you can also view the solution and explanation for each puzzle.</p>
- <p></p>
- <h3>Online and LAN play support</h3>
- <p>If you want to play with other players around the world, you can try the online mode. This mode lets you join or create a room and play with other players online. You can chat with your opponent, send emojis, and see their rating and profile. You can also play a LAN game with your friend, using Bluetooth or Wi-Fi to connect your devices.</p>
- <h3>Computer hints and unlimited undo for beginners</h3>
-
- <h3>Switchable chessboards and piece sets, 2D/3D piece styles</h3>
- <p>If you want to customize your playing experience, you can change the appearance of the chessboard and piece sets. You can choose from different colors, themes, and designs for the chessboard, and different shapes, styles, and sizes for the piece sets. You can also switch between 2D and 3D piece styles, depending on your preference.</p>
- <h3>Load and save games in PGN files</h3>
- <p>If you want to save your games or load other games, you can use the PGN file format. This format is a standard way of recording chess games using algebraic notation. You can save your games to PGN files and load them later to resume or review them. You can also load other PGN files from external sources, such as websites or books.</p>
- <h3>Autosaving your current game</h3>
- <p>If you want to leave your game without losing your progress, you can use the autosave feature. This feature automatically saves your current game whenever you exit the app or switch to another app. You can then resume the game from where you left off when you reopen the app.</p>
- <h3>Setup position, smart validity checking</h3>
- <p>If you want to create your own chess position or scenario, you can use the setup position feature. This feature lets you place any piece on any square on the board and start playing from that position. You can also use the smart validity check, which ensures your position is legal and valid according to the rules of chess.</p>
- <h3>Jump randomly to any point in the current game</h3>
- <p>If you want to explore different possibilities or outcomes in your game, you can use the random-jump feature. This feature lets you jump to any point in the current game and see what would happen if you or your opponent made a different move. You can also return to the original position at any time.</p>
- <h3>Provides thousands of classic chess games to download</h3>
-
- <h3>Switch to play mode while in view mode, analyze the game, and then restore the view state</h3>
- <p>If you want to try a different move while viewing a game or a puzzle, you can use the switch feature. This feature lets you switch from view mode to play mode while viewing a game or a puzzle. You can then make any move you want and analyze it with the computer. You can also return to view mode at any time and restore the original view state.</p>
- <h2>How to download and install CNVCS Chess APK?</h2>
- <p>If you are interested in downloading and installing CNVCS Chess APK on your Android device, here are the steps you need to follow:</p>
- <ol>
- <li>Go to [cnvcs.com], the official website of CNVCS Chess APK.</li>
- <li>Click the download button and wait for the APK file to download to your device.</li>
- <li>Go to your device settings and allow the installation of apps from unknown sources.</li>
- <li>Locate the APK file on your device and tap it to start the installation process.</li>
- <li>Follow the on-screen instructions and wait for the app to install on your device.</li>
- <li>Launch the app and enjoy playing chess with CNVCS Chess APK!</li>
- </ol>
- <h2>Conclusion</h2>
- <p>CNVCS Chess APK is a well-designed chess app with powerful features that can improve your chess skills and enjoyment. It offers various modes, options, and resources to play chess as white or black, against the computer or a friend, online or offline. It also includes over 38000 chess puzzles, 10 difficulty levels, switchable chessboards and piece sets, PGN file support, an autosave feature, a setup-position feature, a random-jump feature, classic chess game downloads, and a switch feature. It is free, compatible, easy to use, high quality, light on battery, and small in size. It is one of the best chess apps you can download for your Android device. So what are you waiting for? Download CNVCS Chess APK now and have fun playing chess!</p>
-
- <p>Here are some of the most frequently asked questions about CNVCS Chess APK:</p>
- <h4>Q: Is CNVCS Chess APK safe to download and use?</h4>
- <p>A: Yes, CNVCS Chess APK is safe to download and use. It does not contain any viruses, malware, or spyware. It also does not require any permissions that could compromise your privacy or security.</p>
- <h4>Q: Is CNVCS Chess APK updated regularly?</h4>
- <p>A: Yes, CNVCS Chess APK is updated regularly. The developers are always working on improving the app and adding new features and content. You can check the latest version and update history on the official CNVCS Chess APK website.</p>
- <h4>Q: Can I play CNVCS Chess APK offline?</h4>
- <p>A: Yes, you can play CNVCS Chess APK offline. You don't need an internet connection to play against the computer or a friend over Bluetooth or Wi-Fi. However, you will need an internet connection to play online or download classic chess games.</p>
- <h4>Q: Can I share my games or puzzles with others?</h4>
- <p>A: Yes, you can share your games or puzzles with others. You can save your games or puzzles to PGN files and then send them to others by email, social media, or other apps. You can also load other people's PGN files and view them in the app.</p>
- <h4>Q: How can I contact the developers of CNVCS Chess APK?</h4>
- <p>A: If you have any questions, feedback, or suggestions about CNVCS Chess APK, you can contact the developers by email at [[email protected]]. You can also visit their website at [cnvcs.com] for more information.</p> 64aa2da5cf<br />
- <br />
- <br />
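Since PGN support is a recurring point in the article above, here is a short sketch of reading such a saved game in Python with the third-party python-chess library. The library and the filename are assumptions for illustration, not part of the app itself:

import chess.pgn  # pip install python-chess

# Load the first game recorded in a PGN file exported from a chess app.
with open("saved_game.pgn") as f:
    game = chess.pgn.read_game(f)

print(game.headers.get("White"), "vs", game.headers.get("Black"))
for move in game.mainline_moves():  # replay the recorded move sequence
    print(move.uci())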
spaces/Benson/text-generation/Examples/Cmo Hacer Un Regalo De Cumpleaos.md DELETED
@@ -1,53 +0,0 @@
- <br />
- <h1>Country Flags Download Game: A Fun and Educational Way to Learn About the World</h1>
- <p>Do you like trivia games? Do you want to improve your geography skills? Do you enjoy learning about different cultures and histories? If you answered yes to any of these questions, you might be interested in country flags games. These are games that involve identifying, matching, or solving puzzles with flags and maps of various countries and regions around the world. They are not only entertaining but also useful for improving your knowledge and awareness of the world.</p>
- <h2>How to make a birthday gift</h2><br /><p><b><b>Download File</b> &#10003; <a href="https://bltlly.com/2v6JCT">https://bltlly.com/2v6JCT</a></b></p><br /><br />
- <h2>Types of Country Flags Games</h2>
- <p>There are many types of country flags games available to download for free on different platforms and devices. Here are some of the most common ones:</p>
- <h3>Flag quiz games</h3>
- <p>These are games that test your knowledge of flags and maps with multiple-choice questions, hints, and challenges. You can choose from different difficulty levels, categories, and modes. You can also compete with your friends or other players online and see who has the best score. Some examples of flag quiz games are <a href="( 1 )">Flags 2: Multiplayer</a> and <a href="( 2 )">Flags Quiz!</a>.</p>
- <h3>Flag matching games</h3>
- <p>These are games that require you to match flags with the countries or regions they belong to. You can drag and drop the flags onto their corresponding places on a map or select them from a list. You can also get more details about each country or region, such as its capital, population, area, or currency. Some examples of flag matching games are <a href="( 4 )">World Flags Quiz</a> and <a href="">Flag Master</a>.</p>
- <p></p>
- <h3>Flag puzzle games</h3>
-
- <h2>Benefits of Country Flags Games</h2>
- <p>Country flags games are not only fun but also beneficial for your brain and your culture. Here are some of the benefits they offer:</p>
- <h3>They improve your memory and cognitive skills</h3>
- <p>By playing country flags games, you can improve your memory and cognitive skills by recognizing patterns, shapes, and colors. You can also enhance your spatial awareness and visual perception by locating flags and maps on a globe. These skills are essential for learning, problem solving, and creativity.</p>
- <h3>They boost your cultural awareness and curiosity</h3>
- <p>By playing country flags games, you can boost your cultural awareness and curiosity by learning about different countries and regions. You can discover their history, culture, geography, politics, and economy. You can also appreciate their diversity and uniqueness. These games can inspire you to travel, explore, and connect with other people around the world.</p>
- <h3>They let you have fun and compete with your friends or other players online</h3>
- <p>By playing country flags games, you can have fun and compete with your friends or other players online. You can challenge yourself to beat your own records or to rank higher on the leaderboards. You can also share your achievements and progress on social media or chat with other players. These games can make learning more enjoyable and rewarding.</p>
- <h2>How to Download Country Flags Games for Free</h2>
- <p>If you want to download country flags games for free, you should follow these steps:</p>
- <h3>Use a reliable and safe source such as the Google Play Store, Microsoft Store, or Flagpedia.net</h3>
-
- <h3>Choose a game that suits your preferences and device compatibility</h3>
- <p>The second step is to choose a game that suits your preferences and device compatibility. You can browse the categories, ratings, reviews, screenshots, and descriptions of the games to find one that interests you. You can also check the games' requirements, permissions, and updates to make sure they are compatible with your device.</p>
- <h3>Follow the instructions to install and launch the game</h3>
- <p>The third step is to follow the instructions to install and launch the game. You can click the download button or scan the QR code to start the download process. You can then follow the prompts to accept the terms and conditions, grant the permissions, and complete the installation. You can then open the game and start playing.</p>
- <h2>Examples of Country Flags Games to Download</h2>
- <p>Here are some examples of country flags games you can download for free:</p>
- <h3>Flags 2: Multiplayer - A multiplayer flag quiz game that trains your brain and challenges your IQ</h3>
- <p>This is a multiplayer flag quiz game that lets you play with up to four players online or offline. You can choose from over 200 flags and 20 maps from all continents. You can also customize your avatar, name, color, and language. This game trains your brain and challenges your IQ by testing your knowledge of flags and maps.</p>
- <h3>Flags Quiz! - A fun-filled free game about guessing the names of hundreds of country flags from around the world</h3>
- <p>This is a fun-filled free game about guessing the names of hundreds of country flags from around the world. You can choose from four different game modes: Classic, Time Attack, Hard Mode, and Custom Mode. You can also use hints, skip questions, or ask your friends for help. This game is suitable for all ages and levels.</p>
-
- <p>This is a single package or embedding service that lets you use country flags in your news magazines, websites, software, mobile apps, and master's theses. You can download all the country flags of the world for free in various formats (PNG, SVG) and sizes (16x16 px to 2500x2500 px). You can also use an API or a widget to embed country flags in your projects.</p>
- <h2>Conclusion</h2>
- <p>Country flags games are a fun and educational way to learn about the world. They offer various types of games that test your knowledge of flags and maps, match flags with the countries or regions they belong to, or solve puzzles by arranging the pieces of flags or maps. They also offer various benefits, such as improving your memory and cognitive skills, boosting your cultural awareness and curiosity, and letting you have fun and compete with your friends or other players online. You can download country flags games for free from reliable and safe sources such as the Google Play Store, Microsoft Store, or Flagpedia.net. You can also choose a game that suits your preferences and device compatibility, and follow the instructions to install and launch the game. Here are some examples of country flags games you can download for free: Flags 2: Multiplayer, Flags Quiz!, and Download all country flags of the world for free.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about country flags games:</p>
- <h4>What are the best country flags games to download?</h4>
- <p>The best country flags games to download depend on your personal preferences, device compatibility, and the reliability of the source. However, some of the most popular and highly rated ones are Flags 2: Multiplayer, Flags Quiz!, and Download all country flags of the world for free.</p>
- <h4>How can I learn more about the countries and regions I see in the games?</h4>
-
- <h4>How can I improve my score and rank in the games?</h4>
- <p>You can improve your score and rank in the games by playing more often, choosing harder levels or modes, using fewer hints or skips, and answering faster. You can also review your mistakes and learn from them.</p>
- <h4>Are country flags games suitable for children?</h4>
- <p>Yes, country flags games are suitable for children. They are fun, educational, and easy to play. They can help children develop their memory, cognitive, and cultural skills. They can also spark their interest and curiosity in the world.</p>
- <h4>Can I use country flags in my own projects?</h4>
- <p>Yes, you can use country flags in your own projects. You can download all the country flags of the world for free from Flagpedia.net in various formats and sizes. You can also use an API or a widget to embed country flags in your projects. However, you must respect the intellectual property rights and licenses of the flag images and sources.</p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Benson/text-generation/Examples/Descargar Ftbol Real 2023.md DELETED
@@ -1,109 +0,0 @@
- <br />
- <h1>Download Real Football 2023: A Free Football Game for True Fans</h1>
- <p>If you are looking for a football game that offers a realistic and immersive experience, then you should download Real Football 2023. This is a free football game made for true football fans. In this game, you can play in 3D stadiums, face smart opponents, build your dream team, and challenge other players online. Here is everything you need to know about Real Football 2023.</p>
- <h2>What is Real Football 2023?</h2>
- <p>Real Football 2023 is a football simulation game developed by Gameloft SE. It is the latest edition of the popular Real Football series, which has been running since 2004. The game aims to deliver an exciting atmosphere, challenging gameplay, and a variety of content for football lovers.</p>
- <h2>download real football 2023</h2><br /><p><b><b>Download</b> &#8250; <a href="https://bltlly.com/2v6Mbj">https://bltlly.com/2v6Mbj</a></b></p><br /><br />
- <h3>A realistic and immersive football simulation game</h3>
- <p>Real Football 2023 is not just a game but a simulation of the real-world football scene. The game uses a sophisticated game engine that creates realistic physics, animations, and graphics. It also features live updates and events that reflect the current trends and seasons of the football world. You can experience real football fever in Real Football 2023.</p>
- <h3>A game with 3D stadiums, smart opponents, and authentic teams</h3>
- <p>Real Football 2023 lets you play in stunning 3D stadiums where you can see polished shadows, detailed textures, and spectators. The game also offers multiple camera views and cutscenes that give you a richer broadcast and first-person feel. In addition, the game has improved opponents and positioning that make the gameplay more realistic and challenging. You can also play as some of the best teams in the world, such as Barcelona, Manchester United, Juventus, and more.</p>
- <h3>A game where you can build your dream team and challenge other players online</h3>
-
- <h2>How to download Real Football 2023?</h2>
- <p>Real Football 2023 is available for download on various platforms, depending on your device and preference. Here are the steps to download the game on different devices.</p>
- <h3>Download from the Google Play Store for Android devices</h3>
- <p>If you have an Android device, you can download Real Football 2023 from the Google Play Store for free. Here are the steps to do so:</p>
- <ol>
- <li>Open the Google Play Store app on your device.</li>
- <li>Search for "Real Football 2023" in the search bar.</li>
- <li>Select the game from the list of results and tap "Install".</li>
- <li>Wait for the game to download and install on your device.</li>
- <li>Launch the game and enjoy playing.</li>
- </ol>
- <p>You can also download the game from the Google Play Store website via this link: </p>
- <h3>Download from the App Store for iOS devices</h3>
- <p>If you have an iOS device, you can download Real Football 2023 from the App Store for free. Here are the steps to do so:</p>
- <ol>
- <li>Open the App Store app on your device.</li>
- <li>Search for "Real Football 2023" in the search bar.</li>
- <li>Select the game from the list of results and tap "Get".</li>
- <li>Enter your Apple ID and password if prompted.</li>
- <li>Wait for the game to download and install on your device.</li>
- <li>Launch the game and enjoy playing.</li>
- </ol>
- <p>You can also download the game from the App Store website via this link: </p>
- <p></p>
- <h3>Download from the official website for PC and Mac devices</h3>
- <p>If you have a PC or Mac, you can download Real Football 2023 from the official Gameloft SE website. Here are the steps to do so:</p>
- <ol>
- <li>Open your web browser and go to this link: </li>
- <li>Select your device type (PC or Mac) and click "Download".</li>
- <li>Save the file to your preferred location on your device.</li>
- <li>Run the file and follow the instructions to install the game on your device.</li>
-
- </ol>
- <p>You can also download the game through an emulator such as BlueStacks, which lets you play Android games on your PC or Mac. Here are the steps to do so:</p>
- <ol>
- <li>Open your web browser and go to this link: </li>
- <li>Click "Download BlueStacks" and save the file to your preferred location on your device.</li>
- <li>Run the file and follow the instructions to install BlueStacks on your device.</li>
- <li>Launch BlueStacks and sign in with your Google account.</li>
- <li>Search for "Real Football 2023" in the search bar and install it from the Google Play Store.</li>
- <li>Launch the game and enjoy playing.</li>
- </ol>
- <h2>What are the features of Real Football 2023?</h2>
- <p>Real Football 2023 is a game that offers many features and plenty of content for football fans. Here are some of them:</p>
- <h3>Multiple camera views and cutscenes for a rich broadcast experience</h3>
- <p>The game lets you play in stunning 3D stadiums where you can see polished shadows, detailed textures, and spectators. The game also offers multiple camera views and cutscenes that give you a richer broadcast and first-person feel. You can switch between different angles and zoom in or out to see every detail of the action. You can also watch replays of goals, fouls, saves, and other highlights.</p>
- <h3>Improved player skills and team facilities for challenging gameplay</h3>
- <p>The game has improved opponents and positioning that make the gameplay more realistic and challenging. You can also improve your players' skills by acquiring skill items through the lottery and matches. In addition, you can upgrade your team facilities, such as stadiums, hospitals, physiotherapy centers, and youth camps. These upgrades will help you improve your team's performance, morale, health, and potential.</p>
- <h3>Live updates and events based on real-world football trends and seasons</h3>
-
- <h2>What are the reviews of Real Football 2023?</h2>
- <p>Real Football 2023 has received mixed reviews from users who have played the game. Here are some of the positive, negative, and mixed user reviews:</p>
- <h3>Positive reviews from users who enjoy the game's realism and variety</h3>
- <p>Some users have praised the game for its realistic and immersive football simulation. They have also appreciated the variety of the game's content and features, such as live updates, events, tournaments, teams, players, stadiums, and more. Here are some of the positive user reviews:</p>
- <ul>
- <li>"This game is awesome. It has realistic graphics, gameplay, and physics. It also has lots of teams, players, stadiums, and tournaments to choose from. I love playing this game every day."</li>
- <li>"This is the best football game. It is very realistic and fun to play. It has live updates and events that make the game more interesting and challenging. It also has lots of features and options to customize your team and players."</li>
- <li>"This game is amazing. It is very immersive and addictive. It has multiple camera views and cutscenes that make you feel like you are watching a real football match. It also has lots of modes and levels to play and enjoy."</li>
- </ul>
- <h3>Negative reviews from users who run into technical problems and bugs</h3>
- <p>Some users have complained about the game's technical problems and bugs that affect the gameplay and performance. They have also reported issues with the game's compatibility, loading, crashing, lag, freezing, and so on. Here are some of the negative user reviews:</p>
- <ul>
- <li>"This game is terrible. It has lots of technical problems and bugs that ruin the gameplay. It is not compatible with my device and takes a long time to load. It also crashes frequently and lags a lot."</li>
-
- <li>"This game is garbage. It has lots of glitches and flaws that make the game unplayable. It does not work well on my device and crashes randomly. It also has unrealistic physics and gameplay."</li>
- </ul>
- <h3>Mixed reviews from users who compare the game with other football games</h3>
- <p>Some users have given the game mixed reviews based on how it compares with other football games. They have also pointed out the game's strengths and weaknesses, such as its realism, variety, and difficulty. Here are some of the mixed user reviews:</p>
- <ul>
- <li>"This game is good but not great. It has realistic graphics and gameplay, but it lacks variety and content. It also has live updates and events, but they are not very exciting or rewarding."</li>
- <li>"This game is decent but not exceptional. It has lots of features and options, but it is too difficult and challenging. It also has lots of teams and players, but they are not very authentic or accurate."</li>
- <li>"This game is okay but not excellent. It has an immersive atmosphere and sound, but it is too repetitive and boring. It also has multiple modes and levels, but they are not very fun or engaging."</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Real Football 2023 is a free football game that offers a realistic and immersive football experience for true fans. The game can be downloaded on various platforms and has many features and plenty of content to enjoy. It has received mixed reviews from users, but it is worth trying if you like football.</p>
- <h2>FAQs</h2>
- <h4>Is Real Football 2023 online or offline?</h4>
- <p>Real Football 2023 is both online and offline. You can play the game without an internet connection in offline mode, where you can play against AI opponents or practice your skills in training mode. You can also play the game with an internet connection in online mode, where you can challenge other players in the PvP World Arena mode or take part in live updates and events.</p>
33
- <li>Inicia el juego y disfruta jugando. </li>
34
- </ol>
35
- <p>También puede descargar el juego desde el sitio web de la App Store siguiendo este enlace: </p>
36
- <p></p>
37
- <h3>Descargar desde el sitio web oficial para dispositivos PC y Mac</h3>
38
- <p>Si tienes un dispositivo PC o Mac, puedes descargar Real Football 2023 desde el sitio web oficial de Gameloft SE. Estos son los pasos para hacerlo:</p>
39
- <ol>
40
- <li>Abra su navegador web y vaya a este enlace: </li>
41
- <li>Seleccione el tipo de dispositivo (PC o Mac) y haga clic en "Descargar". </li>
42
- <li>Guarde el archivo en su ubicación preferida en su dispositivo. </li>
43
- <li>Ejecute el archivo y siga las instrucciones para instalar el juego en su dispositivo. </li>
44
-
45
- </ol>
46
- <p>También puedes descargar el juego desde un emulador como BlueStacks, que te permite jugar juegos Android en tu PC o Mac. Estos son los pasos para hacerlo:</p>
47
- <ol>
48
- <li>Abra su navegador web y vaya a este enlace: </li>
49
- <li>Haga clic en "Descargar BlueStacks" y guarde el archivo en su ubicación preferida en su dispositivo. </li>
50
- <li>Ejecute el archivo y siga las instrucciones para instalar BlueStacks en su dispositivo. </li>
51
- <li>Inicie BlueStacks e inicie sesión con su cuenta de Google. </li>
52
- <li>Buscar "Real Football 2023" en la barra de búsqueda e instalarlo desde la Google Play Store.</li>
53
- <li>Inicia el juego y disfruta jugando. </li>
54
- </ol>
55
- <h2>¿Cuáles son las características de Real Football 2023? </h2>
56
- <p>Real Football 2023 es un juego que ofrece muchas características y contenido para los aficionados al fútbol. Estos son algunos de ellos:</p>
57
- <h3>Múltiples vistas de cámara y escenas para una rica experiencia de difusión</h3>
58
- <p>El juego te permite jugar en impresionantes estadios en 3D donde puedes ver sombras pulidas, texturas detalladas y espectadores. El juego también ofrece múltiples vistas de cámara y escenas que le dan una transmisión más rica y sensación en primera persona. Puede cambiar entre diferentes ángulos y acercar o alejar para ver cada detalle de la acción. También puedes ver repeticiones de goles, faltas, salvamentos y otros momentos destacados. </p>
59
- <h3>Habilidades de jugador mejoradas e instalaciones de equipo para un juego desafiante</h3>
60
- <p>El juego ha mejorado los oponentes y el posicionamiento que hacen el juego más realista y desafiante. También puedes mejorar las habilidades de tus jugadores adquiriendo objetos de habilidad a través de la lotería y los partidos. También puede mejorar las instalaciones de su equipo, como estadios, hospitales, centros de fisioterapia y campamentos juveniles. Estas mejoras te ayudarán a mejorar el rendimiento, la moral, la salud y el potencial de tu equipo. </p>
61
- <h3>Actualizaciones y eventos en vivo basados en las tendencias y temporadas del fútbol del mundo real</h3>
62
-
63
<h2>What are the reviews of Real Football 2023?</h2>
<p>Real Football 2023 has received mixed reviews from users who have played it. Here are some of the positive, negative, and mixed comments:</p>
<h3>Positive reviews from users who enjoy the game's realism and variety</h3>
<p>Some users have praised the game for its realistic, immersive football simulation. They have also appreciated its variety of content and features, such as live updates, events, tournaments, teams, players, and stadiums. Here are some of the positive reviews:</p>
<ul>
<li>"This game is awesome. It has realistic graphics, gameplay, and physics. It also has lots of teams, players, stadiums, and tournaments to choose from. I love playing this game every day."</li>
<li>"This is the best football game. It's very realistic and fun to play. It has live updates and events that make the game more interesting and challenging. It also has plenty of features and options to customize your team and players."</li>
<li>"This game is amazing. It's very immersive and addictive. It has multiple camera views and cutscenes that make you feel like you're watching a real football match. It also has lots of modes and levels to play and enjoy."</li>
</ul>
<h3>Negative reviews from users who run into technical problems and bugs</h3>
<p>Some users have complained about technical problems and bugs that affect gameplay and performance. They have also reported issues with compatibility, loading, crashing, lag, and freezing. Here are some of the negative reviews:</p>
<ul>
<li>"This game is terrible. It has lots of technical problems and bugs that ruin the gameplay. It's not compatible with my device and takes forever to load. It also crashes frequently and lags a lot."</li>
<li>"This game is garbage. It has so many glitches and flaws that make it unplayable. It doesn't run well on my device and crashes at random. It also has unrealistic physics and gameplay."</li>
</ul>
<h3>Mixed reviews from users who compare the game with other football games</h3>
<p>Some users have given mixed reviews based on comparisons with other football games, pointing out the game's strengths and weaknesses, such as its realism, variety, and difficulty. Here are some of the mixed reviews:</p>
<ul>
<li>"This game is good but not great. It has realistic graphics and gameplay, but it lacks variety and content. It also has live updates and events, but they aren't very exciting or rewarding."</li>
<li>"This game is decent but not exceptional. It has lots of features and options, but it's too hard and demanding. It also has lots of teams and players, but they aren't very authentic or accurate."</li>
<li>"This game is fine but not excellent. It has an immersive atmosphere and sound, but it's too repetitive and boring. It also has multiple modes and levels, but they aren't very fun or engaging."</li>
</ul>
<h2>Conclusion</h2>
<p>Real Football 2023 is a free football game that offers a realistic, immersive football experience for true fans. It can be downloaded from several platforms and has plenty of features and content to enjoy. The game has received mixed reviews from users, but it's worth a try if you love football.</p>
<h2>Frequently asked questions</h2>
<h4>Is Real Football 2023 online or offline?</h4>
<p>Real Football 2023 is both online and offline. You can play without an internet connection in offline mode, where you face AI opponents or practice your skills in training mode. You can also play with an internet connection in online mode, where you challenge other players in the PvP World Arena mode or take part in live updates and events.</p>
<h4>How do you get more coins in Real Football 2023?</h4>
<p>You can get more coins in Real Football 2023 by playing matches, winning tournaments, completing achievements, taking part in events, watching ads, or buying them with real money.</p>
<h4>How do you update Real Football 2023?</h4>
<p>You can update Real Football 2023 by checking for updates in your device's app store or by visiting the official Gameloft SE website.</p>
<h4>How do you play Real Football 2023 on PC or Mac?</h4>
<p>You can play Real Football 2023 on PC or Mac by downloading the game from the official Gameloft SE website or by using an emulator such as BlueStacks.</p>
<h4>How do you contact Real Football 2023 support?</h4>
<p>If you have questions, feedback, or problems with Real Football 2023, you can contact the game's support team by following these steps:</p>
<ol>
<li>Open the game and tap the menu icon in the top-left corner.</li>
<li>Tap "Settings" and then "Customer Care".</li>
<li>Select the topic that best describes your issue and follow the instructions.</li>
<li>If you still need help, tap "Contact Us" and fill in the form with your details and message.</li>
<li>Tap "Send" and wait for a reply from the support team.</li>
</ol>
<p>You can also contact the support team by sending an email to this address: </p>
<p>I hope this article has helped you learn more about Real Football 2023 and how to download and play it. If you're a football fan, you should definitely give this game a try and see for yourself how realistic and immersive it is. Thanks for reading, and have fun playing!</p>