parquet-converter committed on
Commit cb9f647 · 1 Parent(s): 381c38d

Update parquet files (step 104 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Far Cry 4 PC Crack Tips and Tricks for a Smooth Gameplay.md +0 -115
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape 3.0 Free Download with Crack The Ultimate Guide for Architects.md +0 -27
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Video Editor Crack vs. Paid Video Editing Software Which One is Better?.md +0 -23
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/GS Typing Tutor Review Features Pricing and License Code.md +0 -22
  5. spaces/1gistliPinn/ChatGPT4/Examples/4media Video Cutter 2 Serial Crack Internet Profesional.md +0 -10
  6. spaces/1gistliPinn/ChatGPT4/Examples/Abbyy Pdf Transformer 3.0 Full Crack [HOT].md +0 -6
  7. spaces/1gistliPinn/ChatGPT4/Examples/Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen ((BETTER)).md +0 -6
  8. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/All Black - The Punjabi Hit Song by Raftaar and Sukh-E Muzical Doctorz - MP3 Download.md +0 -80
  9. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Latest APK Enjoy the Epic Multiplayer Battles on Android.md +0 -85
  10. spaces/1phancelerku/anime-remove-background/Air1 Roku App Listen to Worship Music from the Comfort of Your Home Television.md +0 -175
  11. spaces/1phancelerku/anime-remove-background/Enjoy Five Classic Solitaire Card Games on Your Android Device with Microsoft Solitaire Collection.md +0 -101
  12. spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/safety_checker.py +0 -125
  13. spaces/2023Liu2023/bingo/src/lib/bots/bing/utils.ts +0 -87
  14. spaces/232labs/VToonify/vtoonify/model/encoder/encoders/__init__.py +0 -0
  15. spaces/4Taps/SadTalker/src/utils/croper.py +0 -295
  16. spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/nets_new.py +0 -132
  17. spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/syntaspeech/syntactic_graph_encoder.py +0 -193
  18. spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/models.py +0 -1288
  19. spaces/AIGC-Audio/AudioGPT/audio_foundation_models.py +0 -1033
  20. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/utils.py +0 -26
  21. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py +0 -28
  22. spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/modules.py +0 -314
  23. spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/README.md +0 -12
  24. spaces/AUBMC-AIM/OCTaGAN/README.md +0 -46
  25. spaces/Abuzariii/Text-Generation-with-GPT-2/app.py +0 -23
  26. spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptForLove.py +0 -82
  27. spaces/AiMimicry/sovits-models/app-slice.py +0 -135
  28. spaces/AlexWang/lama/saicinpainting/training/losses/style_loss.py +0 -155
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/score_sde_ve.md +0 -20
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/speech_to_image_diffusion.py +0 -261
  31. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py +0 -1008
  32. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py +0 -717
  33. spaces/Anonymous-sub/Rerender/sd_model_cfg.py +0 -10
  34. spaces/Artrajz/vits-simple-api/bert_vits2/text/chinese.py +0 -196
  35. spaces/Arun1217/mygenaiapp/README.md +0 -12
  36. spaces/AtomdffAI/wechatgpt4atom/scripts/shutdown.sh +0 -16
  37. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py +0 -132
  38. spaces/Benson/text-generation/Examples/2 Y Lnea Apk.md +0 -73
  39. spaces/Benson/text-generation/Examples/9 Cu Sinif Umumi Tarix Testleri.md +0 -107
  40. spaces/Benson/text-generation/Examples/Avanzado Youtube Apk Descargar La ltima Versin 2023.md +0 -72
  41. spaces/Benson/text-generation/Examples/Como Hacer Una Hoja De Vida.md +0 -61
  42. spaces/BernardoOlisan/vqganclip/CLIP/README.md +0 -193
  43. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/panel.py +0 -308
  44. spaces/BigChungux/Pet_Survey2/info.md +0 -17
  45. spaces/BilalSardar/Reinhard_Color_Transformation/README.md +0 -13
  46. spaces/Boadiwaa/Recipes/openai/api_resources/abstract/engine_api_resource.py +0 -192
  47. spaces/CVPR/LIVE/LIVE/README.md +0 -44
  48. spaces/CVPR/LIVE/pybind11/tests/test_smart_ptr.cpp +0 -369
  49. spaces/CVPR/LIVE/thrust/internal/rename_cub_namespace.sh +0 -7
  50. spaces/CVPR/LIVE/thrust/thrust/cmake/README.md +0 -215
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Far Cry 4 PC Crack Tips and Tricks for a Smooth Gameplay.md DELETED
@@ -1,115 +0,0 @@
1
-
2
- <h1>Download Far Cry 4 PC Crack</h1> <p>Far Cry 4 is one of the most popular open-world first-person shooter games in recent years. It was released in 2014 by Ubisoft and received critical acclaim for its stunning graphics, diverse environments, animal companion system, and engaging story. The game is set in the fictional Himalayan region of Kyrat, where you play as Ajay Ghale, a young man who returns to his homeland to fulfill his mother's final wish of spreading her ashes in her place of birth. However, he soon becomes involved in a civil war between the forces of Pagan Min, a despotic self-appointed king, and the Golden Path, a rebel movement fighting for democracy.</p>
3
- <h2>download far cry 4 pc crack</h2><br /><p><b><b>Download File</b> &#128505; <a href="https://byltly.com/2uKy0O">https://byltly.com/2uKy0O</a></b></p><br /><br /> | P | | <p>However, not everyone can afford to buy the game or play it on their PC. That's why some people resort to using a PC crack, which is a modified version of the game that bypasses the copy protection and allows you to play it for free. A PC crack can also unlock some features that are otherwise unavailable in the official version, such as multiplayer mode or map editor. However, using a PC crack also comes with some risks and drawbacks, such as viruses, malware, errors, bugs, and legal issues.</p> | P | | <p>In this article, we will show you how to download Far Cry 4 PC crack safely and easily, how to fix some common issues with it, and how to enjoy the game to its fullest potential. But before we do that, we want to remind you that using a PC crack is illegal and unethical, and we do not condone or encourage it in any way. This article is for educational purposes only and we are not responsible for any consequences that may arise from using a PC crack.</p> | P | <h2>How to Download Far Cry 4 PC Crack</h2> | H2 | <p>The first step to download Far Cry 4 PC crack is to find a reliable source for it. There are many websites that claim to offer PC cracks for various games, but not all of them are trustworthy or safe. Some of them may contain viruses or malware that can harm your computer or steal your personal information. Some of them may also provide fake or outdated PC cracks that do not work or cause problems with the game.</p> | P | <p>Therefore, you need to be careful and do some research before downloading any PC crack from any website. Here are some tips on how to find a reliable source for Far Cry 4 PC crack:</p> | P | <ul> | UL | <li>Check the reviews and ratings of the website and the PC crack. Look for positive feedback from other users who have downloaded and used it successfully.</li> | LI | <li>Check the date and size of the PC crack. Look for recent and updated versions that match the latest version of the game. Also look for reasonable sizes that are not too small or too large.</li> | LI | <li>Check the compatibility and requirements of the PC crack. Look for versions that are compatible with your operating system and hardware specifications.</li> | LI | <li>Check the instructions and support of the website and the PC crack. Look for clear and detailed instructions on how to download, install, and run the PC crack. Also look for contact information or forums where you can ask for help if you encounter any issues.</li> | LI | </ul>
4
- <p>Based on these criteria, we have found one website that seems to offer a reliable source for Far Cry 4 PC crack. It is called Fitgirl Repacks Site (index:1) , which is a well-known website that provides compressed repacks of various games with all DLCs included. The website has positive reviews and ratings from many users who have downloaded and used its repacks successfully. The website also provides clear and detailed instructions on how to download, install, and run its repacks.</p> | P | <p>The website offers two versions of Far Cry 4 repack: one with multiplayer files included (14.8 GB) and one without multiplayer files (10.8 GB). The repack is based on Far.Cry.4.Proper-RELOADED release (index:1) , which is an official release by RELOADED group (index:3) , which is one of the most reputable groups that provide PC cracks for various games. The repack also includes all DLCs added & activated (index:1) , missing Japanese voiceovers added (index:1) , all patches up to v1.10 applied (index:1) , extreme injector v3.6.1 for 2/3-core CPUs added (index:1) , modified unofficial RELOADED crack (index:3) , ULC unlocker added (index:3) , 100% lossless & MD5 perfect (index:1) , selective download feature (index:1) , significantly smaller archive size (index:1) , fast installation time (index:1) , after-install integrity check (index:1) , HDD space after installation up to 40 GB (index:1) , at least 2 GB of free RAM required for installing this repack (index:1) . The repack is compatible with Windows 7/8/10 (64-bit versions only), Intel Core i5-750 @ 2.6 GHz or AMD Phenom II X4 955 @ 3.2 GHz processor or better, 4 GB RAM or more, NVIDIA GeForce GTX 460 or AMD Radeon HD5850 graphics card or better with 1 GB VRAM or more.</p> | P | <p>To download Far Cry 4 repack from Fitgirl Repacks Site (index:1) , you need to follow these steps:</p> | P | <ol> | OL | <li>Go to https://fitgirl-repacks-site.org/far-cry-4-gold-edition-download-torrent-repack/</li> | LI | <li>Select your preferred download option from the list of mirrors provided on the website.</li> | LI | <li>If you choose torrent option, you need to have a torrent client installed on your computer such as uTorrent or BitTorrent.</li> | LI | <li>If you choose direct download option, you need to have a download manager installed on your computer such as IDM or JDownloader2.</li> | LI | <li>Download all parts of the repack according to your selected option.</li> | LI | <li>Extract all parts of the repack using WinRAR or 7-Zip software.</li> | LI | <li>Run setup.exe file from extracted folder as administrator.</li> | LI | <li>Select your preferred language from English/Russian/Spanish/French/German/Italian/Polish/Portuguese-Brazil/Japanese/Chinese Simplified/Chinese Traditional/Czech/Danish/Dutch/Finnish/Norwegian/Swedish.</li> | LI | <li>Select your preferred components from singleplayer files/multiplayer files/voiceovers files according to your needs.</li> | LI | <li>Select your installation directory where you want to install the game.</li> | LI | <li>Click install button and wait until installation process is completed.</li> | LI | </ol>
5
- <h2>How to Check Far Cry 4 PC Crack for Viruses and Malware</h2>
6
- <p>
7
- Even though Fitgirl Repacks Site (index:1) seems to be a reliable source for Far Cry 4 PC crack, it is still advisable to check it for viruses and malware before installing it on your computer. This is because some malicious files may be hidden or disguised within the repack that can harm your computer or steal your personal information.</p>
8
- <p>To check Far Cry 4 repack for viruses and malware, you need to follow these steps:</p>
9
- <p>How to download far cry 4 pc crack for free<br />
10
- Download far cry 4 pc crack full version<br />
11
- Download far cry 4 pc crack torrent<br />
12
- Download far cry 4 pc crack skidrow<br />
13
- Download far cry 4 pc crack only<br />
14
- Download far cry 4 pc crack no survey<br />
15
- Download far cry 4 pc crack online<br />
16
- Download far cry 4 pc crack reloaded<br />
17
- Download far cry 4 pc crack update<br />
18
- Download far cry 4 pc crack fix<br />
19
- Download far cry 4 pc crack and keygen<br />
20
- Download far cry 4 pc crack direct link<br />
21
- Download far cry 4 pc crack mega<br />
22
- Download far cry 4 pc crack rar<br />
23
- Download far cry 4 pc crack iso<br />
24
- Download far cry 4 pc crack highly compressed<br />
25
- Download far cry 4 pc crack repack<br />
26
- Download far cry 4 pc crack cpy<br />
27
- Download far cry 4 pc crack codex<br />
28
- Download far cry 4 pc crack steam<br />
29
- Download far cry 4 pc crack windows 10<br />
30
- Download far cry 4 pc crack without virus<br />
31
- Download far cry 4 pc crack working<br />
32
- Download far cry 4 pc crack gameplay<br />
33
- Download far cry 4 pc crack trainer<br />
34
- Download far cry 4 pc crack cheats<br />
35
- Download far cry 4 pc crack mods<br />
36
- Download far cry 4 pc crack multiplayer<br />
37
- Download far cry 4 pc crack co-op<br />
38
- Download far cry 4 pc crack patch<br />
39
- Download far cry 4 pc crack dlc<br />
40
- Download far cry 4 pc crack gold edition<br />
41
- Download far cry 4 pc crack valley of the yetis<br />
42
- Download far cry 4 pc crack escape from durgesh prison<br />
43
- Download far cry 4 pc crack over run<br />
44
- Download far cry 4 pc crack hurk deluxe pack<br />
45
- Download far cry 4 pc crack kyrat edition<br />
46
- Download far cry 4 pc crack limited edition<br />
47
- Download far cry 4 pc crack complete edition<br />
48
- Download far cry 4 pc crack ultimate edition<br />
49
- Download far cry 4 pc crack system requirements<br />
50
- Download far cry 4 pc crack installation guide<br />
51
- Download far cry 4 pc crack error solution<br />
52
- Download far cry 4 pc crack black screen fix<br />
53
- Download far cry 4 pc crack lag fix<br />
54
- Download far cry 4 pc crack sound fix<br />
55
- Download far cry 4 pc crack save game location<br />
56
- Download far cry 4 pc crack unlock all weapons and skills <br />
57
- Download far cry 4 pc crack map editor <br />
58
- Download far cry 4 pc crack custom maps</p>
59
- <ol>
60
- <li>Download an antivirus software such as Avast or AVG on your computer if you don't have one already.</li>
61
- <li>Update your antivirus software with latest virus definitions and scan settings.</li>
62
- <li>Right-click on the downloaded repack file and select Scan with [your antivirus software] from the context menu.</li>
63
- <li>Wait for the scan to complete and check the results. If any threats are detected, delete or quarantine them according to your antivirus software's instructions.</li>
64
- <li>If no threats are detected, you can proceed to install the repack on your computer.</li>
65
- </ol>
66
- <p>Alternatively, you can also use an online virus scanner such as VirusTotal or Jotti to check the repack file for viruses and malware. These are websites that allow you to upload any file and scan it with multiple antivirus engines at once. To use an online virus scanner, you need to follow these steps:</p>
67
- <ol>
68
- <li>Go to https://www.virustotal.com/ or https://virusscan.jotti.org/</li>
69
- <li>Click on Choose File or Browse button and select the downloaded repack file from your computer.</li>
70
- <li>Click on Scan It or Submit File button and wait for the scan to complete.</li>
71
- <li>Check the results and see if any antivirus engine detects any threat in the repack file. If any threat is detected, do not install the repack on your computer.</li>
72
- <li>If no threat is detected, you can proceed to install the repack on your computer.</li>
73
- </ol>
74
- <h2>How to Install Far Cry 4 PC Crack and Run the Game</h2>
75
- <p>After downloading and checking Far Cry 4 repack for viruses and malware, you can install it on your computer and run the game. To install Far Cry 4 repack on your computer, you need to follow these steps:</p>
76
- <ol>
77
- <li>Disable your antivirus software temporarily to avoid any interference with the installation process.</li>
78
- <li>Run setup.exe file from extracted folder as administrator.</li>
79
- <li>Select your preferred language from English/Russian/Spanish/French/German/Italian/Polish/Portuguese-Brazil/Japanese/Chinese Simplified/Chinese Traditional/Czech/Danish/Dutch/Finnish/Norwegian/Swedish.</li>
80
- <li>Select your preferred components from singleplayer files/multiplayer files/voiceovers files according to your needs.</li>
81
- <li>Select your installation directory where you want to install the game.</li>
82
- <li>Click install button and wait until installation process is completed.</li>
83
- <li>Enable your antivirus software again after the installation is done.</li>
84
- </ol>
85
- <p>To run Far Cry 4 game on your computer, you need to follow these steps:</p>
86
- <ol>
87
- <li>Go to the installation directory where you installed the game.</li>
88
- <li>Run FarCry4.exe file as administrator.</li>
89
- <li>Select your preferred graphics settings and resolution from the game launcher.</li>
90
- <li>Click Play button and enjoy the game.</li>
91
- </ol>
92
- <h2>How to Fix Common Issues with Far Cry 4 PC Crack</h2>
93
- <p>Even though Far Cry 4 PC crack allows you to play the game for free, it may also cause some issues that can affect your gaming experience. Some of these issues are related to the PC crack itself, while others are related to the game itself. Here are some of the common issues that you may encounter with Far Cry 4 PC crack and how to fix them:</p>
94
- <table>
95
- <tr><th>Issue</th><th>Solution</th></tr>
96
- <tr><td>The game crashes or freezes randomly.</td><td>This may be caused by incompatible drivers, outdated patches, corrupted files, or insufficient system resources. To fix this issue, you can try these steps:<ul><li>Update your graphics card drivers and DirectX version.</li><li>Update the PC crack to the latest version using Fitgirl Repacks Site (index:1) or other sources.</li><li>Verify the integrity of game files using Steam or Uplay client if you have them installed.</li><li>Lower your graphics settings and resolution in the game launcher.</li><li>Close any unnecessary programs running in the background while playing the game.</li></ul></td></tr>
97
- <tr><td>The game does not launch or shows a black screen.</td><td>This may be caused by missing or blocked files, incompatible settings, or antivirus interference. To fix this issue, you can try these steps:<ul><li>Add FarCry4.exe file and installation directory to your antivirus software's exclusion list or disable it temporarily while playing the game.</li><li>Run FarCry4.exe file as administrator and in compatibility mode for Windows 7 or 8 if you have Windows 10.</li><li>Delete GamerProfile.xml file from Documents\My Games\Far Cry 4 folder and launch the game again.</li><li>Rename video.dat file to video.dat.bak in Data_Win32 folder inside installation directory and launch the game again.</li></ul></td></tr>
98
- <tr><td>The game shows an error message saying "Please insert correct DVD-ROM."</td><td>This may be caused by a faulty PC crack or a missing DLL file. To fix this issue, you can try these steps:<ul><li>Download a fixed DLL for map editor from http://sendfile.su/1356321 (index:1) and place it in bin folder inside installation directory, overwriting existing file.</li><li>Download a modified unofficial RELOADED crack (index:3) or another PC crack from a reliable source and replace FarCry4.exe file in bin folder inside installation directory with it.</li></ul></td></tr>
99
- <tr><td>The game does not save progress or shows corrupted save files.</td><td>This may be caused by a wrong save location, a read-only attribute, or a Uplay conflict. To fix this issue, you can try these steps:<ul><li>Create a new folder named "savegames" in Documents\My Games\Far Cry 4 folder and move all save files from Documents\My Games\Far Cry 4\user_id folder into it.</li><li>Right-click on savegames folder and select Properties. Uncheck Read-only option under Attributes and click OK.</li><li>Delete Uplay folder from Program Files (x86) folder if you have it installed. Alternatively, disable cloud synchronization option in Uplay settings if you want to keep Uplay installed.</li></ul></td></tr>
100
- <tr><td>The game does not recognize keyboard or mouse input or shows wrong key bindings.</td><td>This may be caused by a corrupted configuration file, a conflicting device driver, or a keyboard layout issue. To fix this issue, you can try these steps:<ul><li>Delete GamerProfile.xml file from Documents\My Games\Far Cry 4 folder and launch the game again. Customize your key bindings in-game as needed.</li><li>Unplug any unnecessary devices such as controllers, joysticks, webcams, etc. from your computer while playing the game.</li><li>Change your keyboard layout to English (US) in Windows settings if you have a different layout set up.</li></ul></td></tr>
101
- </table>
102
- <h2>Conclusion</h2>
103
- <p>In this article, we have shown you how to download Far Cry 4 PC crack safely and easily, how to check it for viruses and malware, how to install it and run the game, and how to fix some common issues with it. We hope this article has been helpful and informative for you. However, we also want to remind you once again that using a PC crack is illegal and unethical, and we do not condone or encourage it in any way. This article is for educational purposes only and we are not responsible for any consequences that may arise from using a PC crack. If you like Far Cry 4 game and want to support its developers, we urge you to buy it legally from official sources such as Steam or Uplay. Thank you for reading this article and have a great day!</p>
104
- <h2>FAQs</h2>
105
- <p>Here are some frequently asked questions related to Far Cry 4 PC crack:</p>
106
- <ol>
107
- <li>Q: Can I play multiplayer mode with Far Cry 4 PC crack?<br>A: No, you cannot play multiplayer mode with Far Cry 4 PC crack. The multiplayer mode requires an online connection and a valid Uplay account which cannot be bypassed by any PC crack. If you want to play multiplayer mode with other players online, you need to buy the game legally from official sources such as Steam or Uplay.</li>
108
- <li>Q: Can I use map editor with Far Cry 4 PC crack?<br>A: Yes, A: Yes, you can use map editor with Far Cry 4 PC crack. The map editor allows you to create your own custom maps and missions using the game's assets and tools. However, you need to download a fixed DLL for map editor from http://sendfile.su/1356321 (index:1) and place it in bin folder inside installation directory, overwriting existing file. Otherwise, the map editor will not launch or show an error message.</li>
109
- <li>Q: How can I update Far Cry 4 PC crack to the latest version?<br>A: To update Far Cry 4 PC crack to the latest version, you need to download the latest version of the PC crack from Fitgirl Repacks Site (index:1) or other sources and replace FarCry4.exe file in bin folder inside installation directory with it. You also need to download and install the latest patches for the game from official sources such as Steam or Uplay if you have them installed.</li>
110
- <li>Q: How can I uninstall Far Cry 4 PC crack from my computer?<br>A: To uninstall Far Cry 4 PC crack from your computer, you need to delete the installation directory where you installed the game and remove any leftover files and folders from Documents\My Games\Far Cry 4 folder. You also need to scan your computer for viruses and malware using an antivirus software after uninstalling the PC crack.</li>
111
- <li>Q: Where can I find more information and help about Far Cry 4 PC crack?<br>A: You can find more information and help about Far Cry 4 PC crack on various websites and forums that discuss PC gaming and piracy. Some of these websites and forums are:<ul><li>Fitgirl Repacks Site (index:1) : https://fitgirl-repacks-site.org/far-cry-4-gold-edition-download-torrent-repack/</li><li>CS.RIN.RU (index:1) : https://cs.rin.ru/forum/viewtopic.php?f=10&t=64777</li><li>Reddit Piracy (index:3) : https://www.reddit.com/r/Piracy/</li></ul></li>
112
- </ol>
113
- </p> 0a6ba089eb<br />
114
- <br />
115
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape 3.0 Free Download with Crack The Ultimate Guide for Architects.md DELETED
@@ -1,27 +0,0 @@
1
-
2
- <h1>Enscape 3.0 Free Download with Crack: How to Get the Best Real-Time Rendering Software for Architects</h1>
3
- <p>If you are an architect, designer, or engineer, you might have heard of Enscape 3.0, the latest version of the powerful real-time rendering software that integrates with popular CAD programs like Revit, SketchUp, Rhino, and ArchiCAD. Enscape 3.0 allows you to create stunning photorealistic images and videos of your projects in seconds, without any need for complex settings or exporting. You can also explore your designs in virtual reality with a single click.</p>
4
- <h2>enscape 3.0 free download with crack</h2><br /><p><b><b>Download File</b> &#9658;&#9658;&#9658; <a href="https://byltly.com/2uKwBr">https://byltly.com/2uKwBr</a></b></p><br /><br />
5
- <p>However, Enscape 3.0 is not a cheap software. It costs $699 per year for a single user license, which might be too expensive for some users. That's why some people are looking for a way to download and install Enscape 3.0 free with crack, which is a modified version of the software that bypasses the activation and verification process. In this article, we will show you how to do that in a few simple steps.</p>
6
- <h2>What is Enscape 3.0 Free Download with Crack?</h2>
7
- <p>Enscape 3.0 free download with crack is a hacked version of the original software that allows you to use it without paying for a license or entering a product key. However, this also means that you won't be able to access some of the features and updates that the official software offers, such as online support, cloud rendering, asset library, and bug fixes. Therefore, we recommend that you use Enscape 3.0 free download with crack only for testing purposes and not for professional work.</p>
8
- <h2>How to Download and Install Enscape 3.0 Free Download with Crack?</h2>
9
- <p>To download and install Enscape 3.0 free download with crack, you will need to follow these steps:</p>
10
- <ol>
11
- <li>Download the Enscape 3.0 free download with crack file from a reliable source. You can search for it on Google or use one of the links provided below. Make sure that the file is compatible with your system and has positive reviews from other users.</li>
12
- <li>Extract the Enscape 3.0 free download with crack file using a software like WinRAR or 7-Zip. You will get a folder containing the software files and the crack files.</li>
13
- <li>Copy the crack files and paste them into the software folder. You will need to replace the original files with the cracked ones.</li>
14
- <li>Run the software as administrator and enjoy using Enscape 3.0 free download with crack.</li>
15
- </ol>
16
- <h2>Where to Download Enscape 3.0 Free Download with Crack?</h2>
17
- <p>There are many websites that offer Enscape 3.0 free download with crack, but not all of them are safe and reliable. Some of them might contain viruses, malware, or fake files that can harm your computer or steal your personal information. Therefore, you should be careful when choosing where to download Enscape 3.0 free download with crack from. Here are some of the websites that we recommend:</p>
18
- <p></p>
19
- <ul>
20
- <li><a href="https://enscape-3-0-free-download-with-crack.com/">Enscape 3.0 Free Download with Crack</a>: This website claims to have the latest and working version of Enscape 3.0 free download with crack. It also provides a detailed installation guide and a video tutorial.</li>
21
- <li><a href="https://getintopc.com/softwares/enscape-3-0-free-download-with-crack/">Get Into PC</a>: This website is one of the most popular and trusted sources of free software downloads. It has a large collection of software in different categories and platforms. You can find Enscape 3.0 free download with crack here along with other design software.</li>
22
- <li><a href="https://crackzsoft.com/enscape-3-0-free-download-with-crack/">CrackzSoft</a>: This website is another well-known platform for downloading cracked software. It has a user-friendly interface and fast download speed. You can download Enscape 3.0 free download with crack here as well as other rendering software.</li>
23
- </ul>
24
- <h2>Conclusion</h2>
25
- <p>Enscape 3.0 is a great rendering software that offers fast and realistic results for your architectural projects. However, if you don't want to pay for it or you want to test it before buying it, you can download and</p> ddb901b051<br />
26
- <br />
27
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Video Editor Crack vs. Paid Video Editing Software Which One is Better?.md DELETED
@@ -1,23 +0,0 @@
1
-
2
- <h1>Free Video Editor Crack: Why You Should Avoid It and What Are the Best Alternatives</h1>
3
- <p>If you are looking for a way to edit your videos without spending a lot of money, you might be tempted to download a free video editor crack from the internet. However, this is not a good idea for several reasons. In this article, we will explain why you should avoid free video editor crack and what are the best alternatives to edit your videos professionally and legally.</p>
4
- <h2>Why You Should Avoid Free Video Editor Crack</h2>
5
- <p>Free video editor crack is a pirated version of a paid video editing software that claims to offer the same features and benefits as the original one. However, there are many risks and disadvantages associated with using free video editor crack, such as:</p>
6
- <h2>free video editor crack</h2><br /><p><b><b>Download File</b> &#10022;&#10022;&#10022; <a href="https://byltly.com/2uKxOi">https://byltly.com/2uKxOi</a></b></p><br /><br />
7
- <ul>
8
- <li>It is illegal. Downloading and using free video editor crack violates the intellectual property rights of the software developer and can result in legal consequences.</li>
9
- <li>It is unsafe. Free video editor crack may contain viruses, malware, spyware, or other malicious programs that can harm your computer and compromise your personal data.</li>
10
- <li>It is unreliable. Free video editor crack may not work properly or at all, and may cause errors, crashes, or performance issues on your PC.</li>
11
- <li>It is unsupported. Free video editor crack does not receive any updates, patches, or technical support from the official source, which means that it may not be able to handle new formats, codecs, or features.</li>
12
- </ul>
13
- <h2>What Are the Best Alternatives to Free Video Editor Crack</h2>
14
- <p>Instead of risking your PC's security and performance by using free video editor crack, you should consider using one of the following alternatives:</p>
15
- <ul>
16
- <li>Free Video Editing Software. If you are looking for a free and legal way to edit your videos, you can use one of the many free video editing software available online. Some of the most popular ones are Windows Movie Maker, iMovie, DaVinci Resolve, Lightworks, Shotcut, etc. You can download these software from their official websites or app stores.</li>
17
- <li>Paid Video Editing Software. If you are looking for a more advanced and professional way to edit your videos, you can invest in one of the many paid video editing software available online. Some of the most popular ones are Adobe Premiere Pro, Final Cut Pro X, Sony Vegas Pro, Camtasia Studio, etc. You can buy these software from their official websites or online retailers.</li>
18
- <li>Online Video Editing Services. If you are looking for a hassle-free and convenient way to edit your videos, you can hire one of the many online video editing services available online. These services can edit your videos according to your specifications and deliver them to you in a timely manner. Some of the most popular ones are Fiverr, Upwork, Vidchops, etc. You can find these services on their official websites or online platforms.</li>
19
- </ul>
20
- <h2>Conclusion</h2>
21
- <p>Free video editor crack is not worth downloading or using because it can expose your PC to various threats and problems. Instead, you should opt for a legitimate and reliable video editing software or service that can help you create amazing videos for your personal or professional use. We hope this article has helped you understand why you should avoid free video editor crack and what are the best alternatives to it.</p> ddb901b051<br />
22
- <br />
23
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/GS Typing Tutor Review Features Pricing and License Code.md DELETED
@@ -1,22 +0,0 @@
1
- <br />
2
- <h1>GS Typing Tutor License Code: How to Get It for Free</h1>
3
- <p>GS Typing Tutor is a software that helps you learn keyboard typing, improve your typing speed and accuracy, and test your typing skills. It is suitable for beginners, intermediate and advanced learners, as well as for students, teachers and professionals. GS Typing Tutor offers various features, such as:</p>
4
- <h2>gs typing tutor license code</h2><br /><p><b><b>DOWNLOAD</b> &#10001; &#10001; &#10001; <a href="https://byltly.com/2uKwGG">https://byltly.com/2uKwGG</a></b></p><br /><br />
5
- <ul>
6
- <li>Typing lessons with different levels of difficulty and topics.</li>
7
- <li>Typing games and puzzles to make learning fun and challenging.</li>
8
- <li>Typing tests to measure your progress and performance.</li>
9
- <li>Typing statistics and reports to analyze your strengths and weaknesses.</li>
10
- <li>Customizable settings and options to suit your preferences.</li>
11
- </ul>
12
- <p>GS Typing Tutor is compatible with Windows 98 and later versions, and supports multiple languages, such as English, French, German, Spanish, Italian, Portuguese, Dutch, Swedish, Finnish, Danish and Norwegian. You can download a free trial version of GS Typing Tutor from the official website or from other sources, such as FileHippo or Softonic. The trial version allows you to use the software for 30 days with some limitations. To unlock all the features and use the software without any restrictions, you need to buy a license code from the official website. The license code costs $29.95 for a single user license, $99.95 for a family license (up to 5 users), or $199.95 for a site license (up to 100 users).</p>
13
- <p>But what if you want to use GS Typing Tutor for free without buying a license code? Is there a way to get a free license code or crack the software? The answer is yes, but you will need to be careful and follow some steps. Here is how you can get GS Typing Tutor license code for free:</p>
14
- <ol>
15
- <li>If you have bought GS Typing Tutor before but lost your license code, you can retrieve it from the official website. You need to fill out a form with your name, email address and order ID, and click the "Submit" button. You will receive your license code by email within 24 hours.</li>
16
- <li>If you have not bought GS Typing Tutor before but want to use it for free, you can try to find a free license code or a cracked version of the software online. However, this is illegal and risky. You may face legal consequences or malware infections if you download from untrusted sources or use fake codes. We do not condone piracy and recommend that you buy the software from the official website if you can afford it.</li>
17
- <li>If you want to use GS Typing Tutor legally and safely for free, you can look for alternative software that offers similar features and functions. There are many free typing tutor software available online, such as TIPP10, Rapid Typing Tutor, KeyBlaze Free Typing Tutor, Tux Typing, TypingMaster, Sonma Typing-Expert, and more. You can compare their pros and cons and choose the one that suits your needs best.</li>
18
- </ol>
19
- <p>Note: Using GS Typing Tutor without a valid license code is illegal and risky. You may face legal consequences or malware infections if you download from untrusted sources or use fake codes. We do not condone piracy and recommend that you buy the software from the official website if you can afford it.</p>
20
- <p></p> ddb901b051<br />
21
- <br />
22
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/4media Video Cutter 2 Serial Crack Internet Profesional.md DELETED
@@ -1,10 +0,0 @@
1
- <br />
2
- <p>Microsoft, Windows and Internet Explorer are trademarks or registered. You can find the serial number of the printer on. (or 'Allow another pro-. 4Media Video Converter Ultimate 7.8.28 Crack. It’s an excellent software for all of us to more or less convert video files. It’s an excellent software for all of us to more or less convert video files. 4Media Audio Converter Pro; 4Media AVCHD Converter; 4Media AVI to. 4Media Audio Converter Pro; 4Media MP4 to MP3 Converter; 4Media MP4 to MP3 Converter. 4Media Audio Converter Pro; 4Media AVCHD Converter; 4Media AVI to.</p>
3
- <h2>4media Video Cutter 2 Serial Crack internet profesional</h2><br /><p><b><b>Download Zip</b> &#10001; <a href="https://imgfil.com/2uy24S">https://imgfil.com/2uy24S</a></b></p><br /><br />
4
- <p>No matter how you download 4Media Video Cutter Pro 2 Serial Crack, it is very easy to crack in a few simple steps. 4Media Video Cutter Pro 2 Serial Crack and the serial numbers of almost all computer games were known before official publishers release their products, they have been included in our database. </p>
5
- <p>Cool Campfire GXP 3.5.9 Incl Registration Keygen Serial Number [Latest]. Realistic Fire Night. Full cracked version for Free. GX PLUS 4.5.3. F/L/X Win.exe. Notice : We do not own the services in any way, all I have are the copyrights of the developers. </p>
6
- <p>4Media Video Cutter 2 Serial Crack internet profesional ##TOP##. 4Media Video Converter Ultimate 7.8.28 Crack; 4Media Video Cutter 2; 4Media Video Editor 2; 4Media Video Joiner 2; 4Media Video Cutter 2 Serial Crack ; 4Media Audio Converter. </p>
7
- <p>Installation of the latest update of. in parallel, lets you transfer files between hard disc and USB. The software features may seem completely smooth from view. 4Media Video Cutter 2 Serial Crack internet profesional </p>
8
- <p></p> 899543212b<br />
9
- <br />
10
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Abbyy Pdf Transformer 3.0 Full Crack [HOT].md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Abbyy Pdf Transformer 3.0 Full Crack</h2><br /><p><b><b>DOWNLOAD</b> &#10042; <a href="https://imgfil.com/2uy27s">https://imgfil.com/2uy27s</a></b></p><br /><br />
2
-
3
- 008 Exhaust Okay I have a 2000 Isuzu Trooper 4x4 with Tod torque on demand a ... and Items 1 - 16 of 61 3 Pdf isuzu 3ld1 cylinder head torque settings - Searches ... Identifier-ark ark:/13960/t5hb1v348 Ocr ABBYY FineReader 9. ... go to far beyond that bolt torque spec you will crack the composite gasket. 4d29de3e1b<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen ((BETTER)).md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen</h2><br /><p><b><b>Download</b> &middot;&middot;&middot; <a href="https://imgfil.com/2uy1kC">https://imgfil.com/2uy1kC</a></b></p><br /><br />
2
- <br />
3
- Aimersoft Video Converter Ultimate 9 Serial Key & Crack Capacity to. ... Aimersoft Video Converter Ultimate 4.1.0.2 + Serial-[HB].dll 123.46 . 4d29de3e1b<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/All Black - The Punjabi Hit Song by Raftaar and Sukh-E Muzical Doctorz - MP3 Download.md DELETED
@@ -1,80 +0,0 @@
1
-
2
- <h1>How to Download MP3 Song All Black by Sukh-E and Raftaar</h1>
3
- <p>If you are a fan of Indian music, you might have heard of the hit song All Black by Sukh-E and Raftaar. This song is a fusion of Punjabi rap and pop music, with catchy lyrics and beats that will make you want to dance. The song is about celebrating life and love in style, with references to luxury brands and cars.</p>
4
- <h2>download mp3 song all black</h2><br /><p><b><b>DOWNLOAD</b> &#10004;&#10004;&#10004; <a href="https://urlin.us/2uT0ad">https://urlin.us/2uT0ad</a></b></p><br /><br />
5
- <p>All Black was released in 2015 and became an instant success, topping the charts in India and abroad. The song has over 400 million views on YouTube and has won several awards, including the PTC Punjabi Music Award for Best Duo/Group Song in 2016. The song also features in the Bollywood movie Baar Baar Dekho, starring Katrina Kaif and Sidharth Malhotra.</p>
6
- <p>The Indian music industry is one of the largest and most diverse in the world, producing songs in various languages, genres, and styles. Indian music has also gained popularity worldwide, thanks to its unique blend of tradition and modernity, as well as its influence on other forms of music, such as hip hop, reggae, and EDM.</p>
7
- <p>If you love this song and want to listen to it anytime, anywhere, you might want to download it as an MP3 file. MP3 is a digital audio format that compresses sound data without losing much quality. By downloading MP3 songs, you can enjoy several benefits, such as:</p>
8
- <h2>Benefits of Downloading MP3 Songs</h2>
9
- <ul>
10
- <li><b>Quality:</b> MP3 format preserves the original sound quality of the song, so you can hear every detail and nuance of the music. You can also choose the bitrate of the MP3 file, which determines how much data is used per second of audio. The higher the bitrate, the better the quality, but also the larger the file size.</li>
11
- <li><b>Convenience:</b> MP3 files are easy to store, transfer, and play on any device, such as your computer, smartphone, tablet, or music player. You can also create playlists, edit tags, and organize your music library according to your preferences.</li>
12
- <li><b>Offline access:</b> MP3 files can be accessed without internet connection or streaming fees. You can listen to your favorite songs offline whenever you want, without worrying about buffering, interruptions, or data charges.</li>
13
- </ul>
14
- <p>So how can you download MP3 song all black? There are several methods you can use, depending on your source and preference. Here are some of them:</p>
15
- <h2>Methods of Downloading MP3 Song All Black</h2>
16
- <ol>
17
- <li><b>Online converters:</b> One of the easiest ways to download MP3 song all black is to use online tools that can convert YouTube videos or other sources to MP3 files. All you need to do is copy the URL of the video you want to convert, paste it into the online converter, and click the download button. Some of the popular online converters are: - [YTMP3]: This website can convert YouTube videos to MP3 or MP4 files, with high quality and fast speed. You can download up to 1 hour of video at a time, and there is no registration or software installation required. - [OnlineVideoConverter]: This website can convert videos from YouTube, Vimeo, Dailymotion, and other platforms to MP3, MP4, AVI, and other formats. You can also choose the quality and format of the output file, and crop or trim the video if needed. - [MP3Skull]: This website can download MP3 songs from YouTube, SoundCloud, and other sources. You can also search for songs by name, artist, or genre, and listen to them online before downloading. To download MP3 song all black using online converters, follow these steps: - Go to YouTube and search for the song All Black by Sukh-E and Raftaar. - Copy the URL of the video from the address bar. - Go to one of the online converters mentioned above and paste the URL into the input box. - Choose MP3 as the output format and click the convert or download button. - Wait for the conversion process to finish and download the MP3 file to your device.</li>
18
- <li><b>Websites:</b> Another way to download MP3 song all black is to use websites that offer free or paid downloads of MP3 songs. Some of these websites are: - [Pagalworld]: This website provides free downloads of Bollywood, Punjabi, Indipop, and DJ remix songs. You can browse by category, artist, or album, and download songs in various qualities and sizes. - [Gaana]: This website is a leading music streaming service in India, offering millions of songs in different languages and genres. You can also download songs for offline listening with a premium subscription. - [Hungama]: This website is a digital entertainment platform that offers music, movies, videos, and games. You can download songs for free with a limited number of downloads per month, or get unlimited downloads with a paid plan. To download MP3 song all black using websites, follow these steps: - Go to one of the websites mentioned above and search for the song All Black by Sukh-E and Raftaar. - Click on the download or play button next to the song title. - Choose the quality and format of the MP3 file and click the confirm or save button. - Download the MP3 file to your device.</li>
19
- <li><b>Apps:</b> A third way to download MP3 song all black is to use apps that allow users to download MP3 songs from various platforms. Some of these apps are: - [VidMate]: This app is a powerful video downloader that can download videos and music from YouTube, Facebook, Instagram, and other sites. You can also watch live TV, movies, and shows on this app. - [Snaptube]: This app is a simple and fast video downloader that can download videos and music from YouTube, Facebook, TikTok, and other platforms. You can also explore trending videos and music on this app. - [Wynk Music]: This app is a popular music streaming service that offers over 6 million songs in various languages and genres. You can also download songs for offline listening with a premium subscription. To download MP3 song all black using apps, follow these steps: - Download and install one of the apps mentioned above on your device. - Open the app and search for the song All Black by Sukh-E and Raftaar. - Tap on the download or play button next to the song title. - Choose the quality and format of the MP3 file and tap the confirm or save button. - Download the MP3 file to your device.</li>
20
- </ol>
21
- <h2>Conclusion</h2>
22
- <p>Downloading MP3 song all black is easy and convenient with these methods. You can enjoy this amazing song in high quality, offline mode, and on any device you want. Whether you use online converters, websites, or apps, you can get your favorite song in just a few clicks.</p>
23
- <p>So what are you waiting for? Download MP3 song all black today and groove to its catchy beats. You will surely love this song as much as we do.</p>
24
- <p>download mp3 song all black by raftaar and sukh-e<br />
25
- download mp3 song all black from afrocharts<br />
26
- download mp3 song all black punjabi<br />
27
- download mp3 song all black 320kbps<br />
28
- download mp3 song all black remix<br />
29
- download mp3 song all black video<br />
30
- download mp3 song all black lyrics<br />
31
- download mp3 song all black dj<br />
32
- download mp3 song all black mr jatt<br />
33
- download mp3 song all black pagalworld<br />
34
- download mp3 song all black djpunjab<br />
35
- download mp3 song all black ringtone<br />
36
- download mp3 song all black hd<br />
37
- download mp3 song all black full<br />
38
- download mp3 song all black online<br />
39
- download mp3 song all black free<br />
40
- download mp3 song all black gaana<br />
41
- download mp3 song all black jiosaavn<br />
42
- download mp3 song all black spotify<br />
43
- download mp3 song all black apple music<br />
44
- download mp3 song all black youtube<br />
45
- download mp3 song all black soundcloud<br />
46
- download mp3 song all black audiomack<br />
47
- download mp3 song all black wapking<br />
48
- download mp3 song all black waploaded<br />
49
- download mp3 song all black naijaloaded<br />
50
- download mp3 song all black tooxclusive<br />
51
- download mp3 song all black notjustok<br />
52
- download mp3 song all black 9jaflaver<br />
53
- download mp3 song all black fakaza<br />
54
- download mp3 song all black zamusic<br />
55
- download mp3 song all black sahiphopmag<br />
56
- download mp3 song all black hiphopza<br />
57
- download mp3 song all black hitvibes<br />
58
- download mp3 song all black flexyjamz<br />
59
- download mp3 song all black afrobeat9ja<br />
60
- download mp3 song all black afrohouseking<br />
61
- download mp3 song all black afrofire<br />
62
- download mp3 song all black malawi-music.com<br />
63
- download mp3 song all black zambianmusicblog.co.zm<br />
64
- download mp3 song all black zedgossip.net</p>
65
- <p>If you liked this article, please share it with your friends and family. Also, let us know your feedback and comments below. We would love to hear from you.</p>
66
- <h2>FAQs</h2>
67
- <ul>
68
- <li><b>Q: What is the <b>Q: What is the genre of the song All Black?</b></li>
69
- <li><b>A: The song All Black is a fusion of Punjabi rap and pop music, with elements of trap, EDM, and R&B. The song is sung by Sukh-E and Raftaar, who are both popular Indian rappers and singers.</b></li>
70
- <li><b>Q: How long is the song All Black?</b></li>
71
- <li><b>A: The song All Black is 3 minutes and 51 seconds long. The video of the song is 3 minutes and 46 seconds long.</b></li>
72
- <li><b>Q: Where can I watch the video of the song All Black?</b></li>
73
- <li><b>A: You can watch the video of the song All Black on YouTube, where it has over 400 million views. You can also watch it on other platforms, such as Gaana, Hungama, or VidMate.</b></li>
74
- <li><b>Q: What are some of the lyrics of the song All Black?</b></li>
75
- <li><b>A: Some of the lyrics of the song All Black are: - Kaali kaali aineka paa kaale kaale shoes Leke gaddi kaali kaali teri gali vad'da Bach ni, tere yaar ne sare karke vekh liye Ni tu kalli kalli nitt kardi ae bluff ni - Black black window black black seat Black black eyes black black suit Black black everything black black heart Girl you're looking so fly in all black - You know me I'm a rap star I like to go fast in my fast car I like to wear black you know my style You know I'm a rockstar baby just smile</b></li>
76
- <li><b>Q: What are some of the awards that the song All Black has won?</b></li>
77
- <li><b>A: Some of the awards that the song All Black has won are: - PTC Punjabi Music Award for Best Duo/Group Song in 2016 - Mirchi Music Award for Listener's Choice Song of the Year in 2016 - Global Indian Music Academy Award for Best Music Video in 2016</b></li>
78
- </ul></p> 197e85843d<br />
79
- <br />
80
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Latest APK Enjoy the Epic Multiplayer Battles on Android.md DELETED
@@ -1,85 +0,0 @@
1
-
2
- <h1>Brawl Stars Latest APK: How to Download and Play the Popular Game</h1>
3
- <p>If you are looking for a fun and exciting game to play on your mobile device, you might want to check out Brawl Stars. Brawl Stars is a multiplayer game that lets you compete with other players in various modes and arenas. You can also collect and upgrade different characters, called brawlers, each with their own unique skills and abilities. In this article, we will tell you everything you need to know about Brawl Stars, including how to download and install the latest APK version of the game.</p>
4
- <h2>brawl stars latest apk</h2><br /><p><b><b>Download Zip</b> &#10026; <a href="https://urlin.us/2uT22e">https://urlin.us/2uT22e</a></b></p><br /><br />
5
- <h2>What is Brawl Stars?</h2>
6
- <h3>A fast-paced multiplayer game with different modes and characters</h3>
7
- <p>Brawl Stars is a game that combines elements of shooting, fighting, strategy, and teamwork. You can choose from over 40 brawlers, each with their own strengths and weaknesses, and use them to battle other players in various modes. Some of the modes include:</p>
8
- <ul>
9
- <li>Gem Grab: Collect gems from the center of the map while preventing your opponents from doing the same.</li>
10
- <li>Showdown: Survive as long as you can in a solo or duo battle royale.</li>
11
- <li>Brawl Ball: Score goals by carrying or kicking a ball into the enemy's goal.</li>
12
- <li>Heist: Attack or defend a safe filled with valuables.</li>
13
- <li>Bounty: Earn stars by eliminating your enemies and lose them by getting eliminated.</li>
14
- <li>Hot Zone: Control zones on the map for a certain amount of time.</li>
15
- <li>Knockout: Eliminate all the enemies in a best-of-three match.</li>
16
- </ul>
17
- <h3>A free-to-play game with optional in-app purchases</h3>
18
- <p>Brawl Stars is free to download and play, but you can also spend real money to buy gems, which are the premium currency of the game. Gems can be used to buy brawl boxes, which contain brawlers, coins, power points, star points, or gadgets. You can also use gems to buy skins, which change the appearance of your brawlers, or passes, which give you access to exclusive rewards and quests. However, spending money is not necessary to enjoy the game, as you can also earn gems, coins, power points, star points, and gadgets by playing the game regularly.</p>
19
- <h3>A game developed by Supercell, the makers of Clash of Clans and Clash Royale</h3>
20
- <p>Brawl Stars is developed by Supercell, a Finnish company that is known for creating popular mobile games such as Clash of Clans, Clash Royale, Hay Day, and Boom Beach. Supercell is also known for its high-quality graphics, sound effects, music, and animations. Brawl Stars has a colorful and cartoony style that appeals to both kids and adults. The game also has frequent updates that add new brawlers, modes, maps, events, skins, and features.</p>
21
- <p>brawl stars apk download latest version 2023<br />
22
- brawl stars mod apk unlimited gems and coins latest version<br />
23
- brawl stars hack apk download latest version<br />
24
- brawl stars new update apk download<br />
25
- brawl stars apk for android tv<br />
26
- brawl stars apk for pc windows 10<br />
27
- brawl stars apk for ios devices<br />
28
- brawl stars apk mirror download link<br />
29
- brawl stars apk pure free download<br />
30
- brawl stars apk uptodown latest version<br />
31
- brawl stars private server apk download<br />
32
- brawl stars nulls apk download latest version<br />
33
- brawl stars rebrawl apk download<br />
34
- brawl stars lwarb beta apk download<br />
35
- brawl stars phoenix apk download<br />
36
- brawl stars gameplay tips and tricks apk<br />
37
- brawl stars best brawlers guide apk<br />
38
- brawl stars tier list 2023 apk<br />
39
- brawl stars club rankings and stats apk<br />
40
- brawl stars esports tournaments and news apk<br />
41
- brawl stars fan art and wallpapers apk<br />
42
- brawl stars skins and cosmetics apk<br />
43
- brawl stars memes and jokes apk<br />
44
- brawl stars quizzes and trivia apk<br />
45
- brawl stars soundtracks and ringtones apk</p>
46
- <h2>Why download the latest APK of Brawl Stars?</h2>
47
- <h3>To enjoy the new features and updates of the game</h3>
48
- <p>One of the reasons to download the latest APK of Brawl Stars is to enjoy the new features and updates that the game offers. For example, the latest version of the game, which was released on June 15, 2023, introduced a new brawler named Buzz, who is a lifeguard with a grappling hook and a stun ability. The update also added new skins, maps, quests, balance changes, and bug fixes. By downloading the latest APK of Brawl Stars, you can experience the game at its best and avoid missing out on any of the fun.</p>
49
- <h3>To avoid compatibility issues and bugs</h3>
50
- <p>Another reason to download the latest APK of Brawl Stars is to avoid compatibility issues and bugs that might affect your gameplay. Sometimes, older versions of the game might not work properly on newer devices or operating systems, or might have glitches that prevent you from playing smoothly. By downloading the latest APK of Brawl Stars, you can ensure that your game runs smoothly and without any problems.</p>
51
- <h3>To access the game in regions where it is not officially available</h3>
52
- <p>A final reason to download the latest APK of Brawl Stars is to access the game in regions where it is not officially available. Brawl Stars is a global game that is available in most countries, but there might be some regions where it is not yet released or banned for some reason. If you live in one of those regions, you might not be able to download the game from the official app store. However, by downloading the latest APK of Brawl Stars from a third-party source, you can bypass this restriction and play the game wherever you are.</p>
53
- <h2>How to download and install the latest APK of Brawl Stars?</h2>
54
- <h3>Step 1: Find a reliable source for the APK file</h3>
55
- <p>The first step to download and install the latest APK of Brawl Stars is to find a reliable source for the APK file. You can search online for websites that offer APK files for various apps and games, but be careful to choose a trustworthy and safe one. Some websites might have fake or malicious APK files that could harm your device or steal your data. To avoid this, you can check the reviews and ratings of the website, or use antivirus software to scan the APK file before downloading it.</p>
56
- <h3>Step 2: Enable unknown sources on your device settings</h3>
57
- <p>The second step to download and install the latest APK of Brawl Stars is to enable unknown sources on your device settings. This is because your device might not allow you to install apps from sources other than the official app store by default. To enable unknown sources, you can go to your device settings, then security or privacy, then toggle on the option that allows installation from unknown sources. You might also need to grant permission to your browser or file manager to install apps from unknown sources.</p>
58
- <h3>Step 3: Download and install the APK file</h3>
59
- <p>The third step to download and install the latest APK of Brawl Stars is to download and install the APK file. You can do this by clicking on the download link or button on the website where you found the APK file, then waiting for it to finish downloading. Once it is downloaded, you can open it with your file manager or browser, then tap on install. You might need to accept some terms and conditions before proceeding with the installation.</p>
60
- <h3>Step 4: Launch the game and sign in with your account</h3>
61
- <p>The final step to download and install the latest APK of Brawl Stars is to launch the game and sign in with your account. You can do this by tapping on the game icon on your home screen or app drawer, then waiting for it to load. You might need to agree to some permissions or policies before playing the game. Once you are in the game, you can sign in with your Supercell ID, Google Play Games account, Facebook account, or Apple ID, depending on your device and preference. This will allow you to sync your progress and purchases across different devices.</p>
62
- <h2>What are some tips and tricks for playing Brawl Stars?</h2>
63
- <h3>Choose your brawler wisely according to your play style and mode</h3>
64
- <p>One of the tips for playing Brawl Stars is to choose your brawler wisely according to your play style and mode. As mentioned earlier, there are over 40 brawlers in the game, each with their own unique skills and abilities. Some brawlers are better suited for certain modes or situations than others. For example, some brawlers are good at close-range combat, while others are good at long-range combat. Some brawlers are good at dealing damage, while others are good at supporting or healing their teammates. Some brawlers are good at controlling zones or objectives, while others are good at sneaking or stealing gems or stars. Therefore, you should choose your brawler according to your play style and the mode you are playing. You can also switch your brawler between matches if you want to try a different strategy or adapt to the enemy's team composition.</p>
65
- <h3>Collect and upgrade your brawlers to unlock their super abilities, star powers, and gadgets</h3>
66
- <p>Another tip for playing Brawl Stars is to collect and upgrade your brawlers to unlock their super abilities, star powers, and gadgets. Super abilities are powerful moves that can be activated once you fill up your super meter by attacking or taking damage. Star powers are passive skills that enhance your brawler's performance in some way. Gadgets are active items that can be used once per match to give you an edge in certain situations. You can unlock super abilities by reaching level 2 with your brawler, star powers by reaching level 9, and gadgets by reaching level 7. You can also upgrade your brawler's power level by using coins and power points, which increase their health, damage, and super damage.</p>
67
- <h3>Join a club and team up with other players for more fun and rewards</h3>
68
- <p>A final tip for playing Brawl Stars is to join a club and team up with other players for more fun and rewards. A club is a group of players who can chat, play, and compete together. You can join an existing club or create your own one. By joining a club, you can make friends, learn from other players, and participate in club events or wars. You can also team up with other players from your club or from the global chat to play together in friendly or competitive matches. By playing with teammates, you can coordinate your strategies, communicate with voice chat, and earn more trophies and rewards.</p>
69
- <h2>Conclusion</h2>
70
- <p>Brawl Stars is a popular game that offers a lot of fun and excitement for mobile gamers. You can download and install the latest APK of Brawl Stars to enjoy the new features and updates of the game, avoid compatibility issues and bugs, and access the game in regions where it is not officially available. You can also follow some tips and tricks to improve your skills and performance in the game, such as choosing your brawler wisely, collecting and upgrading your brawlers, and joining a club and teaming up with other players. If you are ready to join the brawl, download the latest APK of Brawl Stars now and start playing!</p>
71
- <h2>FAQs</h2>
72
- <ul>
73
- <li><b>Q: Is Brawl Stars safe to download and play?</b></li>
74
- <li>A: Yes, Brawl Stars is safe to download and play as long as you download it from a reliable source and follow the installation steps correctly. However, you should be careful not to share your personal or account information with anyone or use any third-party tools or hacks that might compromise your security or violate the game's terms of service.</li>
75
- <li><b>Q: How can I get more gems, coins, power points, star points, or gadgets in Brawl Stars?</b></li>
76
- <li>A: You can get more gems by buying them with real money or earning them by playing the game regularly. You can get more coins by opening brawl boxes or completing quests. You can get more power points by opening brawl boxes or using coins to buy them in the shop. You can get more star points by reaching rank 10 or higher with your brawlers or participating in power play matches. You can get more gadgets by opening brawl boxes or using gems to buy them in the shop.</li>
77
- <li><b>Q: How can I get new brawlers or skins in Brawl Stars?</b></li>
78
- <li>A: You can get new brawlers by opening brawl boxes or using gems to buy them in the shop. Some brawlers are also available as rewards for reaching certain trophy milestones or completing certain challenges. You can get new skins by using gems or star points to buy them in the shop. Some skins are also available as rewards for reaching certain ranks or seasons with your brawlers or completing certain events or quests.</li>
79
- <li><b>Q: How can I contact Supercell or report a problem or feedback about Brawl Stars?</b></li>
80
- <li>A: You can contact Supercell or report a problem or feedback about Brawl Stars by using the in-game support feature. You can access it by tapping on the settings icon on the top right corner of the main screen, then tapping on help and support. You can then browse through the FAQs or contact the support team directly.</li>
81
- <li><b>Q: How can I learn more about Brawl Stars?</b></li>
82
- <li>A: You can learn more about Brawl Stars by visiting the official website (https://supercell.com/en/games/brawlstars/), following the official social media accounts (Facebook, Twitter, Instagram, YouTube, Reddit, Discord), or watching the official videos or streams (Brawl Talk, Brawl Stars Championship, Brawl Stars Esports). You can also learn more about Brawl Stars by reading the official blog (https://blog.brawlstars.com/), joining the official forums (https://forum.supercell.com/forumdisplay.php/122-Brawl-Stars), or checking out the fan-made wiki (https://brawlstars.fandom.com/wiki/Brawl_Stars_Wiki).</li>
83
- </ul>
84
- <br />
85
- <br />
spaces/1phancelerku/anime-remove-background/Air1 Roku App Listen to Worship Music from the Comfort of Your Home Television.md DELETED
@@ -1,175 +0,0 @@
1
-
2
- <h1>How to Download Air1.com and Enjoy Worship Music Anywhere</h1>
3
- <p>Do you love worship music and want to listen to it wherever you go? Do you want to grow in your faith and get inspired by daily verses and prayers? If you answered yes, then you should download air1.com, a website that offers worship music and faith content for free. In this article, we will show you how to download air1.com on different devices, such as your phone, tablet, computer, smart speaker, or TV. We will also explain why listening to air1.com can benefit your spiritual life and well-being.</p>
4
- <h2>What is Air1.com and Why You Should Listen to It</h2>
5
- <h3>Air1.com is a website that offers worship music and faith content</h3>
6
- <p>Air1.com is a website that plays worship music from various artists, such as Elevation Worship, Maverick City Music, Shane & Shane, and more. You can listen to air1.com live or on demand, and discover new songs and artists that will uplift your soul. You can also explore in-depth music content with air1.com's top artists, such as interviews, videos, lyrics, and stories behind the songs.</p>
7
- <h2>download air1.com</h2><br /><p><b><b>Download Zip</b> ->>> <a href="https://jinyurl.com/2uNPiu">https://jinyurl.com/2uNPiu</a></b></p><br /><br />
8
- <p>But air1.com is more than just music. It is also a website that offers faith content that will help you grow in your relationship with God. You can read the Verse of the Day, submit requests for prayer and pray for others, dive deeper into all new faith content, and get inspired and share beautiful daily verse images. You can also enter contests, get exclusive content, free tickets, and new songs from air1.com.</p>
9
- <h3>Listening to air1.com can benefit your spiritual life and well-being</h3>
10
- <p>Listening to air1.com can have many benefits for your spiritual life and well-being. Here are some of them:</p>
11
- <ul>
12
- <li>Listening to worship music can help you praise God and express your love and gratitude for Him.</li>
13
- <li>Listening to worship music can help you connect with God and feel His presence and peace in your life.</li>
14
- <li>Listening to worship music can help you meditate on God's word and apply it to your daily situations.</li>
15
- <li>Listening to worship music can help you overcome stress, anxiety, fear, depression, and other negative emotions.</li>
16
- <li>Listening to worship music can help you heal from emotional wounds and find hope and joy in God.</li>
17
- <li>Listening to worship music can help you strengthen your faith and trust in God's promises and plans for you.</li>
18
- </ul>
19
- <h2>How to Download Air1.com on Different Devices</h2>
20
- <h3>Download the Air1 App for Android or iOS</h3>
21
- <h4>Features of the Air1 App</h4>
22
- <p>If you want to listen to air1.com on your phone or tablet, you can download the Air1 app for Android or iOS. The Air1 app has many features that will enhance your listening experience. You can:</p>
23
- <ul>
24
- <li>Create a list of your favorite songs</li>
25
- <li>Discover air1 stations in your city or while traveling</li> <li>Rate the songs and give feedback to air1.com</li>
26
- <li>Access the Verse of the Day, prayer requests, and faith content</li>
27
- <li>Enter contests and get exclusive content from air1.com</li>
28
- </ul>
29
- <h4>How to Install the Air1 App</h4>
30
- <p>To install the Air1 app on your Android or iOS device, follow these simple steps:</p>
31
- <ol>
32
- <li>Go to the Google Play Store or the App Store on your device.</li>
33
- <li>Search for "Air1" and tap on the app icon.</li>
34
- <li>Tap on "Install" or "Get" and wait for the app to download.</li>
35
- <li>Open the app and enjoy listening to air1.com.</li>
36
- </ol>
37
- <h3>Enable the Air1 Skill for Amazon Alexa</h3>
38
- <h4>How to Enable the Air1 Skill</h4>
39
- <p>If you have an Amazon Alexa device, such as an Echo, Dot, or Show, you can enable the Air1 skill and listen to air1.com with voice commands. To enable the Air1 skill, follow these steps:</p>
40
- <ol>
41
- <li>Open the Alexa app on your phone or tablet.</li>
42
- <li>Tap on the menu icon and select "Skills & Games".</li>
43
- <li>Search for "Air1" and tap on the skill icon.</li>
44
- <li>Tap on "Enable to Use" and follow the instructions to link your account.</li>
45
- <li>Say "Alexa, open Air1" to start listening to air1.com.</li>
46
- </ol>
47
- <h4>How to Use the Air1 Skill</h4>
48
- <p>Once you have enabled the Air1 skill, you can use voice commands to control your listening experience. Here are some examples of what you can say:</p>
49
- <ul>
50
- <li>"Alexa, play Air1"</li>
51
- <li>"Alexa, pause"</li>
52
- <li>"Alexa, resume"</li>
53
- <li>"Alexa, skip"</li>
54
- <li>"Alexa, what song is this?"</li>
55
- <li>"Alexa, who sings this song?"</li>
56
- <li>"Alexa, tell me more about this artist"</li>
57
- <li>"Alexa, play my favorites"</li>
58
- <li>"Alexa, rate this song"</li>
59
- <li>"Alexa, give me the Verse of the Day"</li>
60
- </ul>
61
- <h3>Listen to Air1 Online Through iHeartRadio or TuneIn</h3>
62
- <h4>How to Access Air1 on iHeartRadio or TuneIn</h4>
63
- <p>If you prefer to listen to air1.com online through your computer or browser, you can use iHeartRadio or TuneIn. These are online platforms that let you stream live radio stations from around the world. To access air1.com on iHeartRadio or TuneIn, follow these steps:</p>
64
- <p>How to download air1.com worship music<br />
65
- Download air1.com app for Android or iOS<br />
66
- Download air1.com podcast with Candace Cameron Bure<br />
67
- Download air1.com exclusive performance videos<br />
68
- Download air1.com influencer survey and win prizes<br />
69
- Download air1.com daily Bible verses and devotions<br />
70
- Download air1.com live stream and listen online<br />
71
- Download air1.com latest worship songs and lyrics<br />
72
- Download air1.com book offer for Father's Day<br />
73
- Download air1.com station finder and locate nearby stations<br />
74
- Download air1.com donation receipt and support the ministry<br />
75
- Download air1.com prayer request form and share your needs<br />
76
- Download air1.com concert tickets and see your favorite artists<br />
77
- Download air1.com newsletter and get updates and offers<br />
78
- Download air1.com wallpapers and backgrounds for your devices<br />
79
- Download air1.com artist interviews and stories<br />
80
- Download air1.com song request feature and hear what you want<br />
81
- Download air1.com feedback form and share your opinions<br />
82
- Download air1.com volunteer opportunities and serve your community<br />
83
- Download air1.com merchandise and show your support<br />
84
- Download air1.com music playlist and discover new songs<br />
85
- Download air1.com radio app for Windows or Mac<br />
86
- Download air1.com social media links and follow them online<br />
87
- Download air1.com testimonies and be inspired by others<br />
88
- Download air1.com events calendar and plan your schedule<br />
89
- Download air1.com contact information and get in touch with them<br />
90
- Download air1.com career opportunities and join their team<br />
91
- Download air1.com listener stories and hear how they impact lives<br />
92
- Download air1.com music charts and see what's trending<br />
93
- Download air1.com song history and find out what played when<br />
94
- Download air1.com music reviews and ratings<br />
95
- Download air1.com FAQs and answers<br />
96
- Download air1.com press releases and media kit<br />
97
- Download air1.com partner resources and tools<br />
98
- Download air1.com privacy policy and terms of use<br />
99
- Download air1.com station logos and images<br />
100
- Download air1.com music videos and watch online<br />
101
- Download air1.com song lyrics and sing along<br />
102
- Download air1.com artist bios and photos<br />
103
- Download air1.com music genres and categories<br />
104
- Download air1.com music awards and nominations<br />
105
- Download air1.com music trivia and quizzes<br />
106
- Download air1.com music news and updates<br />
107
- Download air1.com music blogs and articles<br />
108
- Download air1.com music podcasts and episodes<br />
109
- Download air1.com music contests and giveaways<br />
110
- Download air1.com music coupons and discounts<br />
111
- Download air1.com music recommendations and suggestions</p>
112
- <ol>
113
- <li>Go to <a href="">iHeartRadio.com</a> or <a href="">TuneIn.com</a> on your computer or browser.</li>
114
- <li>Search for "Air1" and click on the station logo.</li>
115
- <li>Enjoy listening to air1.com online.</li>
116
- </ol>
117
- <h4>Benefits of Listening to Air1 Online</h4>
118
- <p>Listening to air1.com online through iHeartRadio or TuneIn has some benefits that you might like. For example, you can:</p>
119
- <ul>
120
- <li>Create a free account and save your favorite stations and podcasts.</li>
121
- <li>Browse other genres and categories of music and radio stations.</li>
122
- <li>Listen to air1.com on other devices that support iHeartRadio or TuneIn, such as smart TVs, gaming consoles, wearables, and more.</li>
123
- </ul>
124
- <h3>Listen to Air1 on Your Home Television with Roku</h3>
125
- <h4>How to Install the Air1 Roku App</h4>
126
- <p>If you have a Roku device connected to your home television, you can install the Air1 Roku app and listen to air1.com on your TV. To install the Air1 Roku app, follow these steps:</p>
127
- <ol>
128
- <li>Turn on your Roku device and TV.</li>
129
- <li>Navigate to the Roku Channel Store and search for "Air1".</li>
130
- <li>Select the Air1 app and click on "Add Channel".</li>
131
- <li>Wait for the app to download and install.</li>
132
- <li>Open the app and enjoy listening to air1.com on your TV.</li>
133
- </ol>
134
- <h4>Features of the Air1 Roku App</h4>
135
- <p>The Air1 Roku app has some features that will make your listening experience more enjoyable. You can:</p>
136
- <ul>
137
- <li>Select from different audio quality options: low, medium, or high.</li>
138
- <li>View the song title, artist name, album art, and lyrics on your TV screen.</li>
139
- <li>Rate the songs and give feedback to air1.com.</li>
140
- <li>Access the Verse of the Day, prayer requests, and faith content.</li>
141
- <li>Enter contests and get exclusive content from air1.com.</li>
142
- </ul>
143
- <h2>Conclusion and FAQs</h2>
144
- <h3>Summary of the Main Points</h3>
145
- <p>In this article, we have shown you how to download air1.com and enjoy worship music anywhere. We have explained what air1.com is and why you should listen to it. We have also given you instructions on how to download air1.com on different devices, such as your phone, tablet, computer, smart speaker, or TV. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us at [email protected]. Thank you for reading and happy listening!</p>
146
- <h3>FAQs About Downloading and Listening to Air1.com</h3>
147
- <p>Here are some frequently asked questions about downloading and listening to air1.com:</p>
148
- <table>
149
- <tr>
150
- <th>Question</th>
151
- <th>Answer</th>
152
- </tr>
153
- <tr>
154
- <td>Is air1.com free to download and listen?</td>
155
- <td>Yes, air1.com is free to download and listen. However, you may incur data charges from your internet service provider or mobile carrier depending on your plan and usage.</td>
156
- </tr>
157
- <tr>
158
- <td>Can I listen to air1.com offline?</td>
159
- <td>No, you need an internet connection to listen to air1.com. However, you can download some of the faith content, such as the Verse of the Day and daily verse images, and access them offline.</td>
160
- </tr>
161
- <tr>
162
- <td>Can I request a song or a prayer on air1.com?</td>
163
- <td>Yes, you can request a song or a prayer on air1.com. You can call 888-937-2471 or text 38568 to request a song. You can also submit a prayer request online or call 888-937-2471 to pray with someone.</td>
164
- </tr>
165
- <tr>
166
- <td>How can I support air1.com?</td>
167
- <td>You can support air1.com by donating online or by phone at 888-937-2471. You can also support air1.com by sharing it with your friends and family, following it on social media, and leaving a positive review on the app store or the website.</td>
168
- </tr>
169
- <tr>
170
- <td>How can I contact air1.com?</td>
171
- <td>You can contact air1.com by email at [email protected] or by phone at 888-937-2471. You can also follow air1.com on Facebook, Twitter, Instagram, YouTube, and TikTok.</td>
172
- </tr>
173
- </table>
174
- <br />
175
- <br />
spaces/1phancelerku/anime-remove-background/Enjoy Five Classic Solitaire Card Games on Your Android Device with Microsoft Solitaire Collection.md DELETED
@@ -1,101 +0,0 @@
1
- <br />
2
- <h1>Microsoft Solitaire Collection Android Free Download: How to Play the Classic Card Games on Your Phone</h1>
3
- <h2>Introduction</h2>
4
- <p>If you are a fan of solitaire card games, you might have played or heard of Microsoft Solitaire Collection. It is a collection of five popular solitaire games that you can play on your Windows PC or laptop. But did you know that you can also play Microsoft Solitaire Collection on your Android phone or tablet? In this article, we will show you how to download and play Microsoft Solitaire Collection on Android for free. We will also explain the features and benefits of this app, and answer some frequently asked questions.</p>
5
- <h2>microsoft solitaire collection android free download</h2><br /><p><b><b>Download Zip</b> &#9733;&#9733;&#9733; <a href="https://jinyurl.com/2uNRJ2">https://jinyurl.com/2uNRJ2</a></b></p><br /><br />
6
- <h3>What is Microsoft Solitaire Collection?</h3>
7
- <p>Microsoft Solitaire Collection is an app that offers five of the best solitaire card games in one place. You can choose from Klondike, Spider, FreeCell, TriPeaks, and Pyramid solitaire games, each with different rules and challenges. You can also enjoy daily challenges, events, themes, card backs, achievements, and more. Microsoft Solitaire Collection is fun for players of all ages and skill levels. You can relax with the classics, sharpen your mind, or challenge yourself with different modes and difficulties.</p>
8
- <h3>Why download Microsoft Solitaire Collection on Android?</h3>
9
- <p>There are many reasons why you might want to download Microsoft Solitaire Collection on your Android device. Here are some of them:</p>
10
- <ul>
11
- <li>You can play your favorite solitaire games anytime, anywhere. Whether you are at home, at work, or on the go, you can enjoy a quick game of solitaire on your phone or tablet.</li>
12
- <li>You can access an ad-free game experience with an Xbox Game Pass account. If you have an Xbox Game Pass subscription, you can sign in with your Microsoft account and play without any ads or interruptions.</li>
13
- <li>You can save your progress and sync across devices. You can also sign in with your Microsoft account to save your stats, level, achievements, and rewards. You can also sync your progress across multiple devices with the same account.</li>
14
- <li>You can celebrate over 30 years of fun with Microsoft Solitaire Collection. This app is a tribute to the original solitaire games that were released with Windows in 1990. You can even play with retro card backs from the 1990s version.</li>
15
- </ul>
16
- <h2>How to download Microsoft Solitaire Collection on Android</h2>
17
- <p>Downloading Microsoft Solitaire Collection on your Android device is very easy and simple. Just follow these steps:</p>
18
- <h3>Step 1: Go to Google Play Store</h3>
19
- <p>Open the Google Play Store app on your Android device. You can find it on your home screen or in your app drawer.</p>
20
- <h3>Step 2: Search for Microsoft Solitaire Collection</h3>
21
- <p>In the search bar at the top of the screen, type "Microsoft Solitaire Collection" and tap the magnifying glass icon. You will see a list of results related to your search.</p>
22
- <h3>Step 3: Install the app</h3>
23
- <p>Find the app that has the name "Microsoft Solitaire Collection" and the logo of a blue spade card. Tap on it to open its details page. Then tap on the green "Install" button to start downloading and installing the app on your device. You might need to grant some permissions for the app to work properly.</p>
24
- <p>Once the installation is complete, you can open the app from your home screen or app drawer. You can also tap on the "Open" button on the Google Play Store page.</p>
25
- <h2>How to play Microsoft Solitaire Collection on Android</h2>
26
- <p>Playing Microsoft Solitaire Collection on your Android device is very fun and easy. Here are some tips and tricks to help you get started:</p>
27
- <h3>Choose a game mode</h3>
28
- <p>When you open the app, you will see five icons representing the five solitaire games available. You can tap on any of them to start playing. Each game has its own rules and objectives, but the basic goal is to move all the cards to the foundations or clear the board. Here is a brief overview of each game mode:</p>
29
- <p>microsoft solitaire collection android app<br />
30
- microsoft solitaire collection apk download<br />
31
- microsoft solitaire collection for android cnet<br />
32
- microsoft solitaire collection google play store<br />
33
- microsoft solitaire collection spider solitaire android<br />
34
- microsoft solitaire collection klondike solitaire android<br />
35
- microsoft solitaire collection freecell solitaire android<br />
36
- microsoft solitaire collection tripeaks solitaire android<br />
37
- microsoft solitaire collection pyramid solitaire android<br />
38
- microsoft solitaire collection dark mode android<br />
39
- microsoft solitaire collection classic theme android<br />
40
- microsoft solitaire collection aquarium theme android<br />
41
- microsoft solitaire collection beach theme android<br />
42
- microsoft solitaire collection retro theme android<br />
43
- microsoft solitaire collection daily challenges android<br />
44
- microsoft solitaire collection events and rewards android<br />
45
- microsoft solitaire collection achievements and gamerscore android<br />
46
- microsoft solitaire collection sign in with microsoft account android<br />
47
- microsoft solitaire collection connect with xbox game pass android<br />
48
- microsoft solitaire collection ad-free experience android<br />
49
- microsoft solitaire collection 30 years of fun android<br />
50
- microsoft solitaire collection millions of gamers worldwide android<br />
51
- microsoft solitaire collection simple rules and gameplay android<br />
52
- microsoft solitaire collection relax and enjoy android<br />
53
- microsoft solitaire collection keep your mind sharp android<br />
54
- microsoft solitaire collection challenge yourself android<br />
55
- microsoft solitaire collection traditional scoring android<br />
56
- microsoft solitaire collection vegas scoring android<br />
57
- microsoft solitaire collection one or three card draw android<br />
58
- microsoft solitaire collection single suit or four suits spider android<br />
59
- microsoft solitaire collection four free cell spaces freecell android<br />
60
- microsoft solitaire collection cards in sequence tripeaks android<br />
61
- microsoft solitaire collection combo points tripeaks android<br />
62
- microsoft solitaire collection cards that add up to 13 pyramid android<br />
63
- microsoft solitaire collection earn badges and rewards daily challenges android<br />
64
- microsoft solitaire collection track your progress daily challenges android<br />
65
- microsoft solitaire collection compete with other players daily challenges android<br />
66
- microsoft solitaire collection choose your mood themes and card backs android<br />
67
- microsoft solitaire collection save your stats and level sign in with account android<br />
68
- microsoft solitaire collection pick up where you left off sign in with account android</p>
69
- <h4>Klondike Solitaire</h4>
70
- <p>This is the classic and most popular solitaire game. You have seven columns of cards, and you need to build four foundations from Ace to King in the same suit. You can move cards from one column to another if they are in descending order and alternating colors. You can also draw cards from the stock pile and place them on the waste pile or the columns. You can choose from three difficulty levels: Easy, Medium, and Hard.</p>
71
- <h4>Spider Solitaire</h4>
72
- <p>This is a challenging solitaire game that requires more strategy and skill. You have 10 columns of cards, and you need to clear the board by creating eight runs of cards from King to Ace in the same suit. You can move cards from one column to another if they are in descending order and the same suit. You can also draw cards from the stock pile and place them on any column. You can choose from three difficulty levels: One Suit, Two Suits, and Four Suits.</p>
73
- <h4>FreeCell Solitaire</h4>
74
- <p>This is a solitaire game that tests your logic and patience. You have four free cells, four foundations, and eight columns of cards. You need to build four foundations from Ace to King in the same suit. You can move cards from one column to another if they are in descending order and alternating colors. You can also move cards to the free cells or the foundations. You can choose from four difficulty levels: Easy, Medium, Hard, and Expert.</p>
75
- <h4>TriPeaks Solitaire</h4>
76
- <p>This is a solitaire game that is fast-paced and fun. You have three peaks of cards, and you need to clear them by selecting cards that are one higher or one lower than the card on the waste pile. You can also draw cards from the stock pile and place them on the waste pile. You can choose from two difficulty levels: Normal and Hard.</p>
77
- <h4>Pyramid Solitaire</h4>
78
- <p>This is a solitaire game that is simple and addictive. You have a pyramid of cards, and you need to clear it by selecting pairs of cards that add up to 13. You can also draw cards from the stock pile and place them on the waste pile. You can choose from two difficulty levels: Normal and Hard.</p> <h3>Complete daily challenges and events</h3>
79
- <p>One of the best features of Microsoft Solitaire Collection is that it offers daily challenges and events for you to enjoy. You can earn coins, badges, and rewards by completing various tasks and goals in each game mode. You can also compete with other players around the world and see how you rank on the leaderboards. You can access the daily challenges and events by tapping on the calendar icon on the main menu.</p>
80
- <h3>Customize your theme and card backs</h3>
81
- <p>Another great feature of Microsoft Solitaire Collection is that it allows you to customize your theme and card backs. You can choose from different backgrounds, colors, and styles to suit your mood and preference. You can also unlock new themes and card backs by completing achievements and challenges. You can access the theme and card back options by tapping on the gear icon on the main menu.</p>
82
- <h3>Save your progress and earn achievements</h3>
83
- <p>The last feature we want to mention is that Microsoft Solitaire Collection lets you save your progress and earn achievements. You can sign in with your Microsoft account to sync your data across devices and access your stats, level, coins, badges, and rewards. You can also earn achievements by completing various milestones and challenges in each game mode. You can access your profile and achievements by tapping on the trophy icon on the main menu.</p>
84
- <h2>Conclusion</h2>
85
- <p>In conclusion, Microsoft Solitaire Collection is a fantastic app that offers five of the best solitaire card games in one place. You can play Klondike, Spider, FreeCell, TriPeaks, and Pyramid solitaire games on your Android device for free. You can also enjoy daily challenges, events, themes, card backs, achievements, and more. Microsoft Solitaire Collection is fun for players of all ages and skill levels. You can relax with the classics, sharpen your mind, or challenge yourself with different modes and difficulties. If you are a fan of solitaire games, you should definitely download Microsoft Solitaire Collection on your Android device today.</p>
86
- <h2>FAQs</h2>
87
- <p>Here are some frequently asked questions about Microsoft Solitaire Collection:</p>
88
- <ul>
89
- <li><b>Q: How much space does Microsoft Solitaire Collection take on my device?</b></li>
90
- <li>A: Microsoft Solitaire Collection takes about 100 MB of space on your device. You might need more space if you download additional themes or card backs.</li>
91
- <li><b>Q: How do I contact customer support for Microsoft Solitaire Collection?</b></li>
92
- <li>A: You can contact customer support for Microsoft Solitaire Collection by tapping on the question mark icon on the main menu. You can also visit the official website or the Facebook page for more information and help.</li>
93
- <li><b>Q: How do I turn off the sound or music in Microsoft Solitaire Collection?</b></li>
94
- <li>A: You can turn off the sound or music in Microsoft Solitaire Collection by tapping on the gear icon on the main menu. Then tap on the sound or music icons to toggle them on or off.</li>
95
- <li><b>Q: How do I change the language in Microsoft Solitaire Collection?</b></li>
96
- <li>A: You can change the language in Microsoft Solitaire Collection by tapping on the gear icon on the main menu. Then tap on the language option and select your preferred language from the list.</li>
97
- <li><b>Q: How do I uninstall Microsoft Solitaire Collection from my device?</b></li>
98
- <li>A: You can uninstall Microsoft Solitaire Collection from your device by going to your device settings and finding the app in the list of installed apps. Then tap on it and select "Uninstall".</li>
99
- </ul>
100
- <br />
101
- <br />
spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/safety_checker.py DELETED
@@ -1,125 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import numpy as np
17
- import paddle
18
- import paddle.nn.functional as F
19
-
20
- from paddlenlp.transformers import (
21
- CLIPPretrainedModel,
22
- CLIPVisionConfig,
23
- CLIPVisionModel,
24
- )
25
-
26
- from ...utils import logging
27
-
28
- logger = logging.get_logger(__name__)
29
-
30
-
31
- def cosine_distance(image_embeds, text_embeds):
32
- normalized_image_embeds = F.normalize(image_embeds)
33
- normalized_text_embeds = F.normalize(text_embeds)
34
- return paddle.matmul(normalized_image_embeds, normalized_text_embeds, transpose_y=True)
35
-
36
-
37
- class StableDiffusionSafetyChecker(CLIPPretrainedModel):
38
- config_class = CLIPVisionConfig
39
-
40
- def __init__(self, config: CLIPVisionConfig):
41
- super().__init__(config)
42
- self.clip = CLIPVisionModel(config)
43
- self.vision_projection = paddle.create_parameter(
44
- (config.hidden_size, config.projection_dim), dtype=paddle.get_default_dtype()
45
- )
46
-
47
- self.register_buffer("concept_embeds", paddle.ones([17, config.projection_dim]))
48
- self.register_buffer("special_care_embeds", paddle.ones([3, config.projection_dim]))
49
-
50
- self.register_buffer("concept_embeds_weights", paddle.ones([17]))
51
- self.register_buffer("special_care_embeds_weights", paddle.ones([3]))
52
-
53
- @paddle.no_grad()
54
- def forward(self, clip_input, images):
55
- pooled_output = self.clip(clip_input)[1] # pooled_output
56
- image_embeds = paddle.matmul(pooled_output, self.vision_projection)
57
-
58
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
59
- special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).astype("float32").numpy()
60
- cos_dist = cosine_distance(image_embeds, self.concept_embeds).astype("float32").numpy()
61
-
62
- result = []
63
- batch_size = image_embeds.shape[0]
64
- for i in range(batch_size):
65
- result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []}
66
-
67
- # increase this value to create a stronger `nsfw` filter
68
- # at the cost of increasing the possibility of filtering benign images
69
- adjustment = 0.0
70
-
71
- for concept_idx in range(len(special_cos_dist[0])):
72
- concept_cos = special_cos_dist[i][concept_idx]
73
- concept_threshold = self.special_care_embeds_weights[concept_idx].item()
74
- result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
75
- if result_img["special_scores"][concept_idx] > 0:
76
- result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]})
77
- adjustment = 0.01
78
-
79
- for concept_idx in range(len(cos_dist[0])):
80
- concept_cos = cos_dist[i][concept_idx]
81
- concept_threshold = self.concept_embeds_weights[concept_idx].item()
82
- result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
83
- if result_img["concept_scores"][concept_idx] > 0:
84
- result_img["bad_concepts"].append(concept_idx)
85
-
86
- result.append(result_img)
87
-
88
- has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
89
-
90
- for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
91
- if has_nsfw_concept:
92
- images[idx] = np.zeros(images[idx].shape) # black image
93
-
94
- if any(has_nsfw_concepts):
95
- logger.warning(
96
- "Potential NSFW content was detected in one or more images. A black image will be returned instead."
97
- " Try again with a different prompt and/or seed."
98
- )
99
-
100
- return images, has_nsfw_concepts
101
-
102
- def forward_fastdeploy(self, clip_input: paddle.Tensor, images: paddle.Tensor):
103
- pooled_output = self.clip(clip_input)[1] # pooled_output
104
- image_embeds = paddle.matmul(pooled_output, self.vision_projection)
105
-
106
- special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds)
107
- cos_dist = cosine_distance(image_embeds, self.concept_embeds)
108
-
109
- # increase this value to create a stronger `nsfw` filter
110
- # at the cost of increasing the possibility of filtering benign images
111
- adjustment = 0.0
112
-
113
- special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment
114
- # special_scores = special_scores.round(decimals=3)
115
- special_care = paddle.any(special_scores > 0, axis=1)
116
- special_adjustment = special_care * 0.01
117
- special_adjustment = special_adjustment.unsqueeze(1).expand([-1, cos_dist.shape[1]])
118
-
119
- concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment
120
- # concept_scores = concept_scores.round(decimals=3)
121
- has_nsfw_concepts = paddle.any(concept_scores > 0, axis=1)
122
-
123
- images[has_nsfw_concepts] = 0.0 # black image
124
-
125
- return images, has_nsfw_concepts
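
The checker above takes two inputs, `clip_input` (CLIP-preprocessed pixel values) and the candidate `images` as NumPy arrays, and returns the images with flagged entries blacked out plus a per-image NSFW flag. Below is a minimal smoke-test sketch, assuming `paddlepaddle` and `paddlenlp` are installed; it uses a randomly initialized config and random inputs purely to show the calling convention and return shapes, not the real pretrained checker.

```python
import numpy as np
import paddle
from paddlenlp.transformers import CLIPVisionConfig

# Illustrative only: random weights and inputs; a real pipeline loads a
# pretrained checkpoint whose concept embeddings and thresholds are learned.
config = CLIPVisionConfig()                      # default CLIP vision config
checker = StableDiffusionSafetyChecker(config)   # class defined in the file above

batch = 2
clip_input = paddle.randn([batch, 3, config.image_size, config.image_size])
images = np.random.rand(batch, 512, 512, 3).astype("float32")  # decoded images

checked_images, has_nsfw_concepts = checker(clip_input, images)
print(has_nsfw_concepts)  # one boolean per image; flagged images are zeroed out
```

With untrained weights the flags themselves are meaningless; the point is only how the forward pass is wired.
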
spaces/2023Liu2023/bingo/src/lib/bots/bing/utils.ts DELETED
@@ -1,87 +0,0 @@
1
- import { ChatResponseMessage, BingChatResponse } from './types'
2
-
3
- export function convertMessageToMarkdown(message: ChatResponseMessage): string {
4
- if (message.messageType === 'InternalSearchQuery') {
5
- return message.text
6
- }
7
- for (const card of message.adaptiveCards??[]) {
8
- for (const block of card.body) {
9
- if (block.type === 'TextBlock') {
10
- return block.text
11
- }
12
- }
13
- }
14
- return ''
15
- }
16
-
17
- const RecordSeparator = String.fromCharCode(30)
18
-
19
- export const websocketUtils = {
20
- packMessage(data: any) {
21
- return `${JSON.stringify(data)}${RecordSeparator}`
22
- },
23
- unpackMessage(data: string | ArrayBuffer | Blob) {
24
- if (!data) return {}
25
- return data
26
- .toString()
27
- .split(RecordSeparator)
28
- .filter(Boolean)
29
- .map((s) => {
30
- try {
31
- return JSON.parse(s)
32
- } catch (e) {
33
- return {}
34
- }
35
- })
36
- },
37
- }
38
-
39
- export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
40
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
41
- {
42
- method: 'HEAD',
43
- headers,
44
- redirect: 'manual'
45
- },
46
- );
47
-
48
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
49
- throw new Error('请求异常,请检查 cookie 是否有效')
50
- }
51
-
52
- const resultId = RegExp.$1;
53
- let count = 0
54
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
55
-
56
- do {
57
- await sleep(3000);
58
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
59
-
60
- // @ts-ignore
61
- if (content.headers.get('content-length') > 1) {
62
- const text = await content.text()
63
- return (text?.match(/<img class="mimg"((?!src).)+src="[^"]+/mg)??[])
64
- .map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
65
- .map(img => `![${prompt}](${img})`).join(' ')
66
- }
67
- } while(count ++ < 10);
68
- }
69
-
70
-
71
- export async function* streamAsyncIterable(stream: ReadableStream) {
72
- const reader = stream.getReader()
73
- try {
74
- while (true) {
75
- const { done, value } = await reader.read()
76
- if (done) {
77
- return
78
- }
79
- yield value
80
- }
81
- } finally {
82
- reader.releaseLock()
83
- }
84
- }
85
-
86
- export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
87
-
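
The `packMessage`/`unpackMessage` helpers above frame each JSON payload with the ASCII record separator (character code 30), which is how the Bing chat websocket batches several messages into one frame. For illustration, a sketch of the same framing logic in Python (Python is used only because it is the dominant language elsewhere in this changeset; the function names are my own):

```python
import json

RECORD_SEPARATOR = chr(30)  # same delimiter as String.fromCharCode(30) above

def pack_message(data) -> str:
    """Serialize one payload and terminate it with the record separator."""
    return json.dumps(data) + RECORD_SEPARATOR

def unpack_message(raw: str) -> list:
    """Split a frame into JSON payloads, skipping empty and malformed chunks."""
    messages = []
    for chunk in raw.split(RECORD_SEPARATOR):
        if not chunk:
            continue
        try:
            messages.append(json.loads(chunk))
        except json.JSONDecodeError:
            messages.append({})  # mirror the TypeScript fallback of returning {}
    return messages

# Round trip: two payloads packed into one frame.
frame = pack_message({"type": 6}) + pack_message({"type": 3, "invocationId": "0"})
print(unpack_message(frame))  # [{'type': 6}, {'type': 3, 'invocationId': '0'}]
```
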
spaces/232labs/VToonify/vtoonify/model/encoder/encoders/__init__.py DELETED
File without changes
spaces/4Taps/SadTalker/src/utils/croper.py DELETED
@@ -1,295 +0,0 @@
1
- import os
2
- import cv2
3
- import time
4
- import glob
5
- import argparse
6
- import scipy
7
- import numpy as np
8
- from PIL import Image
9
- from tqdm import tqdm
10
- from itertools import cycle
11
-
12
- from torch.multiprocessing import Pool, Process, set_start_method
13
-
14
-
15
- """
16
- brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset)
17
- author: lzhbrian (https://lzhbrian.me)
18
- date: 2020.1.5
19
- note: code is heavily borrowed from
20
- https://github.com/NVlabs/ffhq-dataset
21
- http://dlib.net/face_landmark_detection.py.html
22
- requirements:
23
- apt install cmake
24
- conda install Pillow numpy scipy
25
- pip install dlib
26
- # download face landmark model from:
27
- # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
28
- """
29
-
30
- import numpy as np
31
- from PIL import Image
32
- import dlib
33
-
34
-
35
- class Croper:
36
- def __init__(self, path_of_lm):
37
- # download model from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
38
- self.predictor = dlib.shape_predictor(path_of_lm)
39
-
40
- def get_landmark(self, img_np):
41
- """get landmark with dlib
42
- :return: np.array shape=(68, 2)
43
- """
44
- detector = dlib.get_frontal_face_detector()
45
- dets = detector(img_np, 1)
46
- # print("Number of faces detected: {}".format(len(dets)))
47
- # for k, d in enumerate(dets):
48
- if len(dets) == 0:
49
- return None
50
- d = dets[0]
51
- # Get the landmarks/parts for the face in box d.
52
- shape = self.predictor(img_np, d)
53
- # print("Part 0: {}, Part 1: {} ...".format(shape.part(0), shape.part(1)))
54
- t = list(shape.parts())
55
- a = []
56
- for tt in t:
57
- a.append([tt.x, tt.y])
58
- lm = np.array(a)
59
- # lm is a shape=(68,2) np.array
60
- return lm
61
-
62
- def align_face(self, img, lm, output_size=1024):
63
- """
64
- :param filepath: str
65
- :return: PIL Image
66
- """
67
- lm_chin = lm[0: 17] # left-right
68
- lm_eyebrow_left = lm[17: 22] # left-right
69
- lm_eyebrow_right = lm[22: 27] # left-right
70
- lm_nose = lm[27: 31] # top-down
71
- lm_nostrils = lm[31: 36] # top-down
72
- lm_eye_left = lm[36: 42] # left-clockwise
73
- lm_eye_right = lm[42: 48] # left-clockwise
74
- lm_mouth_outer = lm[48: 60] # left-clockwise
75
- lm_mouth_inner = lm[60: 68] # left-clockwise
76
-
77
- # Calculate auxiliary vectors.
78
- eye_left = np.mean(lm_eye_left, axis=0)
79
- eye_right = np.mean(lm_eye_right, axis=0)
80
- eye_avg = (eye_left + eye_right) * 0.5
81
- eye_to_eye = eye_right - eye_left
82
- mouth_left = lm_mouth_outer[0]
83
- mouth_right = lm_mouth_outer[6]
84
- mouth_avg = (mouth_left + mouth_right) * 0.5
85
- eye_to_mouth = mouth_avg - eye_avg
86
-
87
- # Choose oriented crop rectangle.
88
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] # add the eye-to-eye and (rotated) eye-to-mouth difference vectors
89
- x /= np.hypot(*x) # np.hypot gives the hypotenuse of a right triangle; use it to normalize the vector's two components
90
- x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) # take the larger of the eye-to-eye and eye-to-mouth spans as the base scale
91
- y = np.flipud(x) * [-1, 1]
92
- c = eye_avg + eye_to_mouth * 0.1
93
- quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) # define the quad: shift from the face reference point in four directions to get the corners
94
- qsize = np.hypot(*x) * 2 # quad size (side length), twice the base scale
95
-
96
- # Shrink.
97
- # if the computed quad is too large, shrink it proportionally
98
- shrink = int(np.floor(qsize / output_size * 0.5))
99
- if shrink > 1:
100
- rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
101
- img = img.resize(rsize, Image.ANTIALIAS)
102
- quad /= shrink
103
- qsize /= shrink
104
-
105
- # Crop.
106
- border = max(int(np.rint(qsize * 0.1)), 3)
107
- crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
108
- int(np.ceil(max(quad[:, 1]))))
109
- crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
110
- min(crop[3] + border, img.size[1]))
111
- if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
112
- # img = img.crop(crop)
113
- quad -= crop[0:2]
114
-
115
- # Pad.
116
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
117
- int(np.ceil(max(quad[:, 1]))))
118
- pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
119
- max(pad[3] - img.size[1] + border, 0))
120
- # if enable_padding and max(pad) > border - 4:
121
- # pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
122
- # img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
123
- # h, w, _ = img.shape
124
- # y, x, _ = np.ogrid[:h, :w, :1]
125
- # mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
126
- # 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
127
- # blur = qsize * 0.02
128
- # img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
129
- # img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
130
- # img = Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
131
- # quad += pad[:2]
132
-
133
- # Transform.
134
- quad = (quad + 0.5).flatten()
135
- lx = max(min(quad[0], quad[2]), 0)
136
- ly = max(min(quad[1], quad[7]), 0)
137
- rx = min(max(quad[4], quad[6]), img.size[0])
138
- ry = min(max(quad[3], quad[5]), img.size[0])
139
- # img = img.transform((transform_size, transform_size), Image.QUAD, (quad + 0.5).flatten(),
140
- # Image.BILINEAR)
141
- # if output_size < transform_size:
142
- # img = img.resize((output_size, output_size), Image.ANTIALIAS)
143
-
144
- # Save aligned image.
145
- return crop, [lx, ly, rx, ry]
146
-
147
- # def crop(self, img_np_list):
148
- # for _i in range(len(img_np_list)):
149
- # img_np = img_np_list[_i]
150
- # lm = self.get_landmark(img_np)
151
- # if lm is None:
152
- # return None
153
- # crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=512)
154
- # clx, cly, crx, cry = crop
155
- # lx, ly, rx, ry = quad
156
- # lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
157
-
158
- # _inp = img_np_list[_i]
159
- # _inp = _inp[cly:cry, clx:crx]
160
- # _inp = _inp[ly:ry, lx:rx]
161
- # img_np_list[_i] = _inp
162
- # return img_np_list
163
-
164
- def crop(self, img_np_list, xsize=512): # first frame for all video
165
- img_np = img_np_list[0]
166
- lm = self.get_landmark(img_np)
167
- if lm is None:
168
- return None
169
- crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=xsize)
170
- clx, cly, crx, cry = crop
171
- lx, ly, rx, ry = quad
172
- lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
173
- for _i in range(len(img_np_list)):
174
- _inp = img_np_list[_i]
175
- _inp = _inp[cly:cry, clx:crx]
176
- # cv2.imwrite('test1.jpg', _inp)
177
- _inp = _inp[ly:ry, lx:rx]
178
- # cv2.imwrite('test2.jpg', _inp)
179
- img_np_list[_i] = _inp
180
- return img_np_list, crop, quad
181
-
182
-
183
- def read_video(filename, uplimit=100):
184
- frames = []
185
- cap = cv2.VideoCapture(filename)
186
- cnt = 0
187
- while cap.isOpened():
188
- ret, frame = cap.read()
189
- if ret:
190
- frame = cv2.resize(frame, (512, 512))
191
- frames.append(frame)
192
- else:
193
- break
194
- cnt += 1
195
- if cnt >= uplimit:
196
- break
197
- cap.release()
198
- assert len(frames) > 0, f'{filename}: video with no frames!'
199
- return frames
200
-
201
-
202
- def create_video(video_name, frames, fps=25, video_format='.mp4', resize_ratio=1):
203
- # video_name = os.path.dirname(image_folder) + video_format
204
- # img_list = glob.glob1(image_folder, 'frame*')
205
- # img_list.sort()
206
- # frame = cv2.imread(os.path.join(image_folder, img_list[0]))
207
- # frame = cv2.resize(frame, (0, 0), fx=resize_ratio, fy=resize_ratio)
208
- # height, width, layers = frames[0].shape
209
- height, width, layers = 512, 512, 3
210
- if video_format == '.mp4':
211
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
212
- elif video_format == '.avi':
213
- fourcc = cv2.VideoWriter_fourcc(*'XVID')
214
- video = cv2.VideoWriter(video_name, fourcc, fps, (width, height))
215
- for _frame in frames:
216
- _frame = cv2.resize(_frame, (height, width), interpolation=cv2.INTER_LINEAR)
217
- video.write(_frame)
218
-
219
- def create_images(video_name, frames):
220
- height, width, layers = 512, 512, 3
221
- images_dir = video_name.split('.')[0]
222
- os.makedirs(images_dir, exist_ok=True)
223
- for i, _frame in enumerate(frames):
224
- _frame = cv2.resize(_frame, (height, width), interpolation=cv2.INTER_LINEAR)
225
- _frame_path = os.path.join(images_dir, str(i)+'.jpg')
226
- cv2.imwrite(_frame_path, _frame)
227
-
228
- def run(data):
229
- filename, opt, device = data
230
- os.environ['CUDA_VISIBLE_DEVICES'] = device
231
- croper = Croper()
232
-
233
- frames = read_video(filename, uplimit=opt.uplimit)
234
- name = filename.split('/')[-1] # .split('.')[0]
235
- name = os.path.join(opt.output_dir, name)
236
-
237
- frames = croper.crop(frames)
238
- if frames is None:
239
- print(f'{name}: detect no face. should removed')
240
- return
241
- # create_video(name, frames)
242
- create_images(name, frames)
243
-
244
-
245
- def get_data_path(video_dir):
246
- eg_video_files = ['/apdcephfs/share_1290939/quincheng/datasets/HDTF/backup_fps25/WDA_KatieHill_000.mp4']
247
- # filenames = list()
248
- # VIDEO_EXTENSIONS_LOWERCASE = {'mp4'}
249
- # VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE})
250
- # extensions = VIDEO_EXTENSIONS
251
- # for ext in extensions:
252
- # filenames = sorted(glob.glob(f'{opt.input_dir}/**/*.{ext}'))
253
- # print('Total number of videos:', len(filenames))
254
- return eg_video_files
255
-
256
-
257
- def get_wra_data_path(video_dir):
258
- if opt.option == 'video':
259
- videos_path = sorted(glob.glob(f'{video_dir}/*.mp4'))
260
- elif opt.option == 'image':
261
- videos_path = sorted(glob.glob(f'{video_dir}/*/'))
262
- else:
263
- raise NotImplementedError
264
- print('Example videos: ', videos_path[:2])
265
- return videos_path
266
-
267
-
268
- if __name__ == '__main__':
269
- set_start_method('spawn')
270
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
271
- parser.add_argument('--input_dir', type=str, help='the folder of the input files')
272
- parser.add_argument('--output_dir', type=str, help='the folder of the output files')
273
- parser.add_argument('--device_ids', type=str, default='0,1')
274
- parser.add_argument('--workers', type=int, default=8)
275
- parser.add_argument('--uplimit', type=int, default=500)
276
- parser.add_argument('--option', type=str, default='video')
277
-
278
- root = '/apdcephfs/share_1290939/quincheng/datasets/HDTF'
279
- cmd = f'--input_dir {root}/backup_fps25_first20s_sync/ ' \
280
- f'--output_dir {root}/crop512_stylegan_firstframe_sync/ ' \
281
- '--device_ids 0 ' \
282
- '--workers 8 ' \
283
- '--option video ' \
284
- '--uplimit 500 '
285
- opt = parser.parse_args(cmd.split())
286
- # filenames = get_data_path(opt.input_dir)
287
- filenames = get_wra_data_path(opt.input_dir)
288
- os.makedirs(opt.output_dir, exist_ok=True)
289
- print(f'Video numbers: {len(filenames)}')
290
- pool = Pool(opt.workers)
291
- args_list = cycle([opt])
292
- device_ids = opt.device_ids.split(",")
293
- device_ids = cycle(device_ids)
294
- for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))):
295
- None
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/nets_new.py DELETED
@@ -1,132 +0,0 @@
1
- import torch
2
- from torch import nn
3
- import torch.nn.functional as F
4
- from uvr5_pack.lib_v5 import layers_new as layers
5
-
6
-
7
- class BaseNet(nn.Module):
8
- def __init__(
9
- self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6))
10
- ):
11
- super(BaseNet, self).__init__()
12
- self.enc1 = layers.Conv2DBNActiv(nin, nout, 3, 1, 1)
13
- self.enc2 = layers.Encoder(nout, nout * 2, 3, 2, 1)
14
- self.enc3 = layers.Encoder(nout * 2, nout * 4, 3, 2, 1)
15
- self.enc4 = layers.Encoder(nout * 4, nout * 6, 3, 2, 1)
16
- self.enc5 = layers.Encoder(nout * 6, nout * 8, 3, 2, 1)
17
-
18
- self.aspp = layers.ASPPModule(nout * 8, nout * 8, dilations, dropout=True)
19
-
20
- self.dec4 = layers.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1)
21
- self.dec3 = layers.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1)
22
- self.dec2 = layers.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1)
23
- self.lstm_dec2 = layers.LSTMModule(nout * 2, nin_lstm, nout_lstm)
24
- self.dec1 = layers.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1)
25
-
26
- def __call__(self, x):
27
- e1 = self.enc1(x)
28
- e2 = self.enc2(e1)
29
- e3 = self.enc3(e2)
30
- e4 = self.enc4(e3)
31
- e5 = self.enc5(e4)
32
-
33
- h = self.aspp(e5)
34
-
35
- h = self.dec4(h, e4)
36
- h = self.dec3(h, e3)
37
- h = self.dec2(h, e2)
38
- h = torch.cat([h, self.lstm_dec2(h)], dim=1)
39
- h = self.dec1(h, e1)
40
-
41
- return h
42
-
43
-
44
- class CascadedNet(nn.Module):
45
- def __init__(self, n_fft, nout=32, nout_lstm=128):
46
- super(CascadedNet, self).__init__()
47
-
48
- self.max_bin = n_fft // 2
49
- self.output_bin = n_fft // 2 + 1
50
- self.nin_lstm = self.max_bin // 2
51
- self.offset = 64
52
-
53
- self.stg1_low_band_net = nn.Sequential(
54
- BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm),
55
- layers.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0),
56
- )
57
-
58
- self.stg1_high_band_net = BaseNet(
59
- 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2
60
- )
61
-
62
- self.stg2_low_band_net = nn.Sequential(
63
- BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm),
64
- layers.Conv2DBNActiv(nout, nout // 2, 1, 1, 0),
65
- )
66
- self.stg2_high_band_net = BaseNet(
67
- nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2
68
- )
69
-
70
- self.stg3_full_band_net = BaseNet(
71
- 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm
72
- )
73
-
74
- self.out = nn.Conv2d(nout, 2, 1, bias=False)
75
- self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False)
76
-
77
- def forward(self, x):
78
- x = x[:, :, : self.max_bin]
79
-
80
- bandw = x.size()[2] // 2
81
- l1_in = x[:, :, :bandw]
82
- h1_in = x[:, :, bandw:]
83
- l1 = self.stg1_low_band_net(l1_in)
84
- h1 = self.stg1_high_band_net(h1_in)
85
- aux1 = torch.cat([l1, h1], dim=2)
86
-
87
- l2_in = torch.cat([l1_in, l1], dim=1)
88
- h2_in = torch.cat([h1_in, h1], dim=1)
89
- l2 = self.stg2_low_band_net(l2_in)
90
- h2 = self.stg2_high_band_net(h2_in)
91
- aux2 = torch.cat([l2, h2], dim=2)
92
-
93
- f3_in = torch.cat([x, aux1, aux2], dim=1)
94
- f3 = self.stg3_full_band_net(f3_in)
95
-
96
- mask = torch.sigmoid(self.out(f3))
97
- mask = F.pad(
98
- input=mask,
99
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
100
- mode="replicate",
101
- )
102
-
103
- if self.training:
104
- aux = torch.cat([aux1, aux2], dim=1)
105
- aux = torch.sigmoid(self.aux_out(aux))
106
- aux = F.pad(
107
- input=aux,
108
- pad=(0, 0, 0, self.output_bin - aux.size()[2]),
109
- mode="replicate",
110
- )
111
- return mask, aux
112
- else:
113
- return mask
114
-
115
- def predict_mask(self, x):
116
- mask = self.forward(x)
117
-
118
- if self.offset > 0:
119
- mask = mask[:, :, :, self.offset : -self.offset]
120
- assert mask.size()[3] > 0
121
-
122
- return mask
123
-
124
- def predict(self, x, aggressiveness=None):
125
- mask = self.forward(x)
126
- pred_mag = x * mask
127
-
128
- if self.offset > 0:
129
- pred_mag = pred_mag[:, :, :, self.offset : -self.offset]
130
- assert pred_mag.size()[3] > 0
131
-
132
- return pred_mag
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/syntaspeech/syntactic_graph_encoder.py DELETED
@@ -1,193 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
-
5
- import dgl
6
- from dgl.nn.pytorch import GatedGraphConv
7
-
8
- def sequence_mask(lengths, maxlen, dtype=torch.bool):
9
- if maxlen is None:
10
- maxlen = lengths.max()
11
- mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t()
12
- mask.type(dtype)
13
- return mask
14
-
15
-
16
- def group_hidden_by_segs(h, seg_ids, max_len):
17
- """
18
- :param h: [B, T, H]
19
- :param seg_ids: [B, T]
20
- :return: h_ph: [B, T_ph, H]
21
- """
22
- B, T, H = h.shape
23
- h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h)
24
- all_ones = h.new_ones(h.shape[:2])
25
- cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous()
26
- h_gby_segs = h_gby_segs[:, 1:]
27
- cnt_gby_segs = cnt_gby_segs[:, 1:]
28
- h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1)
29
- # assert h_gby_segs.shape[-1] == 192
30
- return h_gby_segs
31
-
32
- class GraphAuxEnc(nn.Module):
33
- def __init__(self, in_dim, hid_dim, out_dim, n_iterations=5, n_edge_types=6):
34
- super(GraphAuxEnc, self).__init__()
35
- self.in_dim = in_dim
36
- self.hid_dim = hid_dim
37
- self.out_dim = out_dim
38
- self.skip_connect = True
39
- self.dropout_after_gae = False
40
-
41
- self.ggc_1 = GatedGraphConv(in_feats=in_dim, out_feats=hid_dim
42
- , n_steps=n_iterations, n_etypes=n_edge_types)
43
- self.ggc_2 = GatedGraphConv(in_feats=hid_dim, out_feats=out_dim
44
- , n_steps=n_iterations, n_etypes=n_edge_types)
45
- self.dropout = nn.Dropout(p=0.5)
46
-
47
- @staticmethod
48
- def ph_encoding_to_word_encoding(ph_encoding, ph2word, word_len):
49
- """
50
- ph_encoding: [batch, t_p, hid]
51
- ph2word: tensor [batch, t_w]
52
- word_len: tensor [batch]
53
- """
54
- word_encoding_for_graph, batch_word_encoding, has_word_row_idx = GraphAuxEnc._process_ph_to_word_encoding(
55
- ph_encoding,
56
- ph2word,
57
- word_len)
58
- # [batch, t_w, hid]
59
- return batch_word_encoding, word_encoding_for_graph
60
-
61
- def pad_word_encoding_to_phoneme(self, word_encoding, ph2word, t_p):
62
- return self._postprocess_word2ph(word_encoding, ph2word, t_p)
63
-
64
- @staticmethod
65
- def _process_ph_to_word_encoding(ph_encoding, ph2word, word_len=None):
66
- """
67
- ph_encoding: [batch, t_p, hid]
68
- ph2word: tensor [batch, t_w]
69
- word_len: tensor [batch]
70
- """
71
- word_len = word_len.reshape([-1,])
72
- max_len = max(word_len)
73
- num_nodes = sum(word_len)
74
-
75
- batch_word_encoding = group_hidden_by_segs(ph_encoding, ph2word, max_len)
76
- bs, t_p, hid = batch_word_encoding.shape
77
- has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1]
78
- word_encoding = batch_word_encoding.reshape([bs * t_p, hid])
79
- has_word_row_idx = has_word_mask.reshape([-1])
80
- word_encoding = word_encoding[has_word_row_idx]
81
- assert word_encoding.shape[0] == num_nodes
82
- return word_encoding, batch_word_encoding, has_word_row_idx
83
-
84
- @staticmethod
85
- def _postprocess_word2ph(word_encoding, ph2word, t_p):
86
- word_encoding = F.pad(word_encoding,[0,0,1,0])
87
- ph2word_ = ph2word[:, :, None].repeat([1, 1, word_encoding.shape[-1]])
88
- out = torch.gather(word_encoding, 1, ph2word_) # [B, T, H]
89
- return out
90
-
91
- @staticmethod
92
- def _repeat_one_sequence(x, d, T):
93
- """Repeat each frame according to duration."""
94
- if d.sum() == 0:
95
- d = d.fill_(1)
96
- hid = x.shape[-1]
97
- expanded_lst = [x_.repeat(int(d_), 1) for x_, d_ in zip(x, d) if d_ != 0]
98
- expanded = torch.cat(expanded_lst, dim=0)
99
- if T > expanded.shape[0]:
100
- expanded = torch.cat([expanded, torch.zeros([T - expanded.shape[0], hid]).to(expanded.device)], dim=0)
101
- return expanded
102
-
103
- def word_forward(self, graph_lst, word_encoding, etypes_lst):
104
- """
105
- word encoding in, word encoding out.
106
- """
107
- batched_graph = dgl.batch(graph_lst)
108
- inp = word_encoding
109
- batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1]
110
- assert batched_graph.num_nodes() == inp.shape[0]
111
-
112
- gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes)
113
- if self.dropout_after_gae:
114
- gcc1_out = self.dropout(gcc1_out)
115
- gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin]
116
- if self.dropout_after_gae:
117
- gcc2_out = self.ggc_2(batched_graph, gcc2_out, batched_etypes)
118
- if self.skip_connect:
119
- assert self.in_dim == self.hid_dim and self.hid_dim == self.out_dim
120
- gcc2_out = inp + gcc1_out + gcc2_out
121
-
122
- word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1])
123
- max_len = max(word_len)
124
- has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1]
125
- has_word_row_idx = has_word_mask.reshape([-1])
126
- bs = len(graph_lst)
127
- t_w = max([g.num_nodes() for g in graph_lst])
128
- hid = word_encoding.shape[-1]
129
- output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device)
130
- output[has_word_row_idx] = gcc2_out
131
- output = output.reshape([bs, t_w, hid])
132
- word_level_output = output
133
- return torch.transpose(word_level_output, 1, 2)
134
-
135
- def forward(self, graph_lst, ph_encoding, ph2word, etypes_lst, return_word_encoding=False):
136
- """
137
- graph_lst: [list of dgl_graph]
138
- ph_encoding: [batch, hid, t_p]
139
- ph2word: [list of list[1,2,2,2,3,3,3]]
140
- etypes_lst: [list of etypes]; etypes: torch.LongTensor
141
- """
142
- t_p = ph_encoding.shape[-1]
143
- ph_encoding = ph_encoding.transpose(1,2) # [batch, t_p, hid]
144
- word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1])
145
- batched_graph = dgl.batch(graph_lst)
146
- inp, batched_word_encoding, has_word_row_idx = self._process_ph_to_word_encoding(ph_encoding, ph2word,
147
- word_len=word_len) # [num_nodes_in_batch, in_dim]
148
- bs, t_w, hid = batched_word_encoding.shape
149
- batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1]
150
- gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes)
151
- gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin]
152
- # skip connection
153
- gcc2_out = inp + gcc1_out + gcc2_out # [n_nodes, hid]
154
-
155
- output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device)
156
- output[has_word_row_idx] = gcc2_out
157
- output = output.reshape([bs, t_w, hid])
158
- word_level_output = output
159
- output = self._postprocess_word2ph(word_level_output, ph2word, t_p) # [batch, t_p, hid]
160
- output = torch.transpose(output, 1, 2)
161
-
162
- if return_word_encoding:
163
- return output, torch.transpose(word_level_output, 1, 2)
164
- else:
165
- return output
166
-
167
- if __name__ == '__main__':
168
- # Unit Test for batching graphs
169
- from modules.syntaspeech.syntactic_graph_buider import Sentence2GraphParser, plot_dgl_sentence_graph
170
- parser = Sentence2GraphParser("en")
171
-
172
- # Unit Test for English Graph Builder
173
- text1 = "To be or not to be , that 's a question ."
174
- text2 = "I love you . You love me . Mixue ice-scream and tea ."
175
- graph1, etypes1 = parser.parse(text1)
176
- graph2, etypes2 = parser.parse(text2)
177
- batched_text = "<BOS> " + text1 + " <EOS>" + " " + "<BOS> " + text2 + " <EOS>"
178
- batched_nodes = [graph1.num_nodes(), graph2.num_nodes()]
179
- plot_dgl_sentence_graph(dgl.batch([graph1, graph2]), {i: w for i, w in enumerate(batched_text.split(" "))})
180
- etypes_lst = [etypes1, etypes2]
181
-
182
- # Unit Test for Graph Encoder forward
183
- in_feats = 4
184
- out_feats = 4
185
- enc = GraphAuxEnc(in_dim=in_feats, hid_dim=in_feats, out_dim=out_feats)
186
- ph2word = torch.tensor([
187
- [1, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0],
188
- [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
189
- ])
190
- inp = torch.randn([2, in_feats, 17]) # [N_sentence, feat, ph_length]
191
- graph_lst = [graph1, graph2]
192
- out = enc(graph_lst, inp, ph2word, etypes_lst)
193
- print(out.shape) # [N_sentence, feat, ph_length]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/models.py DELETED
@@ -1,1288 +0,0 @@
1
- # !/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
- # @Time : 2021/3/9 16:33
4
- # @Author : dongchao yang
5
- # @File : train.py
6
- from itertools import zip_longest
7
- import numpy as np
8
- from scipy import ndimage
9
- import torch
10
- import torch.nn as nn
11
- import torch.nn.functional as F
12
- import time
13
- from torchlibrosa.augmentation import SpecAugmentation
14
- from torchlibrosa.stft import Spectrogram, LogmelFilterBank
15
- import math
16
- from sklearn.cluster import KMeans
17
- import os
18
- import time
19
- from functools import partial
20
- # import timm
21
- # from timm.models.layers import DropPath, to_2tuple, trunc_normal_
22
- import warnings
23
- from functools import partial
24
- # from timm.models.registry import register_model
25
- # from timm.models.vision_transformer import _cfg
26
- # from mmdet.utils import get_root_logger
27
- # from mmcv.runner import load_checkpoint
28
- # from mmcv.runner import _load_checkpoint, load_state_dict
29
- # import mmcv.runner
30
- import copy
31
- from collections import OrderedDict
32
- import io
33
- import re
34
- DEBUG=0
35
- event_labels = ['Alarm', 'Alarm_clock', 'Animal', 'Applause', 'Arrow', 'Artillery_fire',
36
- 'Babbling', 'Baby_laughter', 'Bark', 'Basketball_bounce', 'Battle_cry',
37
- 'Bell', 'Bird', 'Bleat', 'Bouncing', 'Breathing', 'Buzz', 'Camera',
38
- 'Cap_gun', 'Car', 'Car_alarm', 'Cat', 'Caw', 'Cheering', 'Child_singing',
39
- 'Choir', 'Chop', 'Chopping_(food)', 'Clapping', 'Clickety-clack', 'Clicking',
40
- 'Clip-clop', 'Cluck', 'Coin_(dropping)', 'Computer_keyboard', 'Conversation',
41
- 'Coo', 'Cough', 'Cowbell', 'Creak', 'Cricket', 'Croak', 'Crow', 'Crowd', 'DTMF',
42
- 'Dog', 'Door', 'Drill', 'Drip', 'Engine', 'Engine_starting', 'Explosion', 'Fart',
43
- 'Female_singing', 'Filing_(rasp)', 'Finger_snapping', 'Fire', 'Fire_alarm', 'Firecracker',
44
- 'Fireworks', 'Frog', 'Gasp', 'Gears', 'Giggle', 'Glass', 'Glass_shatter', 'Gobble', 'Groan',
45
- 'Growling', 'Hammer', 'Hands', 'Hiccup', 'Honk', 'Hoot', 'Howl', 'Human_sounds', 'Human_voice',
46
- 'Insect', 'Laughter', 'Liquid', 'Machine_gun', 'Male_singing', 'Mechanisms', 'Meow', 'Moo',
47
- 'Motorcycle', 'Mouse', 'Music', 'Oink', 'Owl', 'Pant', 'Pant_(dog)', 'Patter', 'Pig', 'Plop',
48
- 'Pour', 'Power_tool', 'Purr', 'Quack', 'Radio', 'Rain_on_surface', 'Rapping', 'Rattle',
49
- 'Reversing_beeps', 'Ringtone', 'Roar', 'Run', 'Rustle', 'Scissors', 'Scrape', 'Scratch',
50
- 'Screaming', 'Sewing_machine', 'Shout', 'Shuffle', 'Shuffling_cards', 'Singing',
51
- 'Single-lens_reflex_camera', 'Siren', 'Skateboard', 'Sniff', 'Snoring', 'Speech',
52
- 'Speech_synthesizer', 'Spray', 'Squeak', 'Squeal', 'Steam', 'Stir', 'Surface_contact',
53
- 'Tap', 'Tap_dance', 'Telephone_bell_ringing', 'Television', 'Tick', 'Tick-tock', 'Tools',
54
- 'Train', 'Train_horn', 'Train_wheels_squealing', 'Truck', 'Turkey', 'Typewriter', 'Typing',
55
- 'Vehicle', 'Video_game_sound', 'Water', 'Whimper_(dog)', 'Whip', 'Whispering', 'Whistle',
56
- 'Whistling', 'Whoop', 'Wind', 'Writing', 'Yip', 'and_pans', 'bird_song', 'bleep', 'clink',
57
- 'cock-a-doodle-doo', 'crinkling', 'dove', 'dribble', 'eructation', 'faucet', 'flapping_wings',
58
- 'footsteps', 'gunfire', 'heartbeat', 'infant_cry', 'kid_speaking', 'man_speaking', 'mastication',
59
- 'mice', 'river', 'rooster', 'silverware', 'skidding', 'smack', 'sobbing', 'speedboat', 'splatter',
60
- 'surf', 'thud', 'thwack', 'toot', 'truck_horn', 'tweet', 'vroom', 'waterfowl', 'woman_speaking']
61
- def load_checkpoint(model,
62
- filename,
63
- map_location=None,
64
- strict=False,
65
- logger=None,
66
- revise_keys=[(r'^module\.', '')]):
67
- """Load checkpoint from a file or URI.
68
- Args:
69
- model (Module): Module to load checkpoint.
70
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
71
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
72
- details.
73
- map_location (str): Same as :func:`torch.load`.
74
- strict (bool): Whether to allow different params for the model and
75
- checkpoint.
76
- logger (:mod:`logging.Logger` or None): The logger for error message.
77
- revise_keys (list): A list of customized keywords to modify the
78
- state_dict in checkpoint. Each item is a (pattern, replacement)
79
- pair of the regular expression operations. Default: strip
80
- the prefix 'module.' by [(r'^module\\.', '')].
81
- Returns:
82
- dict or OrderedDict: The loaded checkpoint.
83
- """
84
-
85
- checkpoint = _load_checkpoint(filename, map_location, logger)
86
- '''
87
- new_proj = torch.nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
88
- new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1))
89
- checkpoint['patch_embed1.proj.weight'] = new_proj.weight
90
- new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=2).unsqueeze(2).repeat(1,1,3,1))
91
- checkpoint['patch_embed1.proj.weight'] = new_proj.weight
92
- new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=3).unsqueeze(3).repeat(1,1,1,3))
93
- checkpoint['patch_embed1.proj.weight'] = new_proj.weight
94
- '''
95
- new_proj = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2))
96
- new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1))
97
- checkpoint['patch_embed1.proj.weight'] = new_proj.weight
98
- # OrderedDict is a subclass of dict
99
- if not isinstance(checkpoint, dict):
100
- raise RuntimeError(
101
- f'No state_dict found in checkpoint file {filename}')
102
- # get state_dict from checkpoint
103
- if 'state_dict' in checkpoint:
104
- state_dict = checkpoint['state_dict']
105
- else:
106
- state_dict = checkpoint
107
-
108
- # strip prefix of state_dict
109
- metadata = getattr(state_dict, '_metadata', OrderedDict())
110
- for p, r in revise_keys:
111
- state_dict = OrderedDict(
112
- {re.sub(p, r, k): v
113
- for k, v in state_dict.items()})
114
- state_dict = OrderedDict({k.replace('backbone.',''):v for k,v in state_dict.items()})
115
- # Keep metadata in state_dict
116
- state_dict._metadata = metadata
117
-
118
- # load state_dict
119
- load_state_dict(model, state_dict, strict, logger)
120
- return checkpoint
121
-
122
- def init_weights(m):
123
- if isinstance(m, (nn.Conv2d, nn.Conv1d)):
124
- nn.init.kaiming_normal_(m.weight)
125
- if m.bias is not None:
126
- nn.init.constant_(m.bias, 0)
127
- elif isinstance(m, nn.BatchNorm2d):
128
- nn.init.constant_(m.weight, 1)
129
- if m.bias is not None:
130
- nn.init.constant_(m.bias, 0)
131
- if isinstance(m, nn.Linear):
132
- nn.init.kaiming_uniform_(m.weight)
133
- if m.bias is not None:
134
- nn.init.constant_(m.bias, 0)
135
- def init_layer(layer):
136
- """Initialize a Linear or Convolutional layer. """
137
- nn.init.xavier_uniform_(layer.weight)
138
- if hasattr(layer, 'bias'):
139
- if layer.bias is not None:
140
- layer.bias.data.fill_(0.)
141
-
142
-
143
- def init_bn(bn):
144
- """Initialize a Batchnorm layer. """
145
- bn.bias.data.fill_(0.)
146
- bn.weight.data.fill_(1.)
147
-
148
- class MaxPool(nn.Module):
149
- def __init__(self, pooldim=1):
150
- super().__init__()
151
- self.pooldim = pooldim
152
-
153
- def forward(self, logits, decision):
154
- return torch.max(decision, dim=self.pooldim)[0]
155
-
156
-
157
- class LinearSoftPool(nn.Module):
158
- """LinearSoftPool
159
- Linear softmax, takes logits and returns a probability, near to the actual maximum value.
160
- Taken from the paper:
161
- A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling
162
- https://arxiv.org/abs/1810.09050
163
- """
164
- def __init__(self, pooldim=1):
165
- super().__init__()
166
- self.pooldim = pooldim
167
-
168
- def forward(self, logits, time_decision):
169
- return (time_decision**2).sum(self.pooldim) / (time_decision.sum(
170
- self.pooldim)+1e-7)
171
-
172
- class ConvBlock(nn.Module):
173
- def __init__(self, in_channels, out_channels):
174
-
175
- super(ConvBlock, self).__init__()
176
-
177
- self.conv1 = nn.Conv2d(in_channels=in_channels,
178
- out_channels=out_channels,
179
- kernel_size=(3, 3), stride=(1, 1),
180
- padding=(1, 1), bias=False)
181
-
182
- self.conv2 = nn.Conv2d(in_channels=out_channels,
183
- out_channels=out_channels,
184
- kernel_size=(3, 3), stride=(1, 1),
185
- padding=(1, 1), bias=False)
186
-
187
- self.bn1 = nn.BatchNorm2d(out_channels)
188
- self.bn2 = nn.BatchNorm2d(out_channels)
189
-
190
- self.init_weight()
191
-
192
- def init_weight(self):
193
- init_layer(self.conv1)
194
- init_layer(self.conv2)
195
- init_bn(self.bn1)
196
- init_bn(self.bn2)
197
-
198
-
199
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
200
-
201
- x = input
202
- x = F.relu_(self.bn1(self.conv1(x)))
203
- x = F.relu_(self.bn2(self.conv2(x)))
204
- if pool_type == 'max':
205
- x = F.max_pool2d(x, kernel_size=pool_size)
206
- elif pool_type == 'avg':
207
- x = F.avg_pool2d(x, kernel_size=pool_size)
208
- elif pool_type == 'avg+max':
209
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
210
- x2 = F.max_pool2d(x, kernel_size=pool_size)
211
- x = x1 + x2
212
- else:
213
- raise Exception('Incorrect argument!')
214
-
215
- return x
216
-
217
- class ConvBlock_GLU(nn.Module):
218
- def __init__(self, in_channels, out_channels,kernel_size=(3,3)):
219
- super(ConvBlock_GLU, self).__init__()
220
- self.conv1 = nn.Conv2d(in_channels=in_channels,
221
- out_channels=out_channels,
222
- kernel_size=kernel_size, stride=(1, 1),
223
- padding=(1, 1), bias=False)
224
- self.bn1 = nn.BatchNorm2d(out_channels)
225
- self.sigmoid = nn.Sigmoid()
226
- self.init_weight()
227
-
228
- def init_weight(self):
229
- init_layer(self.conv1)
230
- init_bn(self.bn1)
231
-
232
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
233
- x = input
234
- x = self.bn1(self.conv1(x))
235
- cnn1 = self.sigmoid(x[:, :x.shape[1]//2, :, :])
236
- cnn2 = x[:,x.shape[1]//2:,:,:]
237
- x = cnn1*cnn2
238
- if pool_type == 'max':
239
- x = F.max_pool2d(x, kernel_size=pool_size)
240
- elif pool_type == 'avg':
241
- x = F.avg_pool2d(x, kernel_size=pool_size)
242
- elif pool_type == 'avg+max':
243
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
244
- x2 = F.max_pool2d(x, kernel_size=pool_size)
245
- x = x1 + x2
246
- elif pool_type == 'None':
247
- pass
248
- elif pool_type == 'LP':
249
- pass
250
- #nn.LPPool2d(4, pool_size)
251
- else:
252
- raise Exception('Incorrect argument!')
253
- return x
254
-
255
- class Mul_scale_GLU(nn.Module):
256
- def __init__(self):
257
- super(Mul_scale_GLU,self).__init__()
258
- self.conv_block1_1 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(1,1)) # 1*1
259
- self.conv_block1_2 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(3,3)) # 3*3
260
- self.conv_block1_3 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(5,5)) # 5*5
261
- self.conv_block2 = ConvBlock_GLU(in_channels=96, out_channels=128*2)
262
- # self.conv_block3 = ConvBlock(in_channels=64, out_channels=128)
263
- self.conv_block3 = ConvBlock_GLU(in_channels=128, out_channels=128*2)
264
- self.conv_block4 = ConvBlock_GLU(in_channels=128, out_channels=256*2)
265
- self.conv_block5 = ConvBlock_GLU(in_channels=256, out_channels=256*2)
266
- self.conv_block6 = ConvBlock_GLU(in_channels=256, out_channels=512*2)
267
- self.conv_block7 = ConvBlock_GLU(in_channels=512, out_channels=512*2)
268
- self.padding = nn.ReplicationPad2d((0,1,0,1))
269
-
270
- def forward(self, input, fi=None):
271
- """
272
- Input: (batch_size, data_length)"""
273
- x1 = self.conv_block1_1(input, pool_size=(2, 2), pool_type='avg')
274
- x1 = x1[:,:,:500,:32]
275
- #print('x1 ',x1.shape)
276
- x2 = self.conv_block1_2(input,pool_size=(2,2),pool_type='avg')
277
- #print('x2 ',x2.shape)
278
- x3 = self.conv_block1_3(input,pool_size=(2,2),pool_type='avg')
279
- x3 = self.padding(x3)
280
- #print('x3 ',x3.shape)
281
- # assert 1==2
282
- x = torch.cat([x1,x2],dim=1)
283
- x = torch.cat([x,x3],dim=1)
284
- #print('x ',x.shape)
285
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='None')
286
- x = self.conv_block3(x,pool_size=(2,2),pool_type='avg')
287
- x = F.dropout(x, p=0.2, training=self.training) #
288
- #print('x2,3 ',x.shape)
289
- x = self.conv_block4(x, pool_size=(2, 4), pool_type='None')
290
- x = self.conv_block5(x,pool_size=(2,4),pool_type='avg')
291
- x = F.dropout(x, p=0.2, training=self.training)
292
- #print('x4,5 ',x.shape)
293
-
294
- x = self.conv_block6(x, pool_size=(1, 4), pool_type='None')
295
- x = self.conv_block7(x, pool_size=(1, 4), pool_type='avg')
296
- x = F.dropout(x, p=0.2, training=self.training)
297
- # print('x6,7 ',x.shape)
298
- # assert 1==2
299
- return x
300
-
301
- class Cnn14(nn.Module):
302
- def __init__(self, sample_rate=32000, window_size=1024, hop_size=320, mel_bins=64, fmin=50,
303
- fmax=14000, classes_num=527):
304
-
305
- super(Cnn14, self).__init__()
306
-
307
- window = 'hann'
308
- center = True
309
- pad_mode = 'reflect'
310
- ref = 1.0
311
- amin = 1e-10
312
- top_db = None
313
-
314
- # Spectrogram extractor
315
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
316
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
317
- freeze_parameters=True)
318
-
319
- # Logmel feature extractor
320
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
321
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
322
- freeze_parameters=True)
323
-
324
- # Spec augmenter
325
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
326
- freq_drop_width=8, freq_stripes_num=2)
327
-
328
- self.bn0 = nn.BatchNorm2d(64)
329
-
330
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
331
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
332
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
333
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
334
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
335
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
336
-
337
- self.fc1 = nn.Linear(2048, 128, bias=True)
338
- self.fc_audioset = nn.Linear(128, classes_num, bias=True)
339
-
340
- self.init_weight()
341
-
342
- def init_weight(self):
343
- init_layer(self.fc1)
344
- init_layer(self.fc_audioset)
345
-
346
- def forward(self, input_, mixup_lambda=None):
347
- """
348
- Input: (batch_size, data_length)"""
349
- input_ = input_.unsqueeze(1)
350
- x = self.conv_block1(input_, pool_size=(2, 2), pool_type='avg')
351
- x = F.dropout(x, p=0.2, training=self.training)
352
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
353
- x = F.dropout(x, p=0.2, training=self.training)
354
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
355
- x = F.dropout(x, p=0.2, training=self.training)
356
- x = self.conv_block4(x, pool_size=(1, 2), pool_type='avg')
357
- x = F.dropout(x, p=0.2, training=self.training)
358
- x = self.conv_block5(x, pool_size=(1, 2), pool_type='avg')
359
- x = F.dropout(x, p=0.2, training=self.training)
360
- x = self.conv_block6(x, pool_size=(1, 2), pool_type='avg')
361
- x = F.dropout(x, p=0.2, training=self.training)
362
- # print(x.shape)
363
- # x = torch.mean(x, dim=3)
364
- x = x.transpose(1, 2).contiguous().flatten(-2)
365
- x = self.fc1(x)
366
- # print(x.shape)
367
- # assert 1==2
368
- # (x1,_) = torch.max(x, dim=2)
369
- # x2 = torch.mean(x, dim=2)
370
- # x = x1 + x2
371
- # x = F.dropout(x, p=0.5, training=self.training)
372
- # x = F.relu_(self.fc1(x))
373
- # embedding = F.dropout(x, p=0.5, training=self.training)
374
- return x
375
-
376
- class Cnn10_fi(nn.Module):
377
- def __init__(self):
378
- super(Cnn10_fi, self).__init__()
379
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
380
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
381
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
382
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
383
-
384
- # self.fc1 = nn.Linear(512, 512, bias=True)
385
- # self.fc_audioset = nn.Linear(512, classes_num, bias=True)
386
-
387
- # self.init_weight()
388
-
389
- def forward(self, input, fi=None):
390
- """
391
- Input: (batch_size, data_length)"""
392
-
393
- x = self.conv_block1(input, pool_size=(2, 2), pool_type='avg')
394
- if fi != None:
395
- gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
396
- beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
397
- x = (gamma)*x + beta
398
- x = F.dropout(x, p=0.2, training=self.training)
399
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
400
- if fi != None:
401
- gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
402
- beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
403
- x = (gamma)*x + beta
404
- x = F.dropout(x, p=0.2, training=self.training)
405
- x = self.conv_block3(x, pool_size=(2, 4), pool_type='avg')
406
- if fi != None:
407
- gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
408
- beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
409
- x = (gamma)*x + beta
410
- x = F.dropout(x, p=0.2, training=self.training)
411
- x = self.conv_block4(x, pool_size=(1, 4), pool_type='avg')
412
- if fi != None:
413
- gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
414
- beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x)
415
- x = (gamma)*x + beta
416
- x = F.dropout(x, p=0.2, training=self.training)
417
- return x
418
-
419
- class Cnn10_mul_scale(nn.Module):
420
- def __init__(self,scale=8):
421
- super(Cnn10_mul_scale, self).__init__()
422
- self.conv_block1_1 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(1,1))
423
- self.conv_block1_2 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(3,3))
424
- self.conv_block1_3 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(5,5))
425
- self.conv_block2 = ConvBlock(in_channels=96, out_channels=128)
426
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
427
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
428
- self.scale = scale
429
- self.padding = nn.ReplicationPad2d((0,1,0,1))
430
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
431
- """
432
- Input: (batch_size, data_length)"""
433
- if self.scale == 8:
434
- pool_size1 = (2,2)
435
- pool_size2 = (2,2)
436
- pool_size3 = (2,4)
437
- pool_size4 = (1,4)
438
- elif self.scale == 4:
439
- pool_size1 = (2,2)
440
- pool_size2 = (2,2)
441
- pool_size3 = (1,4)
442
- pool_size4 = (1,4)
443
- elif self.scale == 2:
444
- pool_size1 = (2,2)
445
- pool_size2 = (1,2)
446
- pool_size3 = (1,4)
447
- pool_size4 = (1,4)
448
- else:
449
- pool_size1 = (1,2)
450
- pool_size2 = (1,2)
451
- pool_size3 = (1,4)
452
- pool_size4 = (1,4)
453
- # print('input ',input.shape)
454
- x1 = self.conv_block1_1(input, pool_size=pool_size1, pool_type='avg')
455
- x1 = x1[:,:,:500,:32]
456
- #print('x1 ',x1.shape)
457
- x2 = self.conv_block1_2(input, pool_size=pool_size1, pool_type='avg')
458
- #print('x2 ',x2.shape)
459
- x3 = self.conv_block1_3(input, pool_size=pool_size1, pool_type='avg')
460
- x3 = self.padding(x3)
461
- #print('x3 ',x3.shape)
462
- # assert 1==2
463
- m_i = min(x3.shape[2],min(x1.shape[2],x2.shape[2]))
464
- #print('m_i ', m_i)
465
- x = torch.cat([x1[:,:,:m_i,:],x2[:,:, :m_i,:],x3[:,:, :m_i,:]],dim=1)
466
- # x = torch.cat([x,x3],dim=1)
467
-
468
- # x = self.conv_block1(input, pool_size=pool_size1, pool_type='avg')
469
- x = F.dropout(x, p=0.2, training=self.training)
470
- x = self.conv_block2(x, pool_size=pool_size2, pool_type='avg')
471
- x = F.dropout(x, p=0.2, training=self.training)
472
- x = self.conv_block3(x, pool_size=pool_size3, pool_type='avg')
473
- x = F.dropout(x, p=0.2, training=self.training)
474
- x = self.conv_block4(x, pool_size=pool_size4, pool_type='avg')
475
- x = F.dropout(x, p=0.2, training=self.training)
476
- return x
477
-
478
-
479
- class Cnn10(nn.Module):
480
- def __init__(self,scale=8):
481
- super(Cnn10, self).__init__()
482
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
483
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
484
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
485
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
486
- self.scale = scale
487
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
488
- """
489
- Input: (batch_size, data_length)"""
490
- if self.scale == 8:
491
- pool_size1 = (2,2)
492
- pool_size2 = (2,2)
493
- pool_size3 = (2,4)
494
- pool_size4 = (1,4)
495
- elif self.scale == 4:
496
- pool_size1 = (2,2)
497
- pool_size2 = (2,2)
498
- pool_size3 = (1,4)
499
- pool_size4 = (1,4)
500
- elif self.scale == 2:
501
- pool_size1 = (2,2)
502
- pool_size2 = (1,2)
503
- pool_size3 = (1,4)
504
- pool_size4 = (1,4)
505
- else:
506
- pool_size1 = (1,2)
507
- pool_size2 = (1,2)
508
- pool_size3 = (1,4)
509
- pool_size4 = (1,4)
510
- x = self.conv_block1(input, pool_size=pool_size1, pool_type='avg')
511
- x = F.dropout(x, p=0.2, training=self.training)
512
- x = self.conv_block2(x, pool_size=pool_size2, pool_type='avg')
513
- x = F.dropout(x, p=0.2, training=self.training)
514
- x = self.conv_block3(x, pool_size=pool_size3, pool_type='avg')
515
- x = F.dropout(x, p=0.2, training=self.training)
516
- x = self.conv_block4(x, pool_size=pool_size4, pool_type='avg')
517
- x = F.dropout(x, p=0.2, training=self.training)
518
- return x
519
-
520
- class MeanPool(nn.Module):
521
- def __init__(self, pooldim=1):
522
- super().__init__()
523
- self.pooldim = pooldim
524
-
525
- def forward(self, logits, decision):
526
- return torch.mean(decision, dim=self.pooldim)
527
-
528
- class ResPool(nn.Module):
529
- def __init__(self, pooldim=1):
530
- super().__init__()
531
- self.pooldim = pooldim
532
- self.linPool = LinearSoftPool(pooldim=1)
533
-
534
- class AutoExpPool(nn.Module):
535
- def __init__(self, outputdim=10, pooldim=1):
536
- super().__init__()
537
- self.outputdim = outputdim
538
- self.alpha = nn.Parameter(torch.full((outputdim, ), 1))
539
- self.pooldim = pooldim
540
-
541
- def forward(self, logits, decision):
542
- scaled = self.alpha * decision # \alpha * P(Y|x) in the paper
543
- return (logits * torch.exp(scaled)).sum(
544
- self.pooldim) / torch.exp(scaled).sum(self.pooldim)
545
-
546
-
547
- class SoftPool(nn.Module):
548
- def __init__(self, T=1, pooldim=1):
549
- super().__init__()
550
- self.pooldim = pooldim
551
- self.T = T
552
-
553
- def forward(self, logits, decision):
554
- w = torch.softmax(decision / self.T, dim=self.pooldim)
555
- return torch.sum(decision * w, dim=self.pooldim)
556
-
557
-
558
- class AutoPool(nn.Module):
559
- """docstring for AutoPool"""
560
- def __init__(self, outputdim=10, pooldim=1):
561
- super().__init__()
562
- self.outputdim = outputdim
563
- self.alpha = nn.Parameter(torch.ones(outputdim))
564
- self.dim = pooldim
565
-
566
- def forward(self, logits, decision):
567
- scaled = self.alpha * decision # \alpha * P(Y|x) in the paper
568
- weight = torch.softmax(scaled, dim=self.dim)
569
- return torch.sum(decision * weight, dim=self.dim) # B x C
570
-
571
-
572
- class ExtAttentionPool(nn.Module):
573
- def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs):
574
- super().__init__()
575
- self.inputdim = inputdim
576
- self.outputdim = outputdim
577
- self.pooldim = pooldim
578
- self.attention = nn.Linear(inputdim, outputdim)
579
- nn.init.zeros_(self.attention.weight)
580
- nn.init.zeros_(self.attention.bias)
581
- self.activ = nn.Softmax(dim=self.pooldim)
582
-
583
- def forward(self, logits, decision):
584
- # Logits of shape (B, T, D), decision of shape (B, T, C)
585
- w_x = self.activ(self.attention(logits) / self.outputdim)
586
- h = (logits.permute(0, 2, 1).contiguous().unsqueeze(-2) *
587
- w_x.unsqueeze(-1)).flatten(-2).contiguous()
588
- return torch.sum(h, self.pooldim)
589
-
590
-
591
- class AttentionPool(nn.Module):
592
- """docstring for AttentionPool"""
593
- def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs):
594
- super().__init__()
595
- self.inputdim = inputdim
596
- self.outputdim = outputdim
597
- self.pooldim = pooldim
598
- self.transform = nn.Linear(inputdim, outputdim)
599
- self.activ = nn.Softmax(dim=self.pooldim)
600
- self.eps = 1e-7
601
-
602
- def forward(self, logits, decision):
603
- # Input is (B, T, D)
604
- # B, T , D
605
- w = self.activ(torch.clamp(self.transform(logits), -15, 15))
606
- detect = (decision * w).sum(
607
- self.pooldim) / (w.sum(self.pooldim) + self.eps)
608
- # B, T, D
609
- return detect
610
-
611
- class Block2D(nn.Module):
612
- def __init__(self, cin, cout, kernel_size=3, padding=1):
613
- super().__init__()
614
- self.block = nn.Sequential(
615
- nn.BatchNorm2d(cin),
616
- nn.Conv2d(cin,
617
- cout,
618
- kernel_size=kernel_size,
619
- padding=padding,
620
- bias=False),
621
- nn.LeakyReLU(inplace=True, negative_slope=0.1))
622
-
623
- def forward(self, x):
624
- return self.block(x)
625
-
626
- class AudioCNN(nn.Module):
627
- def __init__(self, classes_num):
628
- super(AudioCNN, self).__init__()
629
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
630
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
631
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
632
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
633
- self.fc1 = nn.Linear(512,128,bias=True)
634
- self.fc = nn.Linear(128, classes_num, bias=True)
635
- self.init_weights()
636
-
637
- def init_weights(self):
638
- init_layer(self.fc)
639
-
640
- def forward(self, input):
641
- '''
642
- Input: (batch_size, times_steps, freq_bins)'''
643
- # [128, 801, 168] --> [128,1,801,168]
644
- x = input[:, None, :, :]
645
- '''(batch_size, 1, times_steps, freq_bins)'''
646
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') # 128,64,400,84
647
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') # 128,128,200,42
648
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') # 128,256,100,21
649
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') # 128,512,50,10
650
- '''(batch_size, feature_maps, time_steps, freq_bins)'''
651
- x = torch.mean(x, dim=3) # (batch_size, feature_maps, time_stpes) # 128,512,50
652
- (x, _) = torch.max(x, dim=2) # (batch_size, feature_maps) 128,512
653
- x = self.fc1(x) # 128,128
654
- output = self.fc(x) # 128,10
655
- return x,output
656
-
657
- def extract(self,input):
658
- '''Input: (batch_size, times_steps, freq_bins)'''
659
- x = input[:, None, :, :]
660
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
661
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
662
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
663
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
664
- '''(batch_size, feature_maps, time_steps, freq_bins)'''
665
- x = torch.mean(x, dim=3) # (batch_size, feature_maps, time_stpes)
666
- (x, _) = torch.max(x, dim=2) # (batch_size, feature_maps)
667
- x = self.fc1(x) # 128,128
668
- return x
669
-
670
- def parse_poolingfunction(poolingfunction_name='mean', **kwargs):
671
- """parse_poolingfunction
672
- A heler function to parse any temporal pooling
673
- Pooling is done on dimension 1
674
- :param poolingfunction_name:
675
- :param **kwargs:
676
- """
677
- poolingfunction_name = poolingfunction_name.lower()
678
- if poolingfunction_name == 'mean':
679
- return MeanPool(pooldim=1)
680
- elif poolingfunction_name == 'max':
681
- return MaxPool(pooldim=1)
682
- elif poolingfunction_name == 'linear':
683
- return LinearSoftPool(pooldim=1)
684
- elif poolingfunction_name == 'expalpha':
685
- return AutoExpPool(outputdim=kwargs['outputdim'], pooldim=1)
686
-
687
- elif poolingfunction_name == 'soft':
688
- return SoftPool(pooldim=1)
689
- elif poolingfunction_name == 'auto':
690
- return AutoPool(outputdim=kwargs['outputdim'])
691
- elif poolingfunction_name == 'attention':
692
- return AttentionPool(inputdim=kwargs['inputdim'],
693
- outputdim=kwargs['outputdim'])
694
- class conv1d(nn.Module):
695
- def __init__(self, nin, nout, kernel_size=3, stride=1, padding='VALID', dilation=1):
696
- super(conv1d, self).__init__()
697
- if padding == 'VALID':
698
- dconv_pad = 0
699
- elif padding == 'SAME':
700
- dconv_pad = dilation * ((kernel_size - 1) // 2)
701
- else:
702
- raise ValueError("Padding Mode Error!")
703
- self.conv = nn.Conv1d(nin, nout, kernel_size=kernel_size, stride=stride, padding=dconv_pad)
704
- self.act = nn.ReLU()
705
- self.init_layer(self.conv)
706
-
707
- def init_layer(self, layer, nonlinearity='relu'):
708
- """Initialize a Linear or Convolutional layer. """
709
- nn.init.kaiming_normal_(layer.weight, nonlinearity=nonlinearity)
710
- nn.init.constant_(layer.bias, 0.1)
711
-
712
- def forward(self, x):
713
- out = self.act(self.conv(x))
714
- return out
715
-
716
- class Atten_1(nn.Module):
717
- def __init__(self, input_dim, context=2, dropout_rate=0.2):
718
- super(Atten_1, self).__init__()
719
- self._matrix_k = nn.Linear(input_dim, input_dim // 4)
720
- self._matrix_q = nn.Linear(input_dim, input_dim // 4)
721
- self.relu = nn.ReLU()
722
- self.context = context
723
- self._dropout_layer = nn.Dropout(dropout_rate)
724
- self.init_layer(self._matrix_k)
725
- self.init_layer(self._matrix_q)
726
-
727
- def init_layer(self, layer, nonlinearity='leaky_relu'):
728
- """Initialize a Linear or Convolutional layer. """
729
- nn.init.kaiming_uniform_(layer.weight, nonlinearity=nonlinearity)
730
- if hasattr(layer, 'bias'):
731
- if layer.bias is not None:
732
- layer.bias.data.fill_(0.)
733
-
734
- def forward(self, input_x):
735
- k_x = input_x
736
- k_x = self.relu(self._matrix_k(k_x))
737
- k_x = self._dropout_layer(k_x)
738
- # print('k_x ',k_x.shape)
739
- q_x = input_x[:, self.context, :]
740
- # print('q_x ',q_x.shape)
741
- q_x = q_x[:, None, :]
742
- # print('q_x1 ',q_x.shape)
743
- q_x = self.relu(self._matrix_q(q_x))
744
- q_x = self._dropout_layer(q_x)
745
- # print('q_x2 ',q_x.shape)
746
- x_ = torch.matmul(k_x, q_x.transpose(-2, -1) / math.sqrt(k_x.size(-1)))
747
- # print('x_ ',x_.shape)
748
- x_ = x_.squeeze(2)
749
- alpha = F.softmax(x_, dim=-1)
750
- att_ = alpha
751
- # print('alpha ',alpha)
752
- alpha = alpha.unsqueeze(2).repeat(1,1,input_x.shape[2])
753
- # print('alpha ',alpha)
754
- # alpha = alpha.view(alpha.size(0), alpha.size(1), alpha.size(2), 1)
755
- out = alpha * input_x
756
- # print('out ', out.shape)
757
- # out = out.mean(2)
758
- out = out.mean(1)
759
- # print('out ',out.shape)
760
- # assert 1==2
761
- #y = alpha * input_x
762
- #return y, att_
763
- out = input_x[:, self.context, :] + out
764
- return out
765
-
766
- class Fusion(nn.Module):
767
- def __init__(self, inputdim, inputdim2, n_fac):
768
- super().__init__()
769
- self.fuse_layer1 = conv1d(inputdim, inputdim2*n_fac,1)
770
- self.fuse_layer2 = conv1d(inputdim2, inputdim2*n_fac,1)
771
- self.avg_pool = nn.AvgPool1d(n_fac, stride=n_fac) # 沿着最后一个维度进行pooling
772
-
773
- def forward(self,embedding,mix_embed):
774
- embedding = embedding.permute(0,2,1)
775
- fuse1_out = self.fuse_layer1(embedding) # [2, 501, 2560] ,512*5, 1D卷积融合,spk_embeding ,扩大其维度
776
- fuse1_out = fuse1_out.permute(0,2,1)
777
-
778
- mix_embed = mix_embed.permute(0,2,1)
779
- fuse2_out = self.fuse_layer2(mix_embed) # [2, 501, 2560] ,512*5, 1D卷积融合,spk_embeding ,扩大其维度
780
- fuse2_out = fuse2_out.permute(0,2,1)
781
- as_embs = torch.mul(fuse1_out, fuse2_out) # 相乘 [2, 501, 2560]
782
- # (10, 501, 512)
783
- as_embs = self.avg_pool(as_embs) # [2, 501, 512] 相当于 2560//5
784
- return as_embs
785
-
786
- class CDur_fusion(nn.Module):
787
- def __init__(self, inputdim, outputdim, **kwargs):
788
- super().__init__()
789
- self.features = nn.Sequential(
790
- Block2D(1, 32),
791
- nn.LPPool2d(4, (2, 4)),
792
- Block2D(32, 128),
793
- Block2D(128, 128),
794
- nn.LPPool2d(4, (2, 4)),
795
- Block2D(128, 128),
796
- Block2D(128, 128),
797
- nn.LPPool2d(4, (1, 4)),
798
- nn.Dropout(0.3),
799
- )
800
- with torch.no_grad():
801
- rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
802
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
803
-
804
- self.gru = nn.GRU(128, 128, bidirectional=True, batch_first=True)
805
- self.fusion = Fusion(128,2)
806
- self.fc = nn.Linear(256,256)
807
- self.outputlayer = nn.Linear(256, outputdim)
808
- self.features.apply(init_weights)
809
- self.outputlayer.apply(init_weights)
810
-
811
- def forward(self, x, embedding): #
812
- batch, time, dim = x.shape
813
- x = x.unsqueeze(1) # (b,1,t,d)
814
- x = self.features(x) #
815
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128)
816
- embedding = embedding.unsqueeze(1)
817
- embedding = embedding.repeat(1, x.shape[1], 1)
818
- x = self.fusion(embedding,x)
819
- #x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
820
- if not hasattr(self, '_flattened'):
821
- self.gru.flatten_parameters()
822
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
823
- x = self.fc(x)
824
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
825
- decision_up = torch.nn.functional.interpolate(
826
- decision_time.transpose(1, 2), # [16, 2, 125]
827
- time, # 501
828
- mode='linear',
829
- align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2)
830
- return decision_time[:,:,0],decision_up
831
-
832
- class CDur(nn.Module):
833
- def __init__(self, inputdim, outputdim,time_resolution, **kwargs):
834
- super().__init__()
835
- self.features = nn.Sequential(
836
- Block2D(1, 32),
837
- nn.LPPool2d(4, (2, 4)),
838
- Block2D(32, 128),
839
- Block2D(128, 128),
840
- nn.LPPool2d(4, (2, 4)),
841
- Block2D(128, 128),
842
- Block2D(128, 128),
843
- nn.LPPool2d(4, (2, 4)),
844
- nn.Dropout(0.3),
845
- )
846
- with torch.no_grad():
847
- rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
848
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
849
-
850
- self.gru = nn.GRU(256, 256, bidirectional=True, batch_first=True)
851
- self.fc = nn.Linear(512,256)
852
- self.outputlayer = nn.Linear(256, outputdim)
853
- self.features.apply(init_weights)
854
- self.outputlayer.apply(init_weights)
855
-
856
- def forward(self, x, embedding,one_hot=None): #
857
- batch, time, dim = x.shape
858
- x = x.unsqueeze(1) # (b,1,t,d)
859
- x = self.features(x) #
860
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128)
861
- embedding = embedding.unsqueeze(1)
862
- embedding = embedding.repeat(1, x.shape[1], 1)
863
- x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
864
- if not hasattr(self, '_flattened'):
865
- self.gru.flatten_parameters()
866
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
867
- x = self.fc(x)
868
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
869
- decision_up = torch.nn.functional.interpolate(
870
- decision_time.transpose(1, 2), # [16, 2, 125]
871
- time, # 501
872
- mode='linear',
873
- align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2)
874
- return decision_time[:,:,0],decision_up
875
-
876
- class CDur_big(nn.Module):
877
- def __init__(self, inputdim, outputdim, **kwargs):
878
- super().__init__()
879
- self.features = nn.Sequential(
880
- Block2D(1, 64),
881
- Block2D(64, 64),
882
- nn.LPPool2d(4, (2, 2)),
883
- Block2D(64, 128),
884
- Block2D(128, 128),
885
- nn.LPPool2d(4, (2, 2)),
886
- Block2D(128, 256),
887
- Block2D(256, 256),
888
- nn.LPPool2d(4, (2, 4)),
889
- Block2D(256, 512),
890
- Block2D(512, 512),
891
- nn.LPPool2d(4, (1, 4)),
892
- nn.Dropout(0.3),)
893
- with torch.no_grad():
894
- rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
895
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
896
- self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True)
897
- self.fc = nn.Linear(1024,256)
898
- self.outputlayer = nn.Linear(256, outputdim)
899
- self.features.apply(init_weights)
900
- self.outputlayer.apply(init_weights)
901
-
902
- def forward(self, x, embedding): #
903
- batch, time, dim = x.shape
904
- x = x.unsqueeze(1) # (b,1,t,d)
905
- x = self.features(x) #
906
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512)
907
- embedding = embedding.unsqueeze(1)
908
- embedding = embedding.repeat(1, x.shape[1], 1)
909
- x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
910
- if not hasattr(self, '_flattened'):
911
- self.gru.flatten_parameters()
912
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
913
- x = self.fc(x)
914
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
915
- decision_up = torch.nn.functional.interpolate(
916
- decision_time.transpose(1, 2), # [16, 2, 125]
917
- time, # 501
918
- mode='linear',
919
- align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2)
920
- return decision_time[:,:,0],decision_up
921
-
922
- class CDur_GLU(nn.Module):
923
- def __init__(self, inputdim, outputdim, **kwargs):
924
- super().__init__()
925
- self.features = Mul_scale_GLU()
926
- # with torch.no_grad():
927
- # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
928
- # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
929
- self.gru = nn.GRU(640, 512,1, bidirectional=True, batch_first=True) # previous is 640
930
- # self.gru = LSTMModel(640, 512,1)
931
- self.fc = nn.Linear(1024,256)
932
- self.outputlayer = nn.Linear(256, outputdim)
933
- # self.features.apply(init_weights)
934
- self.outputlayer.apply(init_weights)
935
-
936
- def forward(self, x, embedding,one_hot=None): #
937
- batch, time, dim = x.shape
938
- x = x.unsqueeze(1) # (b,1,t,d)
939
- x = self.features(x) #
940
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512)
941
- # print('x ',x.shape)
942
- # assert 1==2
943
- embedding = embedding.unsqueeze(1)
944
- embedding = embedding.repeat(1, x.shape[1], 1)
945
-
946
- x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
947
- if not hasattr(self, '_flattened'):
948
- self.gru.flatten_parameters()
949
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
950
- # x = self.gru(x) # x torch.Size([16, 125, 256])
951
- x = self.fc(x)
952
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
953
- decision_up = torch.nn.functional.interpolate(
954
- decision_time.transpose(1, 2), # [16, 2, 125]
955
- time, # 501
956
- mode='linear',
957
- align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2)
958
- return decision_time[:,:,0],decision_up
959
-
960
- class CDur_CNN14(nn.Module):
961
- def __init__(self, inputdim, outputdim,time_resolution,**kwargs):
962
- super().__init__()
963
- if time_resolution==125:
964
- self.features = Cnn10(8)
965
- elif time_resolution == 250:
966
- #print('time_resolution ',time_resolution)
967
- self.features = Cnn10(4)
968
- elif time_resolution == 500:
969
- self.features = Cnn10(2)
970
- else:
971
- self.features = Cnn10(0)
972
- with torch.no_grad():
973
- rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
974
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
975
- # self.features = Cnn10()
976
- self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True)
977
- # self.gru = LSTMModel(640, 512,1)
978
- self.fc = nn.Linear(1024,256)
979
- self.outputlayer = nn.Linear(256, outputdim)
980
- # self.features.apply(init_weights)
981
- self.outputlayer.apply(init_weights)
982
-
983
- def forward(self, x, embedding,one_hot=None):
984
- batch, time, dim = x.shape
985
- x = x.unsqueeze(1) # (b,1,t,d)
986
- x = self.features(x) #
987
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512)
988
- # print('x ',x.shape)
989
- # assert 1==2
990
- embedding = embedding.unsqueeze(1)
991
- embedding = embedding.repeat(1, x.shape[1], 1)
992
- x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
993
- if not hasattr(self, '_flattened'):
994
- self.gru.flatten_parameters()
995
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
996
- # x = self.gru(x) # x torch.Size([16, 125, 256])
997
- x = self.fc(x)
998
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
999
- decision_up = torch.nn.functional.interpolate(
1000
- decision_time.transpose(1, 2), # [16, 2, 125]
1001
- time, # 501
1002
- mode='linear',
1003
- align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2)
1004
- return decision_time[:,:,0],decision_up
1005
-
1006
- class CDur_CNN_mul_scale(nn.Module):
1007
- def __init__(self, inputdim, outputdim,time_resolution,**kwargs):
1008
- super().__init__()
1009
- if time_resolution==125:
1010
- self.features = Cnn10_mul_scale(8)
1011
- elif time_resolution == 250:
1012
- #print('time_resolution ',time_resolution)
1013
- self.features = Cnn10_mul_scale(4)
1014
- elif time_resolution == 500:
1015
- self.features = Cnn10_mul_scale(2)
1016
- else:
1017
- self.features = Cnn10_mul_scale(0)
1018
- # with torch.no_grad():
1019
- # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
1020
- # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
1021
- # self.features = Cnn10()
1022
- self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True)
1023
- # self.gru = LSTMModel(640, 512,1)
1024
- self.fc = nn.Linear(1024,256)
1025
- self.outputlayer = nn.Linear(256, outputdim)
1026
- # self.features.apply(init_weights)
1027
- self.outputlayer.apply(init_weights)
1028
-
1029
- def forward(self, x, embedding,one_hot=None):
1030
- # print('x ',x.shape)
1031
- # assert 1==2
1032
- batch, time, dim = x.shape
1033
- x = x.unsqueeze(1) # (b,1,t,d)
1034
- x = self.features(x) #
1035
- x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512)
1036
- # print('x ',x.shape)
1037
- # assert 1==2
1038
- embedding = embedding.unsqueeze(1)
1039
- embedding = embedding.repeat(1, x.shape[1], 1)
1040
- x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
1041
- if not hasattr(self, '_flattened'):
1042
- self.gru.flatten_parameters()
1043
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
1044
- # x = self.gru(x) # x torch.Size([16, 125, 256])
1045
- x = self.fc(x)
1046
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
1047
- decision_up = torch.nn.functional.interpolate(
1048
- decision_time.transpose(1, 2), # [16, 2, 125]
1049
- time, # 501
1050
- mode='linear',
1051
- align_corners=False).transpose(1, 2) # interpolate from 125 frames back to 501 --> (16,501,2)
1052
- return decision_time[:,:,0],decision_up
1053
-
1054
- class CDur_CNN_mul_scale_fusion(nn.Module):
1055
- def __init__(self, inputdim, outputdim, time_resolution,**kwargs):
1056
- super().__init__()
1057
- if time_resolution==125:
1058
- self.features = Cnn10_mul_scale(8)
1059
- elif time_resolution == 250:
1060
- #print('time_resolution ',time_resolution)
1061
- self.features = Cnn10_mul_scale(4)
1062
- elif time_resolution == 500:
1063
- self.features = Cnn10_mul_scale(2)
1064
- else:
1065
- self.features = Cnn10_mul_scale(0)
1066
- # with torch.no_grad():
1067
- # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape
1068
- # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
1069
- # self.features = Cnn10()
1070
- self.gru = nn.GRU(512, 512, bidirectional=True, batch_first=True)
1071
- # self.gru = LSTMModel(640, 512,1)
1072
- self.fc = nn.Linear(1024,256)
1073
- self.fusion = Fusion(128,512,2)
1074
- self.outputlayer = nn.Linear(256, outputdim)
1075
- # self.features.apply(init_weights)
1076
- self.outputlayer.apply(init_weights)
1077
-
1078
- def forward(self, x, embedding,one_hot=None):
1079
- # print('x ',x.shape)
1080
- # assert 1==2
1081
- batch, time, dim = x.shape
1082
- x = x.unsqueeze(1) # (b,1,t,d)
1083
- x = self.features(x) #
1084
- x = x.transpose(1, 2).contiguous().flatten(-2) # make a contiguous copy of x, then flatten dims -2 and -1 # (b,125,512)
1085
- # print('x ',x.shape)
1086
- # assert 1==2
1087
- embedding = embedding.unsqueeze(1)
1088
- embedding = embedding.repeat(1, x.shape[1], 1)
1089
- x = self.fusion(embedding, x)
1090
- #x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
1091
- if not hasattr(self, '_flattened'):
1092
- self.gru.flatten_parameters()
1093
- x, _ = self.gru(x) # x torch.Size([16, 125, 256])
1094
- # x = self.gru(x) # x torch.Size([16, 125, 256])
1095
- x = self.fc(x)
1096
- decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
1097
- decision_up = torch.nn.functional.interpolate(
1098
- decision_time.transpose(1, 2), # [16, 2, 125]
1099
- time, # 501
1100
- mode='linear',
1101
- align_corners=False).transpose(1, 2) # interpolate from 125 frames back to 501 --> (16,501,2)
1102
- return decision_time[:,:,0],decision_up
1103
-
1104
-
1105
- class RaDur_fusion(nn.Module):
1106
- def __init__(self, model_config, inputdim, outputdim, time_resolution, **kwargs):
1107
- super().__init__()
1108
- self.encoder = Cnn14()
1109
- self.detection = CDur_CNN_mul_scale_fusion(inputdim, outputdim, time_resolution)
1110
- self.softmax = nn.Softmax(dim=2)
1111
- #self.temperature = 5
1112
- # if model_config['pre_train']:
1113
- # self.encoder.load_state_dict(torch.load(model_config['encoder_path'])['model'])
1114
- # self.detection.load_state_dict(torch.load(model_config['CDur_path']))
1115
-
1116
- self.q = nn.Linear(128,128)
1117
- self.k = nn.Linear(128,128)
1118
- self.q_ee = nn.Linear(128, 128)
1119
- self.k_ee = nn.Linear(128, 128)
1120
- self.temperature = 11.3 # sqrt(128)
1121
- self.att_pool = model_config['att_pool']
1122
- self.enhancement = model_config['enhancement']
1123
- self.tao = model_config['tao']
1124
- self.top = model_config['top']
1125
- self.bn = nn.BatchNorm1d(128)
1126
- self.EE_fusion = Fusion(128, 128, 4)
1127
-
1128
- def get_w(self,q,k):
1129
- q = self.q(q)
1130
- k = self.k(k)
1131
- q = q.unsqueeze(1)
1132
- attn = torch.bmm(q, k.transpose(1, 2))
1133
- attn = attn/self.temperature
1134
- attn = self.softmax(attn)
1135
- return attn
1136
-
1137
- def get_w_ee(self,q,k):
1138
- q = self.q_ee(q)
1139
- k = self.k_ee(k)
1140
- q = q.unsqueeze(1)
1141
- attn = torch.bmm(q, k.transpose(1, 2))
1142
- attn = attn/self.temperature
1143
- attn = self.softmax(attn)
1144
- return attn
1145
-
1146
- def attention_pooling(self, embeddings, mean_embedding):
1147
- att_pool_w = self.get_w(mean_embedding,embeddings)
1148
- embedding = torch.bmm(att_pool_w, embeddings).squeeze(1)
1149
- # print(embedding.shape)
1150
- # print(att_pool_w.shape)
1151
- # print(att_pool_w[0])
1152
- # assert 1==2
1153
- return embedding
1154
-
1155
- def select_topk_embeddings(self, scores, embeddings, k):
1156
- _, idx_DESC = scores.sort(descending=True, dim=1) # sort scores in descending order
1157
- top_k = _[:,:k]
1158
- # print('top_k ', top_k)
1159
- # top_k = top_k.mean(1)
1160
- idx_topk = idx_DESC[:, :k] # keep the top-k indices
1161
- # print('index ', idx_topk)
1162
- idx_topk = idx_topk.unsqueeze(2).expand([-1, -1, embeddings.shape[2]])
1163
- selected_embeddings = torch.gather(embeddings, 1, idx_topk)
1164
- return selected_embeddings,top_k
1165
-
1166
- def sum_with_attention(self, embedding, top_k, selected_embeddings):
1167
- # print('embedding ',embedding)
1168
- # print('selected_embeddings ',selected_embeddings.shape)
1169
- att_1 = self.get_w_ee(embedding, selected_embeddings)
1170
- att_1 = att_1.squeeze(1)
1171
- #print('att_1 ',att_1.shape)
1172
- larger = top_k > self.tao
1173
- # print('larger ',larger)
1174
- top_k = top_k*larger
1175
- # print('top_k ',top_k.shape)
1176
- # print('top_k ',top_k)
1177
- att_1 = att_1*top_k
1178
- #print('att_1 ',att_1.shape)
1179
- # assert 1==2
1180
- att_2 = att_1.unsqueeze(2).repeat(1,1,128)
1181
- Es = selected_embeddings*att_2
1182
- return Es
1183
-
1184
- def orcal_EE(self, x, embedding, label):
1185
- batch, time, dim = x.shape
1186
-
1187
- mixture_embedding = self.encoder(x) # 8, 125, 128
1188
- mixture_embedding = mixture_embedding.transpose(1,2)
1189
- mixture_embedding = self.bn(mixture_embedding)
1190
- mixture_embedding = mixture_embedding.transpose(1,2)
1191
-
1192
- x = x.unsqueeze(1) # (b,1,t,d)
1193
- x = self.detection.features(x) #
1194
- x = x.transpose(1, 2).contiguous().flatten(-2) # make a contiguous copy of x, then flatten dims -2 and -1 # (b,125,128)
1195
- embedding_pre = embedding.unsqueeze(1)
1196
- embedding_pre = embedding_pre.repeat(1, x.shape[1], 1)
1197
- f = self.detection.fusion(embedding_pre, x) # the first stage results
1198
- #f = torch.cat((x, embedding_pre), dim=2) # [B, T, 128 + emb_dim]
1199
- if not hasattr(self, '_flattened'):
1200
- self.detection.gru.flatten_parameters()
1201
- f, _ = self.detection.gru(f) # x torch.Size([16, 125, 256])
1202
- f = self.detection.fc(f)
1203
- decision_time = torch.softmax(self.detection.outputlayer(f),dim=2) # x torch.Size([16, 125, 2])
1204
-
1205
- selected_embeddings, top_k = self.select_topk_embeddings(decision_time[:,:,0], mixture_embedding, self.top)
1206
-
1207
- selected_embeddings = self.sum_with_attention(embedding, top_k, selected_embeddings) # add the weight
1208
-
1209
- mix_embedding = selected_embeddings.mean(1).unsqueeze(1) #
1210
- mix_embedding = mix_embedding.repeat(1, x.shape[1], 1)
1211
- embedding = embedding.unsqueeze(1)
1212
- embedding = embedding.repeat(1, x.shape[1], 1)
1213
- mix_embedding = self.EE_fusion(mix_embedding, embedding) # fuse the two embeddings with a small network
1214
- # mix_embedding2 = selected_embeddings2.mean(1)
1215
- #mix_embedding = embedding + mix_embedding # alternative: simple element-wise addition
1216
- # new detection results
1217
- # embedding_now = mix_embedding.unsqueeze(1)
1218
- # embedding_now = embedding_now.repeat(1, x.shape[1], 1)
1219
- f_now = self.detection.fusion(mix_embedding, x)
1220
- #f_now = torch.cat((x, embedding_now), dim=2) #
1221
- f_now, _ = self.detection.gru(f_now) # x torch.Size([16, 125, 256])
1222
- f_now = self.detection.fc(f_now)
1223
- decision_time_now = torch.softmax(self.detection.outputlayer(f_now), dim=2) # x torch.Size([16, 125, 2])
1224
-
1225
- top_k = top_k.mean(1) # get the average score; higher scores carry more weight
1226
- larger = top_k > self.tao
1227
- top_k = top_k * larger
1228
- top_k = top_k/2.0
1229
- # print('top_k ',top_k)
1230
- # assert 1==2
1231
- # print('tok_k[ ',top_k.shape)
1232
- # print('decision_time ',decision_time.shape)
1233
- # print('decision_time_now ',decision_time_now.shape)
1234
- neg_w = top_k.unsqueeze(1).unsqueeze(2)
1235
- neg_w = neg_w.repeat(1, decision_time_now.shape[1], decision_time_now.shape[2])
1236
- # print('neg_w ',neg_w.shape)
1237
- #print('neg_w ',neg_w[:,0:10,0])
1238
- pos_w = 1-neg_w
1239
- #print('pos_w ',pos_w[:,0:10,0])
1240
- decision_time_final = decision_time*pos_w + neg_w*decision_time_now
1241
- #print('decision_time_final ',decision_time_final[0,0:10,0])
1242
- # print(decision_time_final[0,:,:])
1243
- #assert 1==2
1244
- return decision_time_final
1245
-
1246
- def forward(self, x, ref, label=None):
1247
- batch, time, dim = x.shape
1248
- logit = torch.zeros(1).cuda()
1249
- embeddings = self.encoder(ref)
1250
- mean_embedding = embeddings.mean(1)
1251
- if self.att_pool == True:
1252
- mean_embedding = self.bn(mean_embedding)
1253
- embeddings = embeddings.transpose(1,2)
1254
- embeddings = self.bn(embeddings)
1255
- embeddings = embeddings.transpose(1,2)
1256
- embedding = self.attention_pooling(embeddings, mean_embedding)
1257
- else:
1258
- embedding = mean_embedding
1259
- if self.enhancement == True:
1260
- decision_time = self.orcal_EE(x, embedding, label)
1261
- decision_up = torch.nn.functional.interpolate(
1262
- decision_time.transpose(1, 2), # [16, 2, 125]
1263
- time, # 501
1264
- mode='linear',
1265
- align_corners=False).transpose(1, 2) # interpolate from 125 frames back to 501 --> (16,501,2)
1266
- return decision_time[:,:,0], decision_up, logit
1267
-
1268
- x = x.unsqueeze(1) # (b,1,t,d)
1269
- x = self.detection.features(x) #
1270
- x = x.transpose(1, 2).contiguous().flatten(-2) # make a contiguous copy of x, then flatten dims -2 and -1 # (b,125,128)
1271
- embedding = embedding.unsqueeze(1)
1272
- embedding = embedding.repeat(1, x.shape[1], 1)
1273
- # x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
1274
- x = self.detection.fusion(embedding, x)
1275
- # embedding = embedding.unsqueeze(1)
1276
- # embedding = embedding.repeat(1, x.shape[1], 1)
1277
- # x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim]
1278
- if not hasattr(self, '_flattened'):
1279
- self.detection.gru.flatten_parameters()
1280
- x, _ = self.detection.gru(x) # x torch.Size([16, 125, 256])
1281
- x = self.detection.fc(x)
1282
- decision_time = torch.softmax(self.detection.outputlayer(x),dim=2) # x torch.Size([16, 125, 2])
1283
- decision_up = torch.nn.functional.interpolate(
1284
- decision_time.transpose(1, 2),
1285
- time, # 501
1286
- mode='linear',
1287
- align_corners=False).transpose(1, 2) # interpolate from 125 frames back to 501 --> (16,501,2)
1288
- return decision_time[:,:,0], decision_up, logit
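That closes the diff for this model file. RaDur_fusion is the class that the TargetSoundDetection wrapper in audio_foundation_models.py (further down in this diff) instantiates by name. A hedged construction sketch follows; the four config keys are exactly the ones __init__ reads above, the values and feature shapes are illustrative, and a GPU is assumed because forward() allocates a CUDA tensor unconditionally.

import torch
from target_sound_detection.src import models  # same import path the wrapper class uses

model_config = {'att_pool': True, 'enhancement': True, 'tao': 0.6, 'top': 20}  # illustrative values
model = models.RaDur_fusion(model_config, inputdim=64, outputdim=2, time_resolution=125).cuda().eval()

x = torch.randn(2, 501, 64).cuda()    # mixture log-mel features (assumed layout)
ref = torch.randn(2, 501, 64).cuda()  # reference-clip features fed to the Cnn14 conditioning encoder
with torch.no_grad():
    frame_prob, decision_up, logit = model(x, ref)  # logit is a placeholder zero tensor in this code path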
 
spaces/AIGC-Audio/AudioGPT/audio_foundation_models.py DELETED
@@ -1,1033 +0,0 @@
1
- import sys
2
- import os
3
-
4
- sys.path.append(os.path.dirname(os.path.realpath(__file__)))
5
- sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
6
- sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'NeuralSeq'))
7
- sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'text_to_audio/Make_An_Audio'))
8
- sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'audio_detection'))
9
- sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'mono2binaural'))
10
- import matplotlib
11
- import librosa
12
- from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation
13
- import torch
14
- from diffusers import StableDiffusionPipeline
15
- from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
16
- import re
17
- import uuid
18
- import soundfile
19
- from diffusers import StableDiffusionInpaintPipeline
20
- from PIL import Image
21
- import numpy as np
22
- from omegaconf import OmegaConf
23
- from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering
24
- import cv2
25
- import einops
26
- from einops import repeat
27
- from pytorch_lightning import seed_everything
28
- import random
29
- from ldm.util import instantiate_from_config
30
- from ldm.data.extract_mel_spectrogram import TRANSFORMS_16000
31
- from pathlib import Path
32
- from vocoder.hifigan.modules import VocoderHifigan
33
- from vocoder.bigvgan.models import VocoderBigVGAN
34
- from ldm.models.diffusion.ddim import DDIMSampler
35
- from wav_evaluation.models.CLAPWrapper import CLAPWrapper
36
- from inference.svs.ds_e2e import DiffSingerE2EInfer
37
- from audio_to_text.inference_waveform import AudioCapModel
38
- import whisper
39
- from text_to_speech.TTS_binding import TTSInference
40
- from inference.svs.ds_e2e import DiffSingerE2EInfer
41
- from inference.tts.GenerSpeech import GenerSpeechInfer
42
- from utils.hparams import set_hparams
43
- from utils.hparams import hparams as hp
44
- from utils.os_utils import move_file
45
- import scipy.io.wavfile as wavfile
46
- from audio_infer.utils import config as detection_config
47
- from audio_infer.pytorch.models import PVT
48
- from src.models import BinauralNetwork
49
- from sound_extraction.model.LASSNet import LASSNet
50
- from sound_extraction.utils.stft import STFT
51
- from sound_extraction.utils.wav_io import load_wav, save_wav
52
- from target_sound_detection.src import models as tsd_models
53
- from target_sound_detection.src.models import event_labels
54
- from target_sound_detection.src.utils import median_filter, decode_with_timestamps
55
- import clip
56
-
57
-
58
- def prompts(name, description):
59
- def decorator(func):
60
- func.name = name
61
- func.description = description
62
- return func
63
-
64
- return decorator
65
-
66
-
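The prompts decorator above only attaches name and description attributes to the wrapped function; it does not register anything on its own. As a hedged illustration (not code from this repository), a caller could discover the tagged tool methods on any of the classes below like this:

import inspect

def collect_tools(instance):
    """Gather {tool_name: (description, bound_method)} from methods tagged by @prompts."""
    tools = {}
    for _, method in inspect.getmembers(instance, predicate=inspect.ismethod):
        # the decorator sets .name and .description on the underlying function,
        # which bound methods expose through attribute delegation
        if hasattr(method, 'name') and hasattr(method, 'description'):
            tools[method.name] = (method.description, method)
    return tools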
67
- def initialize_model(config, ckpt, device):
68
- config = OmegaConf.load(config)
69
- model = instantiate_from_config(config.model)
70
- model.load_state_dict(torch.load(ckpt, map_location='cpu')["state_dict"], strict=False)
71
-
72
- model = model.to(device)
73
- model.cond_stage_model.to(model.device)
74
- model.cond_stage_model.device = model.device
75
- sampler = DDIMSampler(model)
76
- return sampler
77
-
78
-
79
- def initialize_model_inpaint(config, ckpt):
80
- config = OmegaConf.load(config)
81
- model = instantiate_from_config(config.model)
82
- model.load_state_dict(torch.load(ckpt, map_location='cpu')["state_dict"], strict=False)
83
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
84
- model = model.to(device)
85
- print(model.device, device, model.cond_stage_model.device)
86
- sampler = DDIMSampler(model)
87
- return sampler
88
-
89
-
90
- def select_best_audio(prompt, wav_list):
91
- clap_model = CLAPWrapper('text_to_audio/Make_An_Audio/useful_ckpts/CLAP/CLAP_weights_2022.pth',
92
- 'text_to_audio/Make_An_Audio/useful_ckpts/CLAP/config.yml',
93
- use_cuda=torch.cuda.is_available())
94
- text_embeddings = clap_model.get_text_embeddings([prompt])
95
- score_list = []
96
- for data in wav_list:
97
- sr, wav = data
98
- audio_embeddings = clap_model.get_audio_embeddings([(torch.FloatTensor(wav), sr)], resample=True)
99
- score = clap_model.compute_similarity(audio_embeddings, text_embeddings,
100
- use_logit_scale=False).squeeze().cpu().numpy()
101
- score_list.append(score)
102
- max_index = np.array(score_list).argmax()
103
- print(score_list, max_index)
104
- return wav_list[max_index]
105
-
106
-
107
- def merge_audio(audio_path_1, audio_path_2):
108
- merged_signal = []
109
- sr_1, signal_1 = wavfile.read(audio_path_1)
110
- sr_2, signal_2 = wavfile.read(audio_path_2)
111
- merged_signal.append(signal_1)
112
- merged_signal.append(signal_2)
113
- merged_signal = np.hstack(merged_signal)
114
- merged_signal = np.asarray(merged_signal, dtype=np.int16)
115
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
116
- wavfile.write(audio_filename, sr_1, merged_signal)
117
- return audio_filename
118
-
119
-
120
- class T2I:
121
- def __init__(self, device):
122
- print("Initializing T2I to %s" % device)
123
- self.device = device
124
- self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
125
- self.text_refine_tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
126
- self.text_refine_model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
127
- self.text_refine_gpt2_pipe = pipeline("text-generation", model=self.text_refine_model,
128
- tokenizer=self.text_refine_tokenizer, device=self.device)
129
- self.pipe.to(device)
130
-
131
- @prompts(name="Generate Image From User Input Text",
132
- description="useful when you want to generate an image from a user input text and save it to a file. "
133
- "like: generate an image of an object or something, or generate an image that includes some objects. "
134
- "The input to this tool should be a string, representing the text used to generate image. ")
135
- def inference(self, text):
136
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
137
- refined_text = self.text_refine_gpt2_pipe(text)[0]["generated_text"]
138
- print(f'{text} refined to {refined_text}')
139
- image = self.pipe(refined_text).images[0]
140
- image.save(image_filename)
141
- print(f"Processed T2I.run, text: {text}, image_filename: {image_filename}")
142
- return image_filename
143
-
144
-
145
- class ImageCaptioning:
146
- def __init__(self, device):
147
- print("Initializing ImageCaptioning to %s" % device)
148
- self.device = device
149
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
150
- self.model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to(
151
- self.device)
152
-
153
- @prompts(name="Get Photo Description",
154
- description="useful when you want to know what is inside the photo, "
155
- "receives image_path as input. "
156
- "The input to this tool should be a string, "
157
- "representing the image_path. ")
158
- def inference(self, image_path):
159
- inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device)
160
- out = self.model.generate(**inputs)
161
- captions = self.processor.decode(out[0], skip_special_tokens=True)
162
- return captions
163
-
164
-
165
- class T2A:
166
- def __init__(self, device):
167
- print("Initializing Make-An-Audio to %s" % device)
168
- self.device = device
169
- self.sampler = initialize_model('text_to_audio/Make_An_Audio/configs/text-to-audio/txt2audio_args.yaml',
170
- 'text_to_audio/Make_An_Audio/useful_ckpts/ta40multi_epoch=000085.ckpt',
171
- device=device)
172
- self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device)
173
-
174
- def txt2audio(self, text, seed=55, scale=1.5, ddim_steps=100, n_samples=3, W=624, H=80):
175
- SAMPLE_RATE = 16000
176
- prng = np.random.RandomState(seed)
177
- start_code = prng.randn(n_samples, self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8)
178
- start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32)
179
- uc = self.sampler.model.get_learned_conditioning(n_samples * [""])
180
- c = self.sampler.model.get_learned_conditioning(n_samples * [text])
181
- shape = [self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8] # (z_dim, 80//2^x, 848//2^x)
182
- samples_ddim, _ = self.sampler.sample(S=ddim_steps,
183
- conditioning=c,
184
- batch_size=n_samples,
185
- shape=shape,
186
- verbose=False,
187
- unconditional_guidance_scale=scale,
188
- unconditional_conditioning=uc,
189
- x_T=start_code)
190
-
191
- x_samples_ddim = self.sampler.model.decode_first_stage(samples_ddim)
192
- x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) # [0, 1]
193
-
194
- wav_list = []
195
- for idx, spec in enumerate(x_samples_ddim):
196
- wav = self.vocoder.vocode(spec)
197
- wav_list.append((SAMPLE_RATE, wav))
198
- best_wav = select_best_audio(text, wav_list)
199
- return best_wav
200
-
201
- @prompts(name="Generate Audio From User Input Text",
202
- description="useful for when you want to generate an audio "
203
- "from a user input text and it saved it to a file."
204
- "from a user input text and save it to a file. "
205
- "representing the text used to generate audio.")
206
- def inference(self, text, seed=55, scale=1.5, ddim_steps=100, n_samples=3, W=624, H=80):
207
- melbins, mel_len = 80, 624
208
- with torch.no_grad():
209
- result = self.txt2audio(
210
- text=text,
211
- H=melbins,
212
- W=mel_len
213
- )
214
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
215
- soundfile.write(audio_filename, result[1], samplerate=16000)
216
- print(f"Processed T2A.run, text: {text}, audio_filename: {audio_filename}")
217
- return audio_filename
218
-
219
-
220
- class I2A:
221
- def __init__(self, device):
222
- print("Initializing Make-An-Audio-Image to %s" % device)
223
- self.device = device
224
- self.sampler = initialize_model('text_to_audio/Make_An_Audio/configs/img_to_audio/img2audio_args.yaml',
225
- 'text_to_audio/Make_An_Audio/useful_ckpts/ta54_epoch=000216.ckpt',
226
- device=device)
227
- self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device)
228
-
229
- def img2audio(self, image, seed=55, scale=3, ddim_steps=100, W=624, H=80):
230
- SAMPLE_RATE = 16000
231
- n_samples = 1 # only support 1 sample
232
- prng = np.random.RandomState(seed)
233
- start_code = prng.randn(n_samples, self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8)
234
- start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32)
235
- uc = self.sampler.model.get_learned_conditioning(n_samples * [""])
236
- # image = Image.fromarray(image)
237
- image = Image.open(image)
238
- image = self.sampler.model.cond_stage_model.preprocess(image).unsqueeze(0)
239
- image_embedding = self.sampler.model.cond_stage_model.forward_img(image)
240
- c = image_embedding.repeat(n_samples, 1, 1)
241
- shape = [self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8] # (z_dim, 80//2^x, 848//2^x)
242
- samples_ddim, _ = self.sampler.sample(S=ddim_steps,
243
- conditioning=c,
244
- batch_size=n_samples,
245
- shape=shape,
246
- verbose=False,
247
- unconditional_guidance_scale=scale,
248
- unconditional_conditioning=uc,
249
- x_T=start_code)
250
-
251
- x_samples_ddim = self.sampler.model.decode_first_stage(samples_ddim)
252
- x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) # [0, 1]
253
- wav_list = []
254
- for idx, spec in enumerate(x_samples_ddim):
255
- wav = self.vocoder.vocode(spec)
256
- wav_list.append((SAMPLE_RATE, wav))
257
- best_wav = wav_list[0]
258
- return best_wav
259
-
260
- @prompts(name="Generate Audio From The Image",
261
- description="useful for when you want to generate an audio "
262
- "based on an image. "
263
- "The input to this tool should be a string, "
264
- "representing the image_path. ")
265
- def inference(self, image, seed=55, scale=3, ddim_steps=100, W=624, H=80):
266
- melbins, mel_len = 80, 624
267
- with torch.no_grad():
268
- result = self.img2audio(
269
- image=image,
270
- H=melbins,
271
- W=mel_len
272
- )
273
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
274
- soundfile.write(audio_filename, result[1], samplerate=16000)
275
- print(f"Processed I2A.run, image_filename: {image}, audio_filename: {audio_filename}")
276
- return audio_filename
277
-
278
-
279
- class TTS:
280
- def __init__(self, device=None):
281
- self.model = TTSInference(device)
282
-
283
- @prompts(name="Synthesize Speech Given the User Input Text",
284
- description="useful for when you want to convert a user input text into speech audio and save it to a file. "
285
- "The input to this tool should be a string, "
286
- "representing the text used to be converted to speech.")
287
- def inference(self, text):
288
- inp = {"text": text}
289
- out = self.model.infer_once(inp)
290
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
291
- soundfile.write(audio_filename, out, samplerate=22050)
292
- return audio_filename
293
-
294
-
295
- class T2S:
296
- def __init__(self, device=None):
297
- if device is None:
298
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
299
- print("Initializing DiffSinger to %s" % device)
300
- self.device = device
301
- self.exp_name = 'checkpoints/0831_opencpop_ds1000'
302
- self.config = 'NeuralSeq/egs/egs_bases/svs/midi/e2e/opencpop/ds1000.yaml'
303
- self.set_model_hparams()
304
- self.pipe = DiffSingerE2EInfer(self.hp, device)
305
- self.default_inp = {
306
- 'text': '你 说 你 不 SP 懂 为 何 在 这 时 牵 手 AP',
307
- 'notes': 'D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | rest | D#4/Eb4 | D4 | D4 | D4 | D#4/Eb4 | F4 | D#4/Eb4 | D4 | rest',
308
- 'notes_duration': '0.113740 | 0.329060 | 0.287950 | 0.133480 | 0.150900 | 0.484730 | 0.242010 | 0.180820 | 0.343570 | 0.152050 | 0.266720 | 0.280310 | 0.633300 | 0.444590'
309
- }
310
-
311
- def set_model_hparams(self):
312
- set_hparams(config=self.config, exp_name=self.exp_name, print_hparams=False)
313
- self.hp = hp
314
-
315
- @prompts(name="Generate Singing Voice From User Input Text, Note and Duration Sequence",
316
- description="useful for when you want to generate a piece of singing voice (Optional: from User Input Text, Note and Duration Sequence) "
317
- "and save it to a file."
318
- "If Like: Generate a piece of singing voice, the input to this tool should be \"\" since there is no User Input Text, Note and Duration Sequence. "
319
- "If Like: Generate a piece of singing voice. Text: xxx, Note: xxx, Duration: xxx. "
320
- "Or Like: Generate a piece of singing voice. Text is xxx, note is xxx, duration is xxx."
321
- "The input to this tool should be a comma seperated string of three, "
322
- "representing text, note and duration sequence since User Input Text, Note and Duration Sequence are all provided. ")
323
- def inference(self, inputs):
324
- self.set_model_hparams()
325
- val = inputs.split(",")
326
- key = ['text', 'notes', 'notes_duration']
327
- try:
328
- inp = {k: v for k, v in zip(key, val)}
329
- wav = self.pipe.infer_once(inp)
330
- except:
331
- print('Error occurs. Generate default audio sample.\n')
332
- inp = self.default_inp
333
- wav = self.pipe.infer_once(inp)
334
- wav *= 32767
335
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
336
- wavfile.write(audio_filename, self.hp['audio_sample_rate'], wav.astype(np.int16))
337
- print(f"Processed T2S.run, audio_filename: {audio_filename}")
338
- return audio_filename
339
-
340
-
341
- class TTS_OOD:
342
- def __init__(self, device):
343
- if device is None:
344
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
345
- print("Initializing GenerSpeech to %s" % device)
346
- self.device = device
347
- self.exp_name = 'checkpoints/GenerSpeech'
348
- self.config = 'NeuralSeq/modules/GenerSpeech/config/generspeech.yaml'
349
- self.set_model_hparams()
350
- self.pipe = GenerSpeechInfer(self.hp, device)
351
-
352
- def set_model_hparams(self):
353
- set_hparams(config=self.config, exp_name=self.exp_name, print_hparams=False)
354
- f0_stats_fn = f'{hp["binary_data_dir"]}/train_f0s_mean_std.npy'
355
- if os.path.exists(f0_stats_fn):
356
- hp['f0_mean'], hp['f0_std'] = np.load(f0_stats_fn)
357
- hp['f0_mean'] = float(hp['f0_mean'])
358
- hp['f0_std'] = float(hp['f0_std'])
359
- hp['emotion_encoder_path'] = 'checkpoints/Emotion_encoder.pt'
360
- self.hp = hp
361
-
362
- @prompts(name="Style Transfer",
363
- description="useful for when you want to generate speech samples with styles "
364
- "(e.g., timbre, emotion, and prosody) derived from a reference custom voice. "
365
- "Like: Generate a speech with style transferred from this voice. The text is xxx., or speak using the voice of this audio. The text is xxx."
366
- "The input to this tool should be a comma seperated string of two, "
367
- "representing reference audio path and input text. ")
368
- def inference(self, inputs):
369
- self.set_model_hparams()
370
- key = ['ref_audio', 'text']
371
- val = inputs.split(",")
372
- inp = {k: v for k, v in zip(key, val)}
373
- wav = self.pipe.infer_once(inp)
374
- wav *= 32767
375
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
376
- wavfile.write(audio_filename, self.hp['audio_sample_rate'], wav.astype(np.int16))
377
- print(
378
- f"Processed GenerSpeech.run. Input text:{val[1]}. Input reference audio: {val[0]}. Output Audio_filename: {audio_filename}")
379
- return audio_filename
380
-
381
-
382
- class Inpaint:
383
- def __init__(self, device):
384
- print("Initializing Make-An-Audio-inpaint to %s" % device)
385
- self.device = device
386
- self.sampler = initialize_model_inpaint('text_to_audio/Make_An_Audio/configs/inpaint/txt2audio_args.yaml',
387
- 'text_to_audio/Make_An_Audio/useful_ckpts/inpaint7_epoch00047.ckpt')
388
- self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device)
389
- self.cmap_transform = matplotlib.cm.viridis
390
-
391
- def make_batch_sd(self, mel, mask, num_samples=1):
392
-
393
- mel = torch.from_numpy(mel)[None, None, ...].to(dtype=torch.float32)
394
- mask = torch.from_numpy(mask)[None, None, ...].to(dtype=torch.float32)
395
- masked_mel = (1 - mask) * mel
396
-
397
- mel = mel * 2 - 1
398
- mask = mask * 2 - 1
399
- masked_mel = masked_mel * 2 - 1
400
-
401
- batch = {
402
- "mel": repeat(mel.to(device=self.device), "1 ... -> n ...", n=num_samples),
403
- "mask": repeat(mask.to(device=self.device), "1 ... -> n ...", n=num_samples),
404
- "masked_mel": repeat(masked_mel.to(device=self.device), "1 ... -> n ...", n=num_samples),
405
- }
406
- return batch
407
-
408
- def gen_mel(self, input_audio_path):
409
- SAMPLE_RATE = 16000
410
- sr, ori_wav = wavfile.read(input_audio_path)
411
- print("gen_mel")
412
- print(sr, ori_wav.shape, ori_wav)
413
- ori_wav = ori_wav.astype(np.float32, order='C') / 32768.0
414
- if len(ori_wav.shape) == 2: # stereo
415
- ori_wav = librosa.to_mono(
416
- ori_wav.T) # gradio load wav shape could be (wav_len,2) but librosa expects (2,wav_len)
417
- print(sr, ori_wav.shape, ori_wav)
418
- ori_wav = librosa.resample(ori_wav, orig_sr=sr, target_sr=SAMPLE_RATE)
419
-
420
- mel_len, hop_size = 848, 256
421
- input_len = mel_len * hop_size
422
- if len(ori_wav) < input_len:
423
- input_wav = np.pad(ori_wav, (0, mel_len * hop_size), constant_values=0)
424
- else:
425
- input_wav = ori_wav[:input_len]
426
-
427
- mel = TRANSFORMS_16000(input_wav)
428
- return mel
429
-
430
- def gen_mel_audio(self, input_audio):
431
- SAMPLE_RATE = 16000
432
- sr, ori_wav = input_audio
433
- print("gen_mel_audio")
434
- print(sr, ori_wav.shape, ori_wav)
435
-
436
- ori_wav = ori_wav.astype(np.float32, order='C') / 32768.0
437
- if len(ori_wav.shape) == 2: # stereo
438
- ori_wav = librosa.to_mono(
439
- ori_wav.T) # gradio load wav shape could be (wav_len,2) but librosa expects (2,wav_len)
440
- print(sr, ori_wav.shape, ori_wav)
441
- ori_wav = librosa.resample(ori_wav, orig_sr=sr, target_sr=SAMPLE_RATE)
442
-
443
- mel_len, hop_size = 848, 256
444
- input_len = mel_len * hop_size
445
- if len(ori_wav) < input_len:
446
- input_wav = np.pad(ori_wav, (0, mel_len * hop_size), constant_values=0)
447
- else:
448
- input_wav = ori_wav[:input_len]
449
- mel = TRANSFORMS_16000(input_wav)
450
- return mel
451
-
452
- def inpaint(self, batch, seed, ddim_steps, num_samples=1, W=512, H=512):
453
- model = self.sampler.model
454
-
455
- prng = np.random.RandomState(seed)
456
- start_code = prng.randn(num_samples, model.first_stage_model.embed_dim, H // 8, W // 8)
457
- start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32)
458
-
459
- c = model.get_first_stage_encoding(model.encode_first_stage(batch["masked_mel"]))
460
- cc = torch.nn.functional.interpolate(batch["mask"],
461
- size=c.shape[-2:])
462
- c = torch.cat((c, cc), dim=1) # (b,c+1,h,w) 1 is mask
463
-
464
- shape = (c.shape[1] - 1,) + c.shape[2:]
465
- samples_ddim, _ = self.sampler.sample(S=ddim_steps,
466
- conditioning=c,
467
- batch_size=c.shape[0],
468
- shape=shape,
469
- verbose=False)
470
- x_samples_ddim = model.decode_first_stage(samples_ddim)
471
-
472
- mask = batch["mask"] # [-1,1]
473
- mel = torch.clamp((batch["mel"] + 1.0) / 2.0, min=0.0, max=1.0)
474
- mask = torch.clamp((batch["mask"] + 1.0) / 2.0, min=0.0, max=1.0)
475
- predicted_mel = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
476
- inpainted = (1 - mask) * mel + mask * predicted_mel
477
- inpainted = inpainted.cpu().numpy().squeeze()
478
- inapint_wav = self.vocoder.vocode(inpainted)
479
-
480
- return inpainted, inapint_wav
481
-
482
- def predict(self, input_audio, mel_and_mask, seed=55, ddim_steps=100):
483
- SAMPLE_RATE = 16000
484
- torch.set_grad_enabled(False)
485
- mel_img = Image.open(mel_and_mask['image'])
486
- mask_img = Image.open(mel_and_mask["mask"])
487
- show_mel = np.array(mel_img.convert("L")) / 255
488
- mask = np.array(mask_img.convert("L")) / 255
489
- mel_bins, mel_len = 80, 848
490
- input_mel = self.gen_mel_audio(input_audio)[:, :mel_len]
491
- mask = np.pad(mask, ((0, 0), (0, mel_len - mask.shape[1])), mode='constant', constant_values=0)
492
- print(mask.shape, input_mel.shape)
493
- with torch.no_grad():
494
- batch = self.make_batch_sd(input_mel, mask, num_samples=1)
495
- inpainted, gen_wav = self.inpaint(
496
- batch=batch,
497
- seed=seed,
498
- ddim_steps=ddim_steps,
499
- num_samples=1,
500
- H=mel_bins, W=mel_len
501
- )
502
- inpainted = inpainted[:, :show_mel.shape[1]]
503
- color_mel = self.cmap_transform(inpainted)
504
- input_len = int(input_audio[1].shape[0] * SAMPLE_RATE / input_audio[0])
505
- gen_wav = (gen_wav * 32768).astype(np.int16)[:input_len]
506
- image = Image.fromarray((color_mel * 255).astype(np.uint8))
507
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
508
- image.save(image_filename)
509
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
510
- soundfile.write(audio_filename, gen_wav, samplerate=16000)
511
- return image_filename, audio_filename
512
-
513
- @prompts(name="Audio Inpainting",
514
- description="useful for when you want to inpaint a mel spectrum of an audio and predict this audio, "
515
- "this tool will generate a mel spectrum and you can inpaint it, receives audio_path as input. "
516
- "The input to this tool should be a string, "
517
- "representing the audio_path. ")
518
- def inference(self, input_audio_path):
519
- crop_len = 500
520
- crop_mel = self.gen_mel(input_audio_path)[:, :crop_len]
521
- color_mel = self.cmap_transform(crop_mel)
522
- image = Image.fromarray((color_mel * 255).astype(np.uint8))
523
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
524
- image.save(image_filename)
525
- return image_filename
526
-
527
-
528
- class ASR:
529
- def __init__(self, device):
530
- print("Initializing Whisper to %s" % device)
531
- self.device = device
532
- self.model = whisper.load_model("base", device=device)
533
-
534
- @prompts(name="Transcribe speech",
535
- description="useful for when you want to know the text corresponding to a human speech, "
536
- "receives audio_path as input. "
537
- "The input to this tool should be a string, "
538
- "representing the audio_path. ")
539
- def inference(self, audio_path):
540
- audio = whisper.load_audio(audio_path)
541
- audio = whisper.pad_or_trim(audio)
542
- mel = whisper.log_mel_spectrogram(audio).to(self.device)
543
- _, probs = self.model.detect_language(mel)
544
- options = whisper.DecodingOptions()
545
- result = whisper.decode(self.model, mel, options)
546
- return result.text
547
-
548
- def translate_english(self, audio_path):
549
- audio = self.model.transcribe(audio_path, language='English')
550
- return audio['text']
551
-
552
-
553
- class A2T:
554
- def __init__(self, device):
555
- print("Initializing Audio-To-Text Model to %s" % device)
556
- self.device = device
557
- self.model = AudioCapModel("audio_to_text/audiocaps_cntrstv_cnn14rnn_trm")
558
-
559
- @prompts(name="Generate Text From The Audio",
560
- description="useful for when you want to describe an audio in text, "
561
- "receives audio_path as input. "
562
- "The input to this tool should be a string, "
563
- "representing the audio_path. ")
564
- def inference(self, audio_path):
565
- audio = whisper.load_audio(audio_path)
566
- caption_text = self.model(audio)
567
- return caption_text[0]
568
-
569
-
570
- class SoundDetection:
571
- def __init__(self, device):
572
- self.device = device
573
- self.sample_rate = 32000
574
- self.window_size = 1024
575
- self.hop_size = 320
576
- self.mel_bins = 64
577
- self.fmin = 50
578
- self.fmax = 14000
579
- self.model_type = 'PVT'
580
- self.checkpoint_path = 'audio_detection/audio_infer/useful_ckpts/audio_detection.pth'
581
- self.classes_num = detection_config.classes_num
582
- self.labels = detection_config.labels
583
- self.frames_per_second = self.sample_rate // self.hop_size
584
- # Model = eval(self.model_type)
585
- self.model = PVT(sample_rate=self.sample_rate, window_size=self.window_size,
586
- hop_size=self.hop_size, mel_bins=self.mel_bins, fmin=self.fmin, fmax=self.fmax,
587
- classes_num=self.classes_num)
588
- checkpoint = torch.load(self.checkpoint_path, map_location=self.device)
589
- self.model.load_state_dict(checkpoint['model'])
590
- self.model.to(device)
591
-
592
- @prompts(name="Detect The Sound Event From The Audio",
593
- description="useful for when you want to know what event in the audio and the sound event start or end time, it will return an image "
594
- "receives audio_path as input. "
595
- "The input to this tool should be a string, "
596
- "representing the audio_path. ")
597
- def inference(self, audio_path):
598
- # Forward
599
- (waveform, _) = librosa.core.load(audio_path, sr=self.sample_rate, mono=True)
600
- waveform = waveform[None, :] # (1, audio_length)
601
- waveform = torch.from_numpy(waveform)
602
- waveform = waveform.to(self.device)
603
- # Forward
604
- with torch.no_grad():
605
- self.model.eval()
606
- batch_output_dict = self.model(waveform, None)
607
- framewise_output = batch_output_dict['framewise_output'].data.cpu().numpy()[0]
608
- """(time_steps, classes_num)"""
609
- # print('Sound event detection result (time_steps x classes_num): {}'.format(
610
- # framewise_output.shape))
611
- import numpy as np
612
- import matplotlib.pyplot as plt
613
- sorted_indexes = np.argsort(np.max(framewise_output, axis=0))[::-1]
614
- top_k = 10 # Show top results
615
- top_result_mat = framewise_output[:, sorted_indexes[0: top_k]]
616
- """(time_steps, top_k)"""
617
- # Plot result
618
- stft = librosa.core.stft(y=waveform[0].data.cpu().numpy(), n_fft=self.window_size,
619
- hop_length=self.hop_size, window='hann', center=True)
620
- frames_num = stft.shape[-1]
621
- fig, axs = plt.subplots(2, 1, sharex=True, figsize=(10, 4))
622
- axs[0].matshow(np.log(np.abs(stft)), origin='lower', aspect='auto', cmap='jet')
623
- axs[0].set_ylabel('Frequency bins')
624
- axs[0].set_title('Log spectrogram')
625
- axs[1].matshow(top_result_mat.T, origin='upper', aspect='auto', cmap='jet', vmin=0, vmax=1)
626
- axs[1].xaxis.set_ticks(np.arange(0, frames_num, self.frames_per_second))
627
- axs[1].xaxis.set_ticklabels(np.arange(0, frames_num / self.frames_per_second))
628
- axs[1].yaxis.set_ticks(np.arange(0, top_k))
629
- axs[1].yaxis.set_ticklabels(np.array(self.labels)[sorted_indexes[0: top_k]])
630
- axs[1].yaxis.grid(color='k', linestyle='solid', linewidth=0.3, alpha=0.3)
631
- axs[1].set_xlabel('Seconds')
632
- axs[1].xaxis.set_ticks_position('bottom')
633
- plt.tight_layout()
634
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
635
- plt.savefig(image_filename)
636
- return image_filename
637
-
638
-
639
- class SoundExtraction:
640
- def __init__(self, device):
641
- self.device = device
642
- self.model_file = 'sound_extraction/useful_ckpts/LASSNet.pt'
643
- self.stft = STFT()
644
- import torch.nn as nn
645
- self.model = nn.DataParallel(LASSNet(device)).to(device)
646
- checkpoint = torch.load(self.model_file)
647
- self.model.load_state_dict(checkpoint['model'])
648
- self.model.eval()
649
-
650
- @prompts(name="Extract Sound Event From Mixture Audio Based On Language Description",
651
- description="useful for when you extract target sound from a mixture audio, you can describe the target sound by text, "
652
- "receives audio_path and text as input. "
653
- "The input to this tool should be a comma seperated string of two, "
654
- "representing mixture audio path and input text.")
655
- def inference(self, inputs):
656
- # key = ['ref_audio', 'text']
657
- val = inputs.split(",")
658
- audio_path = val[0] # audio_path, text
659
- text = val[1]
660
- waveform = load_wav(audio_path)
661
- waveform = torch.tensor(waveform).transpose(1, 0)
662
- mixed_mag, mixed_phase = self.stft.transform(waveform)
663
- text_query = ['[CLS] ' + text]
664
- mixed_mag = mixed_mag.transpose(2, 1).unsqueeze(0).to(self.device)
665
- est_mask = self.model(mixed_mag, text_query)
666
- est_mag = est_mask * mixed_mag
667
- est_mag = est_mag.squeeze(1)
668
- est_mag = est_mag.permute(0, 2, 1)
669
- est_wav = self.stft.inverse(est_mag.cpu().detach(), mixed_phase)
670
- est_wav = est_wav.squeeze(0).squeeze(0).numpy()
671
- # est_path = f'output/est{i}.wav'
672
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
673
- print('audio_filename ', audio_filename)
674
- save_wav(est_wav, audio_filename)
675
- return audio_filename
676
-
677
-
678
- class Binaural:
679
- def __init__(self, device):
680
- self.device = device
681
- self.model_file = 'mono2binaural/useful_ckpts/m2b/binaural_network.net'
682
- self.position_file = ['mono2binaural/useful_ckpts/m2b/tx_positions.txt',
683
- 'mono2binaural/useful_ckpts/m2b/tx_positions2.txt',
684
- 'mono2binaural/useful_ckpts/m2b/tx_positions3.txt',
685
- 'mono2binaural/useful_ckpts/m2b/tx_positions4.txt',
686
- 'mono2binaural/useful_ckpts/m2b/tx_positions5.txt']
687
- self.net = BinauralNetwork(view_dim=7,
688
- warpnet_layers=4,
689
- warpnet_channels=64,
690
- )
691
- self.net.load_from_file(self.model_file)
692
- self.sr = 48000
693
-
694
- @prompts(name="Synthesize Binaural Audio From A Mono Audio Input",
695
- description="useful for when you want to transfer your mono audio into binaural audio, "
696
- "receives audio_path as input. "
697
- "The input to this tool should be a string, "
698
- "representing the audio_path. ")
699
- def inference(self, audio_path):
700
- mono, sr = librosa.load(path=audio_path, sr=self.sr, mono=True)
701
- mono = torch.from_numpy(mono)
702
- mono = mono.unsqueeze(0)
703
- import numpy as np
704
- import random
705
- rand_int = random.randint(0, 4)
706
- view = np.loadtxt(self.position_file[rand_int]).transpose().astype(np.float32)
707
- view = torch.from_numpy(view)
708
- if not view.shape[-1] * 400 == mono.shape[-1]:
709
- mono = mono[:, :(mono.shape[-1] // 400) * 400] #
710
- if view.shape[1] * 400 > mono.shape[1]:
711
- m_a = view.shape[1] - mono.shape[-1] // 400
712
- rand_st = random.randint(0, m_a)
713
- view = view[:, m_a:m_a + (mono.shape[-1] // 400)] #
714
- # binauralize and save output
715
- self.net.eval().to(self.device)
716
- mono, view = mono.to(self.device), view.to(self.device)
717
- chunk_size = 48000 # forward in chunks of 1s
718
- rec_field = 1000 # add 1000 samples as "safe bet" since warping has undefined rec. field
719
- rec_field -= rec_field % 400 # make sure rec_field is a multiple of 400 to match audio and view frequencies
720
- chunks = [
721
- {
722
- "mono": mono[:, max(0, i - rec_field):i + chunk_size],
723
- "view": view[:, max(0, i - rec_field) // 400:(i + chunk_size) // 400]
724
- }
725
- for i in range(0, mono.shape[-1], chunk_size)
726
- ]
727
- for i, chunk in enumerate(chunks):
728
- with torch.no_grad():
729
- mono = chunk["mono"].unsqueeze(0)
730
- view = chunk["view"].unsqueeze(0)
731
- binaural = self.net(mono, view).squeeze(0)
732
- if i > 0:
733
- binaural = binaural[:, -(mono.shape[-1] - rec_field):]
734
- chunk["binaural"] = binaural
735
- binaural = torch.cat([chunk["binaural"] for chunk in chunks], dim=-1)
736
- binaural = torch.clamp(binaural, min=-1, max=1).cpu()
737
- # binaural = chunked_forwarding(net, mono, view)
738
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
739
- import torchaudio
740
- torchaudio.save(audio_filename, binaural, sr)
741
- # soundfile.write(audio_filename, binaural, samplerate = 48000)
742
- print(f"Processed Binaural.run, audio_filename: {audio_filename}")
743
- return audio_filename
744
-
745
-
746
- class TargetSoundDetection:
747
- def __init__(self, device):
748
- self.device = device
749
- self.MEL_ARGS = {
750
- 'n_mels': 64,
751
- 'n_fft': 2048,
752
- 'hop_length': int(22050 * 20 / 1000),
753
- 'win_length': int(22050 * 40 / 1000)
754
- }
755
- self.EPS = np.spacing(1)
756
- self.clip_model, _ = clip.load("ViT-B/32", device=self.device)
757
- self.event_labels = event_labels
758
- self.id_to_event = {i: label for i, label in enumerate(self.event_labels)}
759
- config = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/run_config.pth',
760
- map_location='cpu')
761
- config_parameters = dict(config)
762
- config_parameters['tao'] = 0.6
763
- if 'thres' not in config_parameters.keys():
764
- config_parameters['thres'] = 0.5
765
- if 'time_resolution' not in config_parameters.keys():
766
- config_parameters['time_resolution'] = 125
767
- model_parameters = torch.load(
768
- 'audio_detection/target_sound_detection/useful_ckpts/tsd/run_model_7_loss=-0.0724.pt'
769
- , map_location=lambda storage, loc: storage) # load parameter
770
- self.model = getattr(tsd_models, config_parameters['model'])(config_parameters,
771
- inputdim=64, outputdim=2,
772
- time_resolution=config_parameters[
773
- 'time_resolution'],
774
- **config_parameters['model_args'])
775
- self.model.load_state_dict(model_parameters)
776
- self.model = self.model.to(self.device).eval()
777
- self.re_embeds = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/text_emb.pth')
778
- self.ref_mel = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/ref_mel.pth')
779
-
780
- def extract_feature(self, fname):
781
- import soundfile as sf
782
- y, sr = sf.read(fname, dtype='float32')
783
- print('y ', y.shape)
784
- ti = y.shape[0] / sr
785
- if y.ndim > 1:
786
- y = y.mean(1)
787
- y = librosa.resample(y, orig_sr=sr, target_sr=22050)
788
- lms_feature = np.log(librosa.feature.melspectrogram(y=y, **self.MEL_ARGS) + self.EPS).T
789
- return lms_feature, ti
790
-
791
- def build_clip(self, text):
792
- text = clip.tokenize(text).to(self.device) # ["a diagram with dog", "a dog", "a cat"]
793
- text_features = self.clip_model.encode_text(text)
794
- return text_features
795
-
796
- def cal_similarity(self, target, retrievals):
797
- ans = []
798
- for name in retrievals.keys():
799
- tmp = retrievals[name]
800
- s = torch.cosine_similarity(target.squeeze(), tmp.squeeze(), dim=0)
801
- ans.append(s.item())
802
- return ans.index(max(ans))
803
-
804
- @prompts(name="Target Sound Detection",
805
- description="useful for when you want to know when the target sound event in the audio happens. You can use language descriptions to instruct the model, "
806
- "receives text description and audio_path as input. "
807
- "The input to this tool should be a comma seperated string of two, "
808
- "representing audio path and the text description. ")
809
- def inference(self, inputs):
810
- audio_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
811
- target_emb = self.build_clip(text) # torch type
812
- idx = self.cal_similarity(target_emb, self.re_embeds)
813
- target_event = self.id_to_event[idx]
814
- embedding = self.ref_mel[target_event]
815
- embedding = torch.from_numpy(embedding)
816
- embedding = embedding.unsqueeze(0).to(self.device).float()
817
- inputs, ti = self.extract_feature(audio_path)
818
- inputs = torch.from_numpy(inputs)
819
- inputs = inputs.unsqueeze(0).to(self.device).float()
820
- decision, decision_up, logit = self.model(inputs, embedding)
821
- pred = decision_up.detach().cpu().numpy()
822
- pred = pred[:, :, 0]
823
- frame_num = decision_up.shape[1]
824
- time_ratio = ti / frame_num
825
- filtered_pred = median_filter(pred, window_size=1, threshold=0.5)
826
- time_predictions = []
827
- for index_k in range(filtered_pred.shape[0]):
828
- decoded_pred = []
829
- decoded_pred_ = decode_with_timestamps(target_event, filtered_pred[index_k, :])
830
- if len(decoded_pred_) == 0: # neg deal
831
- decoded_pred_.append((target_event, 0, 0))
832
- decoded_pred.append(decoded_pred_)
833
- for num_batch in range(len(decoded_pred)): # when we test our model,the batch_size is 1
834
- cur_pred = pred[num_batch]
835
- # Save each frame output, for later visualization
836
- label_prediction = decoded_pred[num_batch] # frame predict
837
- for event_label, onset, offset in label_prediction:
838
- time_predictions.append({
839
- 'onset': onset * time_ratio,
840
- 'offset': offset * time_ratio, })
841
- ans = ''
842
- for i, item in enumerate(time_predictions):
843
- ans = ans + 'segment' + str(i + 1) + ' start_time: ' + str(item['onset']) + ' end_time: ' + str(
844
- item['offset']) + '\t'
845
- return ans
846
-
847
-
848
- class Speech_Enh_SC:
849
- """Speech Enhancement or Separation in single-channel
850
- Example usage:
851
- enh_model = Speech_Enh_SS("cuda")
852
- enh_wav = enh_model.inference("./test_chime4_audio_M05_440C0213_PED_REAL.wav")
853
- """
854
-
855
- def __init__(self, device="cuda", model_name="espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw"):
856
- self.model_name = model_name
857
- self.device = device
858
- print("Initializing ESPnet Enh to %s" % device)
859
- self._initialize_model()
860
-
861
- def _initialize_model(self):
862
- from espnet_model_zoo.downloader import ModelDownloader
863
- from espnet2.bin.enh_inference import SeparateSpeech
864
-
865
- d = ModelDownloader()
866
-
867
- cfg = d.download_and_unpack(self.model_name)
868
- self.separate_speech = SeparateSpeech(
869
- train_config=cfg["train_config"],
870
- model_file=cfg["model_file"],
871
- # for segment-wise process on long speech
872
- segment_size=2.4,
873
- hop_size=0.8,
874
- normalize_segment_scale=False,
875
- show_progressbar=True,
876
- ref_channel=None,
- normalize_output_wav=True,
- device=self.device,
- )
-
- @prompts(name="Speech Enhancement In Single-Channel",
- description="useful for when you want to enhance the quality of the speech signal by reducing background noise (single-channel), "
- "receives audio_path as input."
- "The input to this tool should be a string, "
- "representing the audio_path. ")
- def inference(self, speech_path, ref_channel=0):
- speech, sr = soundfile.read(speech_path)
- speech = speech[:, ref_channel]
- enh_speech = self.separate_speech(speech[None, ...], fs=sr)
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr)
- return audio_filename
-
-
- class Speech_SS:
- def __init__(self, device="cuda", model_name="lichenda/wsj0_2mix_skim_noncausal"):
- self.model_name = model_name
- self.device = device
- print("Initializing ESPnet SS to %s" % device)
- self._initialize_model()
-
- def _initialize_model(self):
- from espnet_model_zoo.downloader import ModelDownloader
- from espnet2.bin.enh_inference import SeparateSpeech
-
- d = ModelDownloader()
-
- cfg = d.download_and_unpack(self.model_name)
- self.separate_speech = SeparateSpeech(
- train_config=cfg["train_config"],
- model_file=cfg["model_file"],
- # for segment-wise process on long speech
- segment_size=2.4,
- hop_size=0.8,
- normalize_segment_scale=False,
- show_progressbar=True,
- ref_channel=None,
- normalize_output_wav=True,
- device=self.device,
- )
-
- @prompts(name="Speech Separation",
- description="useful for when you want to separate each speech from the speech mixture, "
- "receives audio_path as input."
- "The input to this tool should be a string, "
- "representing the audio_path. ")
- def inference(self, speech_path):
- speech, sr = soundfile.read(speech_path)
- enh_speech = self.separate_speech(speech[None, ...], fs=sr)
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- if len(enh_speech) == 1:
- soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr)
- else:
- audio_filename_1 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename_1, enh_speech[0].squeeze(), samplerate=sr)
- audio_filename_2 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename_2, enh_speech[1].squeeze(), samplerate=sr)
- audio_filename = merge_audio(audio_filename_1, audio_filename_2)
- return audio_filename
-
- class Speech_Enh_SC:
- """Speech Enhancement or Separation in single-channel
- Example usage:
- enh_model = Speech_Enh_SS("cuda")
- enh_wav = enh_model.inference("./test_chime4_audio_M05_440C0213_PED_REAL.wav")
- """
-
- def __init__(self, device="cuda", model_name="espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw"):
- self.model_name = model_name
- self.device = device
- print("Initializing ESPnet Enh to %s" % device)
- self._initialize_model()
-
- def _initialize_model(self):
- from espnet_model_zoo.downloader import ModelDownloader
- from espnet2.bin.enh_inference import SeparateSpeech
-
- d = ModelDownloader()
-
- cfg = d.download_and_unpack(self.model_name)
- self.separate_speech = SeparateSpeech(
- train_config=cfg["train_config"],
- model_file=cfg["model_file"],
- # for segment-wise process on long speech
- segment_size=2.4,
- hop_size=0.8,
- normalize_segment_scale=False,
- show_progressbar=True,
- ref_channel=None,
- normalize_output_wav=True,
- device=self.device,
- )
-
- @prompts(name="Speech Enhancement In Single-Channel",
- description="useful for when you want to enhance the quality of the speech signal by reducing background noise (single-channel), "
- "receives audio_path as input."
- "The input to this tool should be a string, "
- "representing the audio_path. ")
- def inference(self, speech_path, ref_channel=0):
- speech, sr = soundfile.read(speech_path)
- if speech.ndim != 1:
- speech = speech[:, ref_channel]
- enh_speech = self.separate_speech(speech[None, ...], fs=sr)
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr)
- return audio_filename
-
-
- class Speech_SS:
- def __init__(self, device="cuda", model_name="lichenda/wsj0_2mix_skim_noncausal"):
- self.model_name = model_name
- self.device = device
- print("Initializing ESPnet SS to %s" % device)
- self._initialize_model()
-
- def _initialize_model(self):
- from espnet_model_zoo.downloader import ModelDownloader
- from espnet2.bin.enh_inference import SeparateSpeech
-
- d = ModelDownloader()
-
- cfg = d.download_and_unpack(self.model_name)
- self.separate_speech = SeparateSpeech(
- train_config=cfg["train_config"],
- model_file=cfg["model_file"],
- # for segment-wise process on long speech
- segment_size=2.4,
- hop_size=0.8,
- normalize_segment_scale=False,
- show_progressbar=True,
- ref_channel=None,
- normalize_output_wav=True,
- device=self.device,
- )
-
- @prompts(name="Speech Separation",
- description="useful for when you want to separate each speech from the speech mixture, "
- "receives audio_path as input."
- "The input to this tool should be a string, "
- "representing the audio_path. ")
- def inference(self, speech_path):
- speech, sr = soundfile.read(speech_path)
- enh_speech = self.separate_speech(speech[None, ...], fs=sr)
- audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- if len(enh_speech) == 1:
- soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr)
- else:
- audio_filename_1 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename_1, enh_speech[0].squeeze(), samplerate=sr)
- audio_filename_2 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
- soundfile.write(audio_filename_2, enh_speech[1].squeeze(), samplerate=sr)
- audio_filename = merge_audio(audio_filename_1, audio_filename_2)
- return audio_filename
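
Note: a minimal usage sketch (not part of the deleted file) of the two ESPnet-based tools above, assuming the espnet_model_zoo dependency, a local `audio/` output directory, and placeholder input paths:

    enh_model = Speech_Enh_SC(device="cpu")                  # downloads the CHiME-4 ConvTasNet model on first use
    clean_path = enh_model.inference("noisy_speech.wav")     # placeholder path to a noisy recording

    sep_model = Speech_SS(device="cpu")                      # downloads the WSJ0-2mix SkiM model
    mix_out = sep_model.inference("two_speaker_mix.wav")     # placeholder path to a two-speaker mixture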
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/utils.py DELETED
@@ -1,26 +0,0 @@
- import argparse
- import yaml
- import sys
-
- def read_config_as_args(config_path,args=None,is_config_str=False):
- return_dict = {}
-
- if config_path is not None:
- if is_config_str:
- yml_config = yaml.load(config_path, Loader=yaml.FullLoader)
- else:
- with open(config_path, "r") as f:
- yml_config = yaml.load(f, Loader=yaml.FullLoader)
-
- if args != None:
- for k, v in yml_config.items():
- if k in args.__dict__:
- args.__dict__[k] = v
- else:
- sys.stderr.write("Ignored unknown parameter {} in yaml.\n".format(k))
- else:
- for k, v in yml_config.items():
- return_dict[k] = v
-
- args = args if args != None else return_dict
- return argparse.Namespace(**args)
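
Note: a short sketch (not from the deleted file) of how `read_config_as_args` could be called on a YAML string; the keys shown are illustrative only:

    cfg_str = "text_model: bert-base-uncased\nd_proj: 1024"   # hypothetical config keys
    args = read_config_as_args(cfg_str, is_config_str=True)
    print(args.text_model, args.d_proj)                        # attributes mirror the YAML keys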
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py DELETED
@@ -1,28 +0,0 @@
- # Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
- # LICENSE is in incl_licenses directory.
-
- import torch.nn as nn
- from .resample import UpSample1d, DownSample1d
-
-
- class Activation1d(nn.Module):
- def __init__(self,
- activation,
- up_ratio: int = 2,
- down_ratio: int = 2,
- up_kernel_size: int = 12,
- down_kernel_size: int = 12):
- super().__init__()
- self.up_ratio = up_ratio
- self.down_ratio = down_ratio
- self.act = activation
- self.upsample = UpSample1d(up_ratio, up_kernel_size)
- self.downsample = DownSample1d(down_ratio, down_kernel_size)
-
- # x: [B,C,T]
- def forward(self, x):
- x = self.upsample(x)
- x = self.act(x)
- x = self.downsample(x)
-
- return x
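
Note: a minimal sketch (not from the deleted file) of the `Activation1d` wrapper, assuming the sibling `resample` module is importable from the same package:

    import torch
    import torch.nn as nn

    act = Activation1d(activation=nn.SiLU())   # 2x upsample, activation, 2x downsample by default
    x = torch.randn(1, 32, 16000)              # [B, C, T]
    y = act(x)                                 # same shape as x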
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/modules.py DELETED
@@ -1,314 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- from functools import partial
4
-
5
- from ldm.modules.x_transformer import Encoder, TransformerWrapper # TODO: can we directly rely on lucidrains code and simply add this as a reuirement? --> test
6
- from torch.utils.checkpoint import checkpoint
7
- from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel, AutoTokenizer
8
- from importlib_resources import files
9
- from ldm.modules.encoders.CLAP.utils import read_config_as_args
10
- from ldm.modules.encoders.CLAP.clap import TextEncoder
11
- from ldm.util import default, count_params
12
-
13
-
14
- class AbstractEncoder(nn.Module):
15
- def __init__(self):
16
- super().__init__()
17
-
18
- def encode(self, *args, **kwargs):
19
- raise NotImplementedError
20
-
21
-
22
- class ClassEmbedder(nn.Module):
23
- def __init__(self, embed_dim, n_classes=1000, key='class'):
24
- super().__init__()
25
- self.key = key
26
- self.embedding = nn.Embedding(n_classes, embed_dim)
27
-
28
- def forward(self, batch, key=None):
29
- if key is None:
30
- key = self.key
31
- # this is for use in crossattn
32
- c = batch[key][:, None]# (bsz,1)
33
- c = self.embedding(c)
34
- return c
35
-
36
-
37
- class TransformerEmbedder(AbstractEncoder):
38
- """Some transformer encoder layers"""
39
- def __init__(self, n_embed, n_layer, vocab_size, max_seq_len=77, device="cuda"):
40
- super().__init__()
41
- self.device = device
42
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
43
- attn_layers=Encoder(dim=n_embed, depth=n_layer))
44
-
45
- def forward(self, tokens):
46
- tokens = tokens.to(self.device) # meh
47
- z = self.transformer(tokens, return_embeddings=True)
48
- return z
49
-
50
- def encode(self, x):
51
- return self(x)
52
-
53
-
54
- class BERTTokenizer(AbstractEncoder):
55
- """ Uses a pretrained BERT tokenizer by huggingface. Vocab size: 30522 (?)"""
56
- def __init__(self, device="cuda", vq_interface=True, max_length=77):
57
- super().__init__()
58
- from transformers import BertTokenizerFast # TODO: add to reuquirements
59
- self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
60
- self.device = device
61
- self.vq_interface = vq_interface
62
- self.max_length = max_length
63
-
64
- def forward(self, text):
65
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
66
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
67
- tokens = batch_encoding["input_ids"].to(self.device)
68
- return tokens
69
-
70
- @torch.no_grad()
71
- def encode(self, text):
72
- tokens = self(text)
73
- if not self.vq_interface:
74
- return tokens
75
- return None, None, [None, None, tokens]
76
-
77
- def decode(self, text):
78
- return text
79
-
80
-
81
- class BERTEmbedder(AbstractEncoder):# 这里不是用的pretrained bert,是用的transformers的BertTokenizer加自定义的TransformerWrapper
82
- """Uses the BERT tokenizr model and add some transformer encoder layers"""
83
- def __init__(self, n_embed, n_layer, vocab_size=30522, max_seq_len=77,
84
- device="cuda",use_tokenizer=True, embedding_dropout=0.0):
85
- super().__init__()
86
- self.use_tknz_fn = use_tokenizer
87
- if self.use_tknz_fn:
88
- self.tknz_fn = BERTTokenizer(vq_interface=False, max_length=max_seq_len)
89
- self.device = device
90
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
91
- attn_layers=Encoder(dim=n_embed, depth=n_layer),
92
- emb_dropout=embedding_dropout)
93
-
94
- def forward(self, text):
95
- if self.use_tknz_fn:
96
- tokens = self.tknz_fn(text)#.to(self.device)
97
- else:
98
- tokens = text
99
- z = self.transformer(tokens, return_embeddings=True)
100
- return z
101
-
102
- def encode(self, text):
103
- # output of length 77
104
- return self(text)
105
-
106
-
107
- class SpatialRescaler(nn.Module):
108
- def __init__(self,
109
- n_stages=1,
110
- method='bilinear',
111
- multiplier=0.5,
112
- in_channels=3,
113
- out_channels=None,
114
- bias=False):
115
- super().__init__()
116
- self.n_stages = n_stages
117
- assert self.n_stages >= 0
118
- assert method in ['nearest','linear','bilinear','trilinear','bicubic','area']
119
- self.multiplier = multiplier
120
- self.interpolator = partial(torch.nn.functional.interpolate, mode=method)
121
- self.remap_output = out_channels is not None
122
- if self.remap_output:
123
- print(f'Spatial Rescaler mapping from {in_channels} to {out_channels} channels after resizing.')
124
- self.channel_mapper = nn.Conv2d(in_channels,out_channels,1,bias=bias)
125
-
126
- def forward(self,x):
127
- for stage in range(self.n_stages):
128
- x = self.interpolator(x, scale_factor=self.multiplier)
129
-
130
-
131
- if self.remap_output:
132
- x = self.channel_mapper(x)
133
- return x
134
-
135
- def encode(self, x):
136
- return self(x)
137
-
138
- def disabled_train(self, mode=True):
139
- """Overwrite model.train with this function to make sure train/eval mode
140
- does not change anymore."""
141
- return self
142
-
143
- class FrozenT5Embedder(AbstractEncoder):
144
- """Uses the T5 transformer encoder for text"""
145
- def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl
146
- super().__init__()
147
- self.tokenizer = T5Tokenizer.from_pretrained(version)
148
- self.transformer = T5EncoderModel.from_pretrained(version)
149
- self.device = device
150
- self.max_length = max_length # TODO: typical value?
151
- if freeze:
152
- self.freeze()
153
-
154
- def freeze(self):
155
- self.transformer = self.transformer.eval()
156
- #self.train = disabled_train
157
- for param in self.parameters():
158
- param.requires_grad = False
159
-
160
- def forward(self, text):
161
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
162
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
163
- tokens = batch_encoding["input_ids"].to(self.device)
164
- outputs = self.transformer(input_ids=tokens)
165
-
166
- z = outputs.last_hidden_state
167
- return z
168
-
169
- def encode(self, text):
170
- return self(text)
171
-
172
-
173
- class FrozenCLAPEmbedder(AbstractEncoder):
174
- """Uses the CLAP transformer encoder for text (from huggingface)"""
175
- def __init__(self, weights_path, freeze=True, device="cuda", max_length=77): # clip-vit-base-patch32
176
- super().__init__()
177
-
178
- model_state_dict = torch.load(weights_path, map_location=torch.device('cpu'))['model']
179
- match_params = dict()
180
- for key in list(model_state_dict.keys()):
181
- if 'caption_encoder' in key:
182
- match_params[key.replace('caption_encoder.', '')] = model_state_dict[key]
183
-
184
- config_as_str = files('ldm').joinpath('modules/encoders/CLAP/config.yml').read_text()
185
- args = read_config_as_args(config_as_str, is_config_str=True)
186
-
187
- # To device
188
- self.tokenizer = AutoTokenizer.from_pretrained(args.text_model) # args.text_model
189
- self.caption_encoder = TextEncoder(
190
- args.d_proj, args.text_model, args.transformer_embed_dim
191
- )
192
-
193
- self.max_length = max_length
194
- self.device = device
195
- if freeze: self.freeze()
196
-
197
- print(f"{self.caption_encoder.__class__.__name__} comes with {count_params(self.caption_encoder) * 1.e-6:.2f} M params.")
198
-
199
- def freeze(self):
200
- self.caption_encoder.base = self.caption_encoder.base.eval()
201
- for param in self.caption_encoder.base.parameters():
202
- param.requires_grad = False
203
-
204
-
205
- def encode(self, text):
206
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
207
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
208
- tokens = batch_encoding["input_ids"].to(self.device)
209
-
210
- outputs = self.caption_encoder.base(input_ids=tokens)
211
- z = self.caption_encoder.projection(outputs.last_hidden_state)
212
- return z
213
-
214
- class FrozenCLAPEmbedderNoLoad(AbstractEncoder):
215
- def __init__(self, config, freeze=True, device="cpu", max_length=77):
216
- super().__init__()
217
- args = config
218
-
219
- # To device
220
- self.tokenizer = AutoTokenizer.from_pretrained(args.text_model) # args.text_model
221
- self.caption_encoder = TextEncoder(
222
- args.d_proj, args.text_model, args.transformer_embed_dim
223
- )
224
-
225
- self.max_length = max_length
226
- self.device = device
227
- if freeze: self.freeze()
228
-
229
- print(f"{self.caption_encoder.__class__.__name__} comes with {count_params(self.caption_encoder) * 1.e-6:.2f} M params.")
230
-
231
- def freeze(self):
232
- self.caption_encoder.base = self.caption_encoder.base.eval()
233
- for param in self.caption_encoder.base.parameters():
234
- param.requires_grad = False
235
-
236
-
237
- def encode(self, text):
238
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
239
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
240
- tokens = batch_encoding["input_ids"].to(self.device)
241
-
242
- outputs = self.caption_encoder.base(input_ids=tokens)
243
- z = self.caption_encoder.projection(outputs.last_hidden_state)
244
- return z
245
-
246
-
247
- class NewFrozenCLAPEmbedder(AbstractEncoder):
248
- """Uses the CLAP transformer encoder for text (from huggingface)"""
249
- def __init__(self, weights_path, freeze=True, device="cuda", max_length=77): # clip-vit-base-patch32
250
- super().__init__()
251
- # To device
252
- from transformers import RobertaTokenizer
253
- from ldm.modules.encoders.open_clap import create_model
254
-
255
-
256
- model, model_cfg = create_model(
257
- 'HTSAT-tiny',
258
- 'roberta',
259
- weights_path,
260
- enable_fusion=True,
261
- fusion_type='aff_2d'
262
- )
263
-
264
- del model.audio_branch, model.audio_transform, model.audio_projection
265
- self.tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
266
- self.model = model
267
-
268
- self.max_length = max_length
269
- self.device = device
270
- if freeze: self.freeze()
271
-
272
- param_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
273
- print(f'{self.model.__class__.__name__} comes with: {param_num / 1e+6:.3f} M params.')
274
-
275
- def freeze(self):
276
- self.model = self.model.eval()
277
- for param in self.model.parameters():
278
- param.requires_grad = False
279
-
280
- def encode(self, text):
281
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
282
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
283
- outputs = self.model.text_branch(input_ids=batch_encoding["input_ids"].to(self.device), attention_mask=batch_encoding["attention_mask"].to(self.device))
284
- z = self.model.text_projection(outputs.last_hidden_state)
285
- return z
286
-
287
- class FrozenFLANEmbedder(AbstractEncoder):
288
- """Uses the T5 transformer encoder for text"""
289
- def __init__(self, version="google/flan-t5-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl
290
- super().__init__()
291
- self.tokenizer = T5Tokenizer.from_pretrained(version)
292
- self.transformer = T5EncoderModel.from_pretrained(version)
293
- self.device = device
294
- self.max_length = max_length # TODO: typical value?
295
- if freeze:
296
- self.freeze()
297
-
298
- def freeze(self):
299
- self.transformer = self.transformer.eval()
300
- #self.train = disabled_train
301
- for param in self.parameters():
302
- param.requires_grad = False
303
-
304
- def forward(self, text):
305
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
306
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
307
- tokens = batch_encoding["input_ids"].to(self.device)
308
- outputs = self.transformer(input_ids=tokens)
309
-
310
- z = outputs.last_hidden_state
311
- return z
312
-
313
- def encode(self, text):
314
- return self(text)
spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: 03 NLP MLM SOTA MedEntity
- emoji: 🐠
- colorFrom: yellow
- colorTo: gray
- sdk: gradio
- sdk_version: 3.1.7
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AUBMC-AIM/OCTaGAN/README.md DELETED
@@ -1,46 +0,0 @@
- ---
- title: OCTaGAN
- emoji: 🐠
- colorFrom: purple
- colorTo: blue
- sdk: gradio
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- # Configuration
-
- `title`: _string_
- Display title for the Space
-
- `emoji`: _string_
- Space emoji (emoji-only character allowed)
-
- `colorFrom`: _string_
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
- `colorTo`: _string_
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
- `sdk`: _string_
- Can be either `gradio`, `streamlit`, or `static`
-
- `sdk_version` : _string_
- Only applicable for `streamlit` SDK.
- See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
- `app_file`: _string_
- Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
- Path is relative to the root of the repository.
-
- `models`: _List[string]_
- HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
- Will be parsed automatically from your code if not specified here.
-
- `datasets`: _List[string]_
- HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
- Will be parsed automatically from your code if not specified here.
-
- `pinned`: _boolean_
- Whether the Space stays on top of your list.
spaces/Abuzariii/Text-Generation-with-GPT-2/app.py DELETED
@@ -1,23 +0,0 @@
- from transformers import GPT2LMHeadModel, GPT2Tokenizer
-
- tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
- model = GPT2LMHeadModel.from_pretrained("gpt2-large", pad_token_id=tokenizer.eos_token_id)
-
- from transformers import pipeline, set_seed
- generator = pipeline('text-generation', model='gpt2')
-
- def generate_text(prompt):
- text1 = generator(prompt, max_length=3000, num_return_sequences=1)
- return text1[0].get('generated_text')
-
-
- import gradio as gr
-
- gr.Interface(
- title = 'Text Generation using GPT 2',
- fn=generate_text,
- inputs=gr.Textbox(placeholder="Type Here..."),
- outputs=[
- "text"
- ],
- theme = 'darkhuggingface').launch()
spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptForLove.py DELETED
@@ -1,82 +0,0 @@
- from __future__ import annotations
-
- from aiohttp import ClientSession
- import execjs, os, json
-
- from ..typing import AsyncGenerator
- from .base_provider import AsyncGeneratorProvider
- from .helper import format_prompt
-
- class GptForLove(AsyncGeneratorProvider):
- url = "https://ai18.gptforlove.com"
- supports_gpt_35_turbo = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- if not model:
- model = "gpt-3.5-turbo"
- headers = {
- "authority": "api.gptplus.one",
- "accept": "application/json, text/plain, */*",
- "accept-language": "de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6,nl;q=0.5,zh-CN;q=0.4,zh-TW;q=0.3,zh;q=0.2",
- "content-type": "application/json",
- "origin": cls.url,
- "referer": f"{cls.url}/",
- "sec-ch-ua": "\"Google Chrome\";v=\"117\", \"Not;A=Brand\";v=\"8\", \"Chromium\";v=\"117\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "Linux",
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "cross-site",
- "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36"
- }
- async with ClientSession(headers=headers) as session:
- prompt = format_prompt(messages)
- data = {
- "prompt": prompt,
- "options": {},
- "systemMessage": "You are ChatGPT, the version is GPT3.5, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.",
- "temperature": 0.8,
- "top_p": 1,
- "secret": get_secret(),
- **kwargs
- }
- async with session.post("https://api.gptplus.one/chat-process", json=data) as response:
- response.raise_for_status()
- async for line in response.content:
- try:
- line = json.loads(line)
- except:
- raise RuntimeError(f"Broken line: {line}")
- if "detail" in line:
- content = line["detail"]["choices"][0]["delta"].get("content")
- if content:
- yield content
- elif "10分钟内提问超过了5次" in line:
- raise RuntimeError("Rate limit reached")
- else:
- raise RuntimeError(f"Response: {line}")
-
-
- def get_secret() -> str:
- dir = os.path.dirname(__file__)
- dir += '/npm/node_modules/crypto-js'
- source = """
- CryptoJS = require('{dir}/crypto-js')
- var k = '14487141bvirvvG'
- , e = Math.floor(new Date().getTime() / 1e3);
- var t = CryptoJS.enc.Utf8.parse(e)
- , o = CryptoJS.AES.encrypt(t, k, {
- mode: CryptoJS.mode.ECB,
- padding: CryptoJS.pad.Pkcs7
- });
- return o.toString()
- """
- source = source.replace('{dir}', dir)
- return execjs.compile(source).call('')
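
Note: a sketch (not from the deleted file) of how the provider's async generator was typically consumed, assuming the surrounding g4f package, its bundled crypto-js checkout, and network access to the upstream endpoint:

    import asyncio

    async def demo():
        messages = [{"role": "user", "content": "Say hello"}]   # placeholder prompt
        async for chunk in GptForLove.create_async_generator("gpt-3.5-turbo", messages):
            print(chunk, end="")

    asyncio.run(demo())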
spaces/AiMimicry/sovits-models/app-slice.py DELETED
@@ -1,135 +0,0 @@
1
- import os
2
- import gradio as gr
3
- import edge_tts
4
- from pathlib import Path
5
- import inference.infer_tool as infer_tool
6
- import utils
7
- from inference.infer_tool import Svc
8
- import logging
9
- import webbrowser
10
- import argparse
11
- import asyncio
12
- import librosa
13
- import soundfile
14
- import gradio.processing_utils as gr_processing_utils
15
- logging.getLogger('numba').setLevel(logging.WARNING)
16
- logging.getLogger('markdown_it').setLevel(logging.WARNING)
17
- logging.getLogger('urllib3').setLevel(logging.WARNING)
18
- logging.getLogger('matplotlib').setLevel(logging.WARNING)
19
-
20
- limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
21
-
22
- audio_postprocess_ori = gr.Audio.postprocess
23
-
24
- def audio_postprocess(self, y):
25
- data = audio_postprocess_ori(self, y)
26
- if data is None:
27
- return None
28
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
29
-
30
-
31
- gr.Audio.postprocess = audio_postprocess
32
- def create_vc_fn(model, sid):
33
- def vc_fn(input_audio, vc_transform, auto_f0, slice_db, noise_scale, pad_seconds, tts_text, tts_voice, tts_mode):
34
- if tts_mode:
35
- if len(tts_text) > 100 and limitation:
36
- return "Text is too long", None
37
- if tts_text is None or tts_voice is None:
38
- return "You need to enter text and select a voice", None
39
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
40
- audio, sr = librosa.load("tts.mp3")
41
- soundfile.write("tts.wav", audio, 24000, format="wav")
42
- wav_path = "tts.wav"
43
- else:
44
- if input_audio is None:
45
- return "You need to select an audio", None
46
- raw_audio_path = f"raw/{input_audio}"
47
- if "." not in raw_audio_path:
48
- raw_audio_path += ".wav"
49
- infer_tool.format_wav(raw_audio_path)
50
- wav_path = Path(raw_audio_path).with_suffix('.wav')
51
- _audio = model.slice_inference(
52
- wav_path, sid, vc_transform, slice_db,
53
- cluster_infer_ratio=0,
54
- auto_predict_f0=auto_f0,
55
- noice_scale=noise_scale,
56
- pad_seconds=pad_seconds)
57
- model.clear_empty()
58
- return "Success", (44100, _audio)
59
- return vc_fn
60
-
61
- def refresh_raw_wav():
62
- return gr.Dropdown.update(choices=os.listdir("raw"))
63
-
64
- def change_to_tts_mode(tts_mode):
65
- if tts_mode:
66
- return gr.Audio.update(visible=False), gr.Button.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
67
- else:
68
- return gr.Audio.update(visible=True), gr.Button.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
69
-
70
- if __name__ == '__main__':
71
- parser = argparse.ArgumentParser()
72
- parser.add_argument('--device', type=str, default='cpu')
73
- parser.add_argument('--api', action="store_true", default=False)
74
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
75
- parser.add_argument("--colab", action="store_true", default=False, help="share gradio app")
76
- args = parser.parse_args()
77
- hubert_model = utils.get_hubert_model().to(args.device)
78
- models = []
79
- voices = []
80
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
81
- for r in tts_voice_list:
82
- voices.append(f"{r['ShortName']}-{r['Gender']}")
83
- raw = os.listdir("raw")
84
- for f in os.listdir("models"):
85
- name = f
86
- model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config.json", device=args.device)
87
- cover = f"models/{f}/cover.png" if os.path.exists(f"models/{f}/cover.png") else None
88
- models.append((name, cover, create_vc_fn(model, name)))
89
- with gr.Blocks() as app:
90
- gr.Markdown(
91
- "# <center> Sovits Models\n"
92
- "## <center> The input audio should be clean and pure voice without background music.\n"
93
- "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=sayashi.Sovits-Umamusume)\n\n"
94
- "[Open In Colab](https://colab.research.google.com/drive/1wfsBbMzmtLflOJeqc5ZnJiLY7L239hJW?usp=share_link)"
95
- " without queue and length limitation.\n\n"
96
- "[Original Repo](https://github.com/svc-develop-team/so-vits-svc)\n\n"
97
- "Other models:\n"
98
- "[rudolf](https://huggingface.co/spaces/sayashi/sovits-rudolf)\n"
99
- "[teio](https://huggingface.co/spaces/sayashi/sovits-teio)\n"
100
- "[goldship](https://huggingface.co/spaces/sayashi/sovits-goldship)\n"
101
- "[tannhauser](https://huggingface.co/spaces/sayashi/sovits-tannhauser)\n"
102
-
103
- )
104
- with gr.Tabs():
105
- for (name, cover, vc_fn) in models:
106
- with gr.TabItem(name):
107
- with gr.Row():
108
- gr.Markdown(
109
- '<div align="center">'
110
- f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else ""
111
- '</div>'
112
- )
113
- with gr.Row():
114
- with gr.Column():
115
- with gr.Row():
116
- vc_input = gr.Dropdown(label="Input audio", choices=raw)
117
- vc_refresh = gr.Button("🔁", variant="primary")
118
- vc_transform = gr.Number(label="vc_transform", value=0)
119
- slice_db = gr.Number(label="slice_db", value=-40)
120
- noise_scale = gr.Number(label="noise_scale", value=0.4)
121
- pad_seconds = gr.Number(label="pad_seconds", value=0.5)
122
- auto_f0 = gr.Checkbox(label="auto_f0", value=False)
123
- tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
124
- tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text")
125
- tts_voice = gr.Dropdown(choices=voices, visible=False)
126
- vc_submit = gr.Button("Generate", variant="primary")
127
- with gr.Column():
128
- vc_output1 = gr.Textbox(label="Output Message")
129
- vc_output2 = gr.Audio(label="Output Audio")
130
- vc_submit.click(vc_fn, [vc_input, vc_transform, auto_f0, slice_db, noise_scale, pad_seconds, tts_text, tts_voice, tts_mode], [vc_output1, vc_output2])
131
- vc_refresh.click(refresh_raw_wav, [], [vc_input])
132
- tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, vc_refresh, tts_text, tts_voice])
133
- if args.colab:
134
- webbrowser.open("http://127.0.0.1:7860")
135
- app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share)
spaces/AlexWang/lama/saicinpainting/training/losses/style_loss.py DELETED
@@ -1,155 +0,0 @@
- import torch
- import torch.nn as nn
- import torchvision.models as models
-
-
- class PerceptualLoss(nn.Module):
- r"""
- Perceptual loss, VGG-based
- https://arxiv.org/abs/1603.08155
- https://github.com/dxyang/StyleTransfer/blob/master/utils.py
- """
-
- def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]):
- super(PerceptualLoss, self).__init__()
- self.add_module('vgg', VGG19())
- self.criterion = torch.nn.L1Loss()
- self.weights = weights
-
- def __call__(self, x, y):
- # Compute features
- x_vgg, y_vgg = self.vgg(x), self.vgg(y)
-
- content_loss = 0.0
- content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1'])
- content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1'])
- content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1'])
- content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1'])
- content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1'])
-
-
- return content_loss
-
-
- class VGG19(torch.nn.Module):
- def __init__(self):
- super(VGG19, self).__init__()
- features = models.vgg19(pretrained=True).features
- self.relu1_1 = torch.nn.Sequential()
- self.relu1_2 = torch.nn.Sequential()
-
- self.relu2_1 = torch.nn.Sequential()
- self.relu2_2 = torch.nn.Sequential()
-
- self.relu3_1 = torch.nn.Sequential()
- self.relu3_2 = torch.nn.Sequential()
- self.relu3_3 = torch.nn.Sequential()
- self.relu3_4 = torch.nn.Sequential()
-
- self.relu4_1 = torch.nn.Sequential()
- self.relu4_2 = torch.nn.Sequential()
- self.relu4_3 = torch.nn.Sequential()
- self.relu4_4 = torch.nn.Sequential()
-
- self.relu5_1 = torch.nn.Sequential()
- self.relu5_2 = torch.nn.Sequential()
- self.relu5_3 = torch.nn.Sequential()
- self.relu5_4 = torch.nn.Sequential()
-
- for x in range(2):
- self.relu1_1.add_module(str(x), features[x])
-
- for x in range(2, 4):
- self.relu1_2.add_module(str(x), features[x])
-
- for x in range(4, 7):
- self.relu2_1.add_module(str(x), features[x])
-
- for x in range(7, 9):
- self.relu2_2.add_module(str(x), features[x])
-
- for x in range(9, 12):
- self.relu3_1.add_module(str(x), features[x])
-
- for x in range(12, 14):
- self.relu3_2.add_module(str(x), features[x])
-
- for x in range(14, 16):
- self.relu3_2.add_module(str(x), features[x])
-
- for x in range(16, 18):
- self.relu3_4.add_module(str(x), features[x])
-
- for x in range(18, 21):
- self.relu4_1.add_module(str(x), features[x])
-
- for x in range(21, 23):
- self.relu4_2.add_module(str(x), features[x])
-
- for x in range(23, 25):
- self.relu4_3.add_module(str(x), features[x])
-
- for x in range(25, 27):
- self.relu4_4.add_module(str(x), features[x])
-
- for x in range(27, 30):
- self.relu5_1.add_module(str(x), features[x])
-
- for x in range(30, 32):
- self.relu5_2.add_module(str(x), features[x])
-
- for x in range(32, 34):
- self.relu5_3.add_module(str(x), features[x])
-
- for x in range(34, 36):
- self.relu5_4.add_module(str(x), features[x])
-
- # don't need the gradients, just want the features
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- relu1_1 = self.relu1_1(x)
- relu1_2 = self.relu1_2(relu1_1)
-
- relu2_1 = self.relu2_1(relu1_2)
- relu2_2 = self.relu2_2(relu2_1)
-
- relu3_1 = self.relu3_1(relu2_2)
- relu3_2 = self.relu3_2(relu3_1)
- relu3_3 = self.relu3_3(relu3_2)
- relu3_4 = self.relu3_4(relu3_3)
-
- relu4_1 = self.relu4_1(relu3_4)
- relu4_2 = self.relu4_2(relu4_1)
- relu4_3 = self.relu4_3(relu4_2)
- relu4_4 = self.relu4_4(relu4_3)
-
- relu5_1 = self.relu5_1(relu4_4)
- relu5_2 = self.relu5_2(relu5_1)
- relu5_3 = self.relu5_3(relu5_2)
- relu5_4 = self.relu5_4(relu5_3)
-
- out = {
- 'relu1_1': relu1_1,
- 'relu1_2': relu1_2,
-
- 'relu2_1': relu2_1,
- 'relu2_2': relu2_2,
-
- 'relu3_1': relu3_1,
- 'relu3_2': relu3_2,
- 'relu3_3': relu3_3,
- 'relu3_4': relu3_4,
-
- 'relu4_1': relu4_1,
- 'relu4_2': relu4_2,
- 'relu4_3': relu4_3,
- 'relu4_4': relu4_4,
-
- 'relu5_1': relu5_1,
- 'relu5_2': relu5_2,
- 'relu5_3': relu5_3,
- 'relu5_4': relu5_4,
- }
- return out
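
Note: a minimal sketch (not from the deleted file) of the VGG-based loss above, assuming torchvision can fetch the pretrained VGG19 weights; the tensors are random stand-ins for RGB image batches:

    import torch

    criterion = PerceptualLoss()
    pred, target = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
    loss = criterion(pred, target)   # weighted L1 distance over five VGG19 feature maps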
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/score_sde_ve.md DELETED
@@ -1,20 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # Variance Exploding Stochastic Differential Equation (VE-SDE) scheduler
-
- ## Overview
-
- Original paper can be found [here](https://arxiv.org/abs/2011.13456).
-
- ## ScoreSdeVeScheduler
- [[autodoc]] ScoreSdeVeScheduler
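
Note: a minimal instantiation sketch (not from the deleted page) for the scheduler documented above; the defaults follow the diffusers API around this release and may differ in later versions:

    from diffusers import ScoreSdeVeScheduler

    scheduler = ScoreSdeVeScheduler()                     # VE-SDE noise schedule with library defaults
    scheduler.set_timesteps(num_inference_steps=2000)
    scheduler.set_sigmas(num_inference_steps=2000)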
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/speech_to_image_diffusion.py DELETED
@@ -1,261 +0,0 @@
1
- import inspect
2
- from typing import Callable, List, Optional, Union
3
-
4
- import torch
5
- from transformers import (
6
- CLIPImageProcessor,
7
- CLIPTextModel,
8
- CLIPTokenizer,
9
- WhisperForConditionalGeneration,
10
- WhisperProcessor,
11
- )
12
-
13
- from diffusers import (
14
- AutoencoderKL,
15
- DDIMScheduler,
16
- DiffusionPipeline,
17
- LMSDiscreteScheduler,
18
- PNDMScheduler,
19
- UNet2DConditionModel,
20
- )
21
- from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
22
- from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
23
- from diffusers.utils import logging
24
-
25
-
26
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
27
-
28
-
29
- class SpeechToImagePipeline(DiffusionPipeline):
30
- def __init__(
31
- self,
32
- speech_model: WhisperForConditionalGeneration,
33
- speech_processor: WhisperProcessor,
34
- vae: AutoencoderKL,
35
- text_encoder: CLIPTextModel,
36
- tokenizer: CLIPTokenizer,
37
- unet: UNet2DConditionModel,
38
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
39
- safety_checker: StableDiffusionSafetyChecker,
40
- feature_extractor: CLIPImageProcessor,
41
- ):
42
- super().__init__()
43
-
44
- if safety_checker is None:
45
- logger.warning(
46
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
47
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
48
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
49
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
50
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
51
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
52
- )
53
-
54
- self.register_modules(
55
- speech_model=speech_model,
56
- speech_processor=speech_processor,
57
- vae=vae,
58
- text_encoder=text_encoder,
59
- tokenizer=tokenizer,
60
- unet=unet,
61
- scheduler=scheduler,
62
- feature_extractor=feature_extractor,
63
- )
64
-
65
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
- if slice_size == "auto":
67
- slice_size = self.unet.config.attention_head_dim // 2
68
- self.unet.set_attention_slice(slice_size)
69
-
70
- def disable_attention_slicing(self):
71
- self.enable_attention_slicing(None)
72
-
73
- @torch.no_grad()
74
- def __call__(
75
- self,
76
- audio,
77
- sampling_rate=16_000,
78
- height: int = 512,
79
- width: int = 512,
80
- num_inference_steps: int = 50,
81
- guidance_scale: float = 7.5,
82
- negative_prompt: Optional[Union[str, List[str]]] = None,
83
- num_images_per_prompt: Optional[int] = 1,
84
- eta: float = 0.0,
85
- generator: Optional[torch.Generator] = None,
86
- latents: Optional[torch.FloatTensor] = None,
87
- output_type: Optional[str] = "pil",
88
- return_dict: bool = True,
89
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
90
- callback_steps: int = 1,
91
- **kwargs,
92
- ):
93
- inputs = self.speech_processor.feature_extractor(
94
- audio, return_tensors="pt", sampling_rate=sampling_rate
95
- ).input_features.to(self.device)
96
- predicted_ids = self.speech_model.generate(inputs, max_length=480_000)
97
-
98
- prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[
99
- 0
100
- ]
101
-
102
- if isinstance(prompt, str):
103
- batch_size = 1
104
- elif isinstance(prompt, list):
105
- batch_size = len(prompt)
106
- else:
107
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
108
-
109
- if height % 8 != 0 or width % 8 != 0:
110
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
111
-
112
- if (callback_steps is None) or (
113
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
114
- ):
115
- raise ValueError(
116
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
117
- f" {type(callback_steps)}."
118
- )
119
-
120
- # get prompt text embeddings
121
- text_inputs = self.tokenizer(
122
- prompt,
123
- padding="max_length",
124
- max_length=self.tokenizer.model_max_length,
125
- return_tensors="pt",
126
- )
127
- text_input_ids = text_inputs.input_ids
128
-
129
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
130
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
131
- logger.warning(
132
- "The following part of your input was truncated because CLIP can only handle sequences up to"
133
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
134
- )
135
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
136
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
137
-
138
- # duplicate text embeddings for each generation per prompt, using mps friendly method
139
- bs_embed, seq_len, _ = text_embeddings.shape
140
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
141
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
142
-
143
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
144
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
145
- # corresponds to doing no classifier free guidance.
146
- do_classifier_free_guidance = guidance_scale > 1.0
147
- # get unconditional embeddings for classifier free guidance
148
- if do_classifier_free_guidance:
149
- uncond_tokens: List[str]
150
- if negative_prompt is None:
151
- uncond_tokens = [""] * batch_size
152
- elif type(prompt) is not type(negative_prompt):
153
- raise TypeError(
154
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
155
- f" {type(prompt)}."
156
- )
157
- elif isinstance(negative_prompt, str):
158
- uncond_tokens = [negative_prompt]
159
- elif batch_size != len(negative_prompt):
160
- raise ValueError(
161
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
162
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
163
- " the batch size of `prompt`."
164
- )
165
- else:
166
- uncond_tokens = negative_prompt
167
-
168
- max_length = text_input_ids.shape[-1]
169
- uncond_input = self.tokenizer(
170
- uncond_tokens,
171
- padding="max_length",
172
- max_length=max_length,
173
- truncation=True,
174
- return_tensors="pt",
175
- )
176
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
177
-
178
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
179
- seq_len = uncond_embeddings.shape[1]
180
- uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
181
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
182
-
183
- # For classifier free guidance, we need to do two forward passes.
184
- # Here we concatenate the unconditional and text embeddings into a single batch
185
- # to avoid doing two forward passes
186
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
187
-
188
- # get the initial random noise unless the user supplied it
189
-
190
- # Unlike in other pipelines, latents need to be generated in the target device
191
- # for 1-to-1 results reproducibility with the CompVis implementation.
192
- # However this currently doesn't work in `mps`.
193
- latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
194
- latents_dtype = text_embeddings.dtype
195
- if latents is None:
196
- if self.device.type == "mps":
197
- # randn does not exist on mps
198
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
199
- self.device
200
- )
201
- else:
202
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
203
- else:
204
- if latents.shape != latents_shape:
205
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
206
- latents = latents.to(self.device)
207
-
208
- # set timesteps
209
- self.scheduler.set_timesteps(num_inference_steps)
210
-
211
- # Some schedulers like PNDM have timesteps as arrays
212
- # It's more optimized to move all timesteps to correct device beforehand
213
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
214
-
215
- # scale the initial noise by the standard deviation required by the scheduler
216
- latents = latents * self.scheduler.init_noise_sigma
217
-
218
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
219
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
220
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
221
- # and should be between [0, 1]
222
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
223
- extra_step_kwargs = {}
224
- if accepts_eta:
225
- extra_step_kwargs["eta"] = eta
226
-
227
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
228
- # expand the latents if we are doing classifier free guidance
229
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
230
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
231
-
232
- # predict the noise residual
233
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
234
-
235
- # perform guidance
236
- if do_classifier_free_guidance:
237
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
238
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
239
-
240
- # compute the previous noisy sample x_t -> x_t-1
241
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
242
-
243
- # call the callback, if provided
244
- if callback is not None and i % callback_steps == 0:
245
- callback(i, t, latents)
246
-
247
- latents = 1 / 0.18215 * latents
248
- image = self.vae.decode(latents).sample
249
-
250
- image = (image / 2 + 0.5).clamp(0, 1)
251
-
252
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
253
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
254
-
255
- if output_type == "pil":
256
- image = self.numpy_to_pil(image)
257
-
258
- if not return_dict:
259
- return image
260
-
261
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py DELETED
@@ -1,1008 +0,0 @@
1
- #!/usr/bin/env python
2
- # coding=utf-8
3
- # Copyright 2023 The HuggingFace Inc. team. All rights reserved.
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
-
17
- """Script to fine-tune Stable Diffusion for InstructPix2Pix."""
18
-
19
- import argparse
20
- import logging
21
- import math
22
- import os
23
- import shutil
24
- from pathlib import Path
25
-
26
- import accelerate
27
- import datasets
28
- import numpy as np
29
- import PIL
30
- import requests
31
- import torch
32
- import torch.nn as nn
33
- import torch.nn.functional as F
34
- import torch.utils.checkpoint
35
- import transformers
36
- from accelerate import Accelerator
37
- from accelerate.logging import get_logger
38
- from accelerate.utils import ProjectConfiguration, set_seed
39
- from datasets import load_dataset
40
- from huggingface_hub import create_repo, upload_folder
41
- from packaging import version
42
- from torchvision import transforms
43
- from tqdm.auto import tqdm
44
- from transformers import CLIPTextModel, CLIPTokenizer
45
-
46
- import diffusers
47
- from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionInstructPix2PixPipeline, UNet2DConditionModel
48
- from diffusers.optimization import get_scheduler
49
- from diffusers.training_utils import EMAModel
50
- from diffusers.utils import check_min_version, deprecate, is_wandb_available
51
- from diffusers.utils.import_utils import is_xformers_available
52
-
53
-
54
- # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
55
- check_min_version("0.19.0")
56
-
57
- logger = get_logger(__name__, log_level="INFO")
58
-
59
- DATASET_NAME_MAPPING = {
60
- "fusing/instructpix2pix-1000-samples": ("input_image", "edit_prompt", "edited_image"),
61
- }
62
- WANDB_TABLE_COL_NAMES = ["original_image", "edited_image", "edit_prompt"]
63
-
64
-
65
- def parse_args():
66
- parser = argparse.ArgumentParser(description="Simple example of a training script for InstructPix2Pix.")
67
- parser.add_argument(
68
- "--pretrained_model_name_or_path",
69
- type=str,
70
- default=None,
71
- required=True,
72
- help="Path to pretrained model or model identifier from huggingface.co/models.",
73
- )
74
- parser.add_argument(
75
- "--revision",
76
- type=str,
77
- default=None,
78
- required=False,
79
- help="Revision of pretrained model identifier from huggingface.co/models.",
80
- )
81
- parser.add_argument(
82
- "--dataset_name",
83
- type=str,
84
- default=None,
85
- help=(
86
- "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
87
- " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
88
- " or to a folder containing files that 🤗 Datasets can understand."
89
- ),
90
- )
91
- parser.add_argument(
92
- "--dataset_config_name",
93
- type=str,
94
- default=None,
95
- help="The config of the Dataset, leave as None if there's only one config.",
96
- )
97
- parser.add_argument(
98
- "--train_data_dir",
99
- type=str,
100
- default=None,
101
- help=(
102
- "A folder containing the training data. Folder contents must follow the structure described in"
103
- " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
104
- " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
105
- ),
106
- )
107
- parser.add_argument(
108
- "--original_image_column",
109
- type=str,
110
- default="input_image",
111
- help="The column of the dataset containing the original image on which edits where made.",
112
- )
113
- parser.add_argument(
114
- "--edited_image_column",
115
- type=str,
116
- default="edited_image",
117
- help="The column of the dataset containing the edited image.",
118
- )
119
- parser.add_argument(
120
- "--edit_prompt_column",
121
- type=str,
122
- default="edit_prompt",
123
- help="The column of the dataset containing the edit instruction.",
124
- )
125
- parser.add_argument(
126
- "--val_image_url",
127
- type=str,
128
- default=None,
129
- help="URL to the original image that you would like to edit (used during inference for debugging purposes).",
130
- )
131
- parser.add_argument(
132
- "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference."
133
- )
134
- parser.add_argument(
135
- "--num_validation_images",
136
- type=int,
137
- default=4,
138
- help="Number of images that should be generated during validation with `validation_prompt`.",
139
- )
140
- parser.add_argument(
141
- "--validation_epochs",
142
- type=int,
143
- default=1,
144
- help=(
145
- "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
146
- " `args.validation_prompt` multiple times: `args.num_validation_images`."
147
- ),
148
- )
149
- parser.add_argument(
150
- "--max_train_samples",
151
- type=int,
152
- default=None,
153
- help=(
154
- "For debugging purposes or quicker training, truncate the number of training examples to this "
155
- "value if set."
156
- ),
157
- )
158
- parser.add_argument(
159
- "--output_dir",
160
- type=str,
161
- default="instruct-pix2pix-model",
162
- help="The output directory where the model predictions and checkpoints will be written.",
163
- )
164
- parser.add_argument(
165
- "--cache_dir",
166
- type=str,
167
- default=None,
168
- help="The directory where the downloaded models and datasets will be stored.",
169
- )
170
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
171
- parser.add_argument(
172
- "--resolution",
173
- type=int,
174
- default=256,
175
- help=(
176
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
177
- " resolution"
178
- ),
179
- )
180
- parser.add_argument(
181
- "--center_crop",
182
- default=False,
183
- action="store_true",
184
- help=(
185
- "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
186
- " cropped. The images will be resized to the resolution first before cropping."
187
- ),
188
- )
189
- parser.add_argument(
190
- "--random_flip",
191
- action="store_true",
192
- help="whether to randomly flip images horizontally",
193
- )
194
- parser.add_argument(
195
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
196
- )
197
- parser.add_argument("--num_train_epochs", type=int, default=100)
198
- parser.add_argument(
199
- "--max_train_steps",
200
- type=int,
201
- default=None,
202
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
203
- )
204
- parser.add_argument(
205
- "--gradient_accumulation_steps",
206
- type=int,
207
- default=1,
208
- help="Number of update steps to accumulate before performing a backward/update pass.",
209
- )
210
- parser.add_argument(
211
- "--gradient_checkpointing",
212
- action="store_true",
213
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
214
- )
215
- parser.add_argument(
216
- "--learning_rate",
217
- type=float,
218
- default=1e-4,
219
- help="Initial learning rate (after the potential warmup period) to use.",
220
- )
221
- parser.add_argument(
222
- "--scale_lr",
223
- action="store_true",
224
- default=False,
225
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
226
- )
227
- parser.add_argument(
228
- "--lr_scheduler",
229
- type=str,
230
- default="constant",
231
- help=(
232
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
233
- ' "constant", "constant_with_warmup"]'
234
- ),
235
- )
236
- parser.add_argument(
237
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
238
- )
239
- parser.add_argument(
240
- "--conditioning_dropout_prob",
241
- type=float,
242
- default=None,
243
- help="Conditioning dropout probability. Drops out the conditionings (image and edit prompt) used in training InstructPix2Pix. See section 3.2.1 in the paper: https://arxiv.org/abs/2211.09800.",
244
- )
245
- parser.add_argument(
246
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
247
- )
248
- parser.add_argument(
249
- "--allow_tf32",
250
- action="store_true",
251
- help=(
252
- "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
253
- " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
254
- ),
255
- )
256
- parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.")
257
- parser.add_argument(
258
- "--non_ema_revision",
259
- type=str,
260
- default=None,
261
- required=False,
262
- help=(
263
- "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or"
264
- " remote repository specified with --pretrained_model_name_or_path."
265
- ),
266
- )
267
- parser.add_argument(
268
- "--dataloader_num_workers",
269
- type=int,
270
- default=0,
271
- help=(
272
- "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
273
- ),
274
- )
275
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
276
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
277
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
278
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
279
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
280
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
281
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
282
- parser.add_argument(
283
- "--hub_model_id",
284
- type=str,
285
- default=None,
286
- help="The name of the repository to keep in sync with the local `output_dir`.",
287
- )
288
- parser.add_argument(
289
- "--logging_dir",
290
- type=str,
291
- default="logs",
292
- help=(
293
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
294
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
295
- ),
296
- )
297
- parser.add_argument(
298
- "--mixed_precision",
299
- type=str,
300
- default=None,
301
- choices=["no", "fp16", "bf16"],
302
- help=(
303
- "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
304
- " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the"
305
- " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
306
- ),
307
- )
308
- parser.add_argument(
309
- "--report_to",
310
- type=str,
311
- default="tensorboard",
312
- help=(
313
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
314
- ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
315
- ),
316
- )
317
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
318
- parser.add_argument(
319
- "--checkpointing_steps",
320
- type=int,
321
- default=500,
322
- help=(
323
- "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
324
- " training using `--resume_from_checkpoint`."
325
- ),
326
- )
327
- parser.add_argument(
328
- "--checkpoints_total_limit",
329
- type=int,
330
- default=None,
331
- help=("Max number of checkpoints to store."),
332
- )
333
- parser.add_argument(
334
- "--resume_from_checkpoint",
335
- type=str,
336
- default=None,
337
- help=(
338
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
339
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
340
- ),
341
- )
342
- parser.add_argument(
343
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
344
- )
345
-
346
- args = parser.parse_args()
347
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
348
- if env_local_rank != -1 and env_local_rank != args.local_rank:
349
- args.local_rank = env_local_rank
350
-
351
- # Sanity checks
352
- if args.dataset_name is None and args.train_data_dir is None:
353
- raise ValueError("Need either a dataset name or a training folder.")
354
-
355
- # default to using the same revision for the non-ema model if not specified
356
- if args.non_ema_revision is None:
357
- args.non_ema_revision = args.revision
358
-
359
- return args
360
-
361
-
362
- def convert_to_np(image, resolution):
363
- image = image.convert("RGB").resize((resolution, resolution))
364
- return np.array(image).transpose(2, 0, 1)
365
-
366
-
367
- def download_image(url):
368
- image = PIL.Image.open(requests.get(url, stream=True).raw)
369
- image = PIL.ImageOps.exif_transpose(image)
370
- image = image.convert("RGB")
371
- return image
372
-
373
-
374
- def main():
375
- args = parse_args()
376
-
377
- if args.non_ema_revision is not None:
378
- deprecate(
379
- "non_ema_revision!=None",
380
- "0.15.0",
381
- message=(
382
- "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to"
383
- " use `--variant=non_ema` instead."
384
- ),
385
- )
386
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
387
- accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
388
- accelerator = Accelerator(
389
- gradient_accumulation_steps=args.gradient_accumulation_steps,
390
- mixed_precision=args.mixed_precision,
391
- log_with=args.report_to,
392
- project_config=accelerator_project_config,
393
- )
394
-
395
- generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
396
-
397
- if args.report_to == "wandb":
398
- if not is_wandb_available():
399
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
400
- import wandb
401
-
402
- # Make one log on every process with the configuration for debugging.
403
- logging.basicConfig(
404
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
405
- datefmt="%m/%d/%Y %H:%M:%S",
406
- level=logging.INFO,
407
- )
408
- logger.info(accelerator.state, main_process_only=False)
409
- if accelerator.is_local_main_process:
410
- datasets.utils.logging.set_verbosity_warning()
411
- transformers.utils.logging.set_verbosity_warning()
412
- diffusers.utils.logging.set_verbosity_info()
413
- else:
414
- datasets.utils.logging.set_verbosity_error()
415
- transformers.utils.logging.set_verbosity_error()
416
- diffusers.utils.logging.set_verbosity_error()
417
-
418
- # If passed along, set the training seed now.
419
- if args.seed is not None:
420
- set_seed(args.seed)
421
-
422
- # Handle the repository creation
423
- if accelerator.is_main_process:
424
- if args.output_dir is not None:
425
- os.makedirs(args.output_dir, exist_ok=True)
426
-
427
- if args.push_to_hub:
428
- repo_id = create_repo(
429
- repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
430
- ).repo_id
431
-
432
- # Load scheduler, tokenizer and models.
433
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
434
- tokenizer = CLIPTokenizer.from_pretrained(
435
- args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
436
- )
437
- text_encoder = CLIPTextModel.from_pretrained(
438
- args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
439
- )
440
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
441
- unet = UNet2DConditionModel.from_pretrained(
442
- args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision
443
- )
444
-
445
- # InstructPix2Pix uses an additional image for conditioning. To accommodate that,
446
- # it uses 8 channels (instead of 4) in the first (conv) layer of the UNet. This UNet is
447
- # then fine-tuned on the custom InstructPix2Pix dataset. This modified UNet is initialized
448
- # from the pre-trained checkpoints. The extra channels added to the first layer are
449
- # initialized to zero.
450
- logger.info("Initializing the InstructPix2Pix UNet from the pretrained UNet.")
451
- in_channels = 8
452
- out_channels = unet.conv_in.out_channels
453
- unet.register_to_config(in_channels=in_channels)
454
-
455
- with torch.no_grad():
456
- new_conv_in = nn.Conv2d(
457
- in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
458
- )
459
- new_conv_in.weight.zero_()
460
- new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
461
- unet.conv_in = new_conv_in
462
-
463
- # Freeze vae and text_encoder
464
- vae.requires_grad_(False)
465
- text_encoder.requires_grad_(False)
466
-
467
- # Create EMA for the unet.
468
- if args.use_ema:
469
- ema_unet = EMAModel(unet.parameters(), model_cls=UNet2DConditionModel, model_config=unet.config)
470
-
471
- if args.enable_xformers_memory_efficient_attention:
472
- if is_xformers_available():
473
- import xformers
474
-
475
- xformers_version = version.parse(xformers.__version__)
476
- if xformers_version == version.parse("0.0.16"):
477
- logger.warn(
478
- "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
479
- )
480
- unet.enable_xformers_memory_efficient_attention()
481
- else:
482
- raise ValueError("xformers is not available. Make sure it is installed correctly")
483
-
484
- # `accelerate` 0.16.0 will have better support for customized saving
485
- if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
486
- # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
487
- def save_model_hook(models, weights, output_dir):
488
- if args.use_ema:
489
- ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema"))
490
-
491
- for i, model in enumerate(models):
492
- model.save_pretrained(os.path.join(output_dir, "unet"))
493
-
494
- # make sure to pop the weight so that the corresponding model is not saved again
495
- weights.pop()
496
-
497
- def load_model_hook(models, input_dir):
498
- if args.use_ema:
499
- load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel)
500
- ema_unet.load_state_dict(load_model.state_dict())
501
- ema_unet.to(accelerator.device)
502
- del load_model
503
-
504
- for i in range(len(models)):
505
- # pop models so that they are not loaded again
506
- model = models.pop()
507
-
508
- # load diffusers style into model
509
- load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
510
- model.register_to_config(**load_model.config)
511
-
512
- model.load_state_dict(load_model.state_dict())
513
- del load_model
514
-
515
- accelerator.register_save_state_pre_hook(save_model_hook)
516
- accelerator.register_load_state_pre_hook(load_model_hook)
517
-
518
- if args.gradient_checkpointing:
519
- unet.enable_gradient_checkpointing()
520
-
521
- # Enable TF32 for faster training on Ampere GPUs,
522
- # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
523
- if args.allow_tf32:
524
- torch.backends.cuda.matmul.allow_tf32 = True
525
-
526
- if args.scale_lr:
527
- args.learning_rate = (
528
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
529
- )
530
-
531
- # Initialize the optimizer
532
- if args.use_8bit_adam:
533
- try:
534
- import bitsandbytes as bnb
535
- except ImportError:
536
- raise ImportError(
537
- "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
538
- )
539
-
540
- optimizer_cls = bnb.optim.AdamW8bit
541
- else:
542
- optimizer_cls = torch.optim.AdamW
543
-
544
- optimizer = optimizer_cls(
545
- unet.parameters(),
546
- lr=args.learning_rate,
547
- betas=(args.adam_beta1, args.adam_beta2),
548
- weight_decay=args.adam_weight_decay,
549
- eps=args.adam_epsilon,
550
- )
551
-
552
- # Get the datasets: you can either provide your own training and evaluation files (see below)
553
- # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
554
-
555
- # In distributed training, the load_dataset function guarantees that only one local process can concurrently
556
- # download the dataset.
557
- if args.dataset_name is not None:
558
- # Downloading and loading a dataset from the hub.
559
- dataset = load_dataset(
560
- args.dataset_name,
561
- args.dataset_config_name,
562
- cache_dir=args.cache_dir,
563
- )
564
- else:
565
- data_files = {}
566
- if args.train_data_dir is not None:
567
- data_files["train"] = os.path.join(args.train_data_dir, "**")
568
- dataset = load_dataset(
569
- "imagefolder",
570
- data_files=data_files,
571
- cache_dir=args.cache_dir,
572
- )
573
- # See more about loading custom images at
574
- # https://huggingface.co/docs/datasets/main/en/image_load#imagefolder
575
-
576
- # Preprocessing the datasets.
577
- # We need to tokenize inputs and targets.
578
- column_names = dataset["train"].column_names
579
-
580
- # 6. Get the column names for input/target.
581
- dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
582
- if args.original_image_column is None:
583
- original_image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
584
- else:
585
- original_image_column = args.original_image_column
586
- if original_image_column not in column_names:
587
- raise ValueError(
588
- f"'--original_image_column' value '{args.original_image_column}' needs to be one of: {', '.join(column_names)}"
589
- )
590
- if args.edit_prompt_column is None:
591
- edit_prompt_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
592
- else:
593
- edit_prompt_column = args.edit_prompt_column
594
- if edit_prompt_column not in column_names:
595
- raise ValueError(
596
- f"'--edit_prompt_column' value '{args.edit_prompt_column}' needs to be one of: {', '.join(column_names)}"
597
- )
598
- if args.edited_image_column is None:
599
- edited_image_column = dataset_columns[2] if dataset_columns is not None else column_names[2]
600
- else:
601
- edited_image_column = args.edited_image_column
602
- if edited_image_column not in column_names:
603
- raise ValueError(
604
- f"'--edited_image_column' value '{args.edited_image_column}' needs to be one of: {', '.join(column_names)}"
605
- )
606
-
607
- # Preprocessing the datasets.
608
- # We need to tokenize input captions and transform the images.
609
- def tokenize_captions(captions):
610
- inputs = tokenizer(
611
- captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
612
- )
613
- return inputs.input_ids
614
-
615
- # Preprocessing the datasets.
616
- train_transforms = transforms.Compose(
617
- [
618
- transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
619
- transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
620
- ]
621
- )
622
-
623
- def preprocess_images(examples):
624
- original_images = np.concatenate(
625
- [convert_to_np(image, args.resolution) for image in examples[original_image_column]]
626
- )
627
- edited_images = np.concatenate(
628
- [convert_to_np(image, args.resolution) for image in examples[edited_image_column]]
629
- )
630
- # We need to ensure that the original and the edited images undergo the same
631
- # augmentation transforms.
632
- images = np.concatenate([original_images, edited_images])
633
- images = torch.tensor(images)
634
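- # Scale pixel values from [0, 255] to [-1, 1] before applying the shared crop/flip transforms.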
- images = 2 * (images / 255) - 1
635
- return train_transforms(images)
636
-
637
- def preprocess_train(examples):
638
- # Preprocess images.
639
- preprocessed_images = preprocess_images(examples)
640
- # Since the original and edited images were concatenated before
641
- # applying the transformations, we need to separate them and reshape
642
- # them accordingly.
643
- original_images, edited_images = preprocessed_images.chunk(2)
644
- original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
645
- edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)
646
-
647
- # Collate the preprocessed images into the `examples`.
648
- examples["original_pixel_values"] = original_images
649
- examples["edited_pixel_values"] = edited_images
650
-
651
- # Preprocess the captions.
652
- captions = list(examples[edit_prompt_column])
653
- examples["input_ids"] = tokenize_captions(captions)
654
- return examples
655
-
656
- with accelerator.main_process_first():
657
- if args.max_train_samples is not None:
658
- dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
659
- # Set the training transforms
660
- train_dataset = dataset["train"].with_transform(preprocess_train)
661
-
662
- def collate_fn(examples):
663
- original_pixel_values = torch.stack([example["original_pixel_values"] for example in examples])
664
- original_pixel_values = original_pixel_values.to(memory_format=torch.contiguous_format).float()
665
- edited_pixel_values = torch.stack([example["edited_pixel_values"] for example in examples])
666
- edited_pixel_values = edited_pixel_values.to(memory_format=torch.contiguous_format).float()
667
- input_ids = torch.stack([example["input_ids"] for example in examples])
668
- return {
669
- "original_pixel_values": original_pixel_values,
670
- "edited_pixel_values": edited_pixel_values,
671
- "input_ids": input_ids,
672
- }
673
-
674
- # DataLoaders creation:
675
- train_dataloader = torch.utils.data.DataLoader(
676
- train_dataset,
677
- shuffle=True,
678
- collate_fn=collate_fn,
679
- batch_size=args.train_batch_size,
680
- num_workers=args.dataloader_num_workers,
681
- )
682
-
683
- # Scheduler and math around the number of training steps.
684
- overrode_max_train_steps = False
685
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
686
- if args.max_train_steps is None:
687
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
688
- overrode_max_train_steps = True
689
-
690
- lr_scheduler = get_scheduler(
691
- args.lr_scheduler,
692
- optimizer=optimizer,
693
- num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
694
- num_training_steps=args.max_train_steps * accelerator.num_processes,
695
- )
696
-
697
- # Prepare everything with our `accelerator`.
698
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
699
- unet, optimizer, train_dataloader, lr_scheduler
700
- )
701
-
702
- if args.use_ema:
703
- ema_unet.to(accelerator.device)
704
-
705
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
706
- # as these models are only used for inference, keeping weights in full precision is not required.
707
- weight_dtype = torch.float32
708
- if accelerator.mixed_precision == "fp16":
709
- weight_dtype = torch.float16
710
- elif accelerator.mixed_precision == "bf16":
711
- weight_dtype = torch.bfloat16
712
-
713
- # Move text_encoder and vae to gpu and cast to weight_dtype
714
- text_encoder.to(accelerator.device, dtype=weight_dtype)
715
- vae.to(accelerator.device, dtype=weight_dtype)
716
-
717
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
718
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
719
- if overrode_max_train_steps:
720
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
721
- # Afterwards we recalculate our number of training epochs
722
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
723
-
724
- # We need to initialize the trackers we use, and also store our configuration.
725
- # The trackers are initialized automatically on the main process.
726
- if accelerator.is_main_process:
727
- accelerator.init_trackers("instruct-pix2pix", config=vars(args))
728
-
729
- # Train!
730
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
731
-
732
- logger.info("***** Running training *****")
733
- logger.info(f" Num examples = {len(train_dataset)}")
734
- logger.info(f" Num Epochs = {args.num_train_epochs}")
735
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
736
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
737
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
738
- logger.info(f" Total optimization steps = {args.max_train_steps}")
739
- global_step = 0
740
- first_epoch = 0
741
-
742
- # Potentially load in the weights and states from a previous save
743
- if args.resume_from_checkpoint:
744
- if args.resume_from_checkpoint != "latest":
745
- path = os.path.basename(args.resume_from_checkpoint)
746
- else:
747
- # Get the most recent checkpoint
748
- dirs = os.listdir(args.output_dir)
749
- dirs = [d for d in dirs if d.startswith("checkpoint")]
750
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
751
- path = dirs[-1] if len(dirs) > 0 else None
752
-
753
- if path is None:
754
- accelerator.print(
755
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
756
- )
757
- args.resume_from_checkpoint = None
758
- else:
759
- accelerator.print(f"Resuming from checkpoint {path}")
760
- accelerator.load_state(os.path.join(args.output_dir, path))
761
- global_step = int(path.split("-")[1])
762
-
763
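- # Convert the saved optimizer-step count back into dataloader steps so that already-seen batches can be skipped below.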
- resume_global_step = global_step * args.gradient_accumulation_steps
764
- first_epoch = global_step // num_update_steps_per_epoch
765
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
766
-
767
- # Only show the progress bar once on each machine.
768
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
769
- progress_bar.set_description("Steps")
770
-
771
- for epoch in range(first_epoch, args.num_train_epochs):
772
- unet.train()
773
- train_loss = 0.0
774
- for step, batch in enumerate(train_dataloader):
775
- # Skip steps until we reach the resumed step
776
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
777
- if step % args.gradient_accumulation_steps == 0:
778
- progress_bar.update(1)
779
- continue
780
-
781
- with accelerator.accumulate(unet):
782
- # We want to learn the denoising process w.r.t the edited images which
783
- # are conditioned on the original image (which was edited) and the edit instruction.
784
- # So, first, convert images to latent space.
785
- latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample()
786
- latents = latents * vae.config.scaling_factor
787
-
788
- # Sample noise that we'll add to the latents
789
- noise = torch.randn_like(latents)
790
- bsz = latents.shape[0]
791
- # Sample a random timestep for each image
792
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
793
- timesteps = timesteps.long()
794
-
795
- # Add noise to the latents according to the noise magnitude at each timestep
796
- # (this is the forward diffusion process)
797
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
798
-
799
- # Get the text embedding for conditioning.
800
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
801
-
802
- # Get the additional image embedding for conditioning.
803
- # Instead of getting a diagonal Gaussian here, we simply take the mode.
804
- original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode()
805
-
806
- # Conditioning dropout to support classifier-free guidance during inference. For more details
807
- # check out the section 3.2.1 of the original paper https://arxiv.org/abs/2211.09800.
808
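- # With p = conditioning_dropout_prob, the masks below drop only the edit prompt with probability p,
- # only the original image with probability p, and both conditionings with probability p.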
- if args.conditioning_dropout_prob is not None:
809
- random_p = torch.rand(bsz, device=latents.device, generator=generator)
810
- # Sample masks for the edit prompts.
811
- prompt_mask = random_p < 2 * args.conditioning_dropout_prob
812
- prompt_mask = prompt_mask.reshape(bsz, 1, 1)
813
- # Final text conditioning.
814
- null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0]
815
- encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states)
816
-
817
- # Sample masks for the original images.
818
- image_mask_dtype = original_image_embeds.dtype
819
- image_mask = 1 - (
820
- (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
821
- * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
822
- )
823
- image_mask = image_mask.reshape(bsz, 1, 1, 1)
824
- # Final image conditioning.
825
- original_image_embeds = image_mask * original_image_embeds
826
-
827
- # Concatenate the `original_image_embeds` with the `noisy_latents`.
828
- concatenated_noisy_latents = torch.cat([noisy_latents, original_image_embeds], dim=1)
829
-
830
- # Get the target for loss depending on the prediction type
831
- if noise_scheduler.config.prediction_type == "epsilon":
832
- target = noise
833
- elif noise_scheduler.config.prediction_type == "v_prediction":
834
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
835
- else:
836
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
837
-
838
- # Predict the noise residual and compute loss
839
- model_pred = unet(concatenated_noisy_latents, timesteps, encoder_hidden_states).sample
840
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
841
-
842
- # Gather the losses across all processes for logging (if we use distributed training).
843
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
844
- train_loss += avg_loss.item() / args.gradient_accumulation_steps
845
-
846
- # Backpropagate
847
- accelerator.backward(loss)
848
- if accelerator.sync_gradients:
849
- accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm)
850
- optimizer.step()
851
- lr_scheduler.step()
852
- optimizer.zero_grad()
853
-
854
- # Checks if the accelerator has performed an optimization step behind the scenes
855
- if accelerator.sync_gradients:
856
- if args.use_ema:
857
- ema_unet.step(unet.parameters())
858
- progress_bar.update(1)
859
- global_step += 1
860
- accelerator.log({"train_loss": train_loss}, step=global_step)
861
- train_loss = 0.0
862
-
863
- if global_step % args.checkpointing_steps == 0:
864
- if accelerator.is_main_process:
865
- # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
866
- if args.checkpoints_total_limit is not None:
867
- checkpoints = os.listdir(args.output_dir)
868
- checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
869
- checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
870
-
871
- # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
872
- if len(checkpoints) >= args.checkpoints_total_limit:
873
- num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
874
- removing_checkpoints = checkpoints[0:num_to_remove]
875
-
876
- logger.info(
877
- f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
878
- )
879
- logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
880
-
881
- for removing_checkpoint in removing_checkpoints:
882
- removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
883
- shutil.rmtree(removing_checkpoint)
884
-
885
- save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
886
- accelerator.save_state(save_path)
887
- logger.info(f"Saved state to {save_path}")
888
-
889
- logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
890
- progress_bar.set_postfix(**logs)
891
-
892
- if global_step >= args.max_train_steps:
893
- break
894
-
895
- if accelerator.is_main_process:
896
- if (
897
- (args.val_image_url is not None)
898
- and (args.validation_prompt is not None)
899
- and (epoch % args.validation_epochs == 0)
900
- ):
901
- logger.info(
902
- f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
903
- f" {args.validation_prompt}."
904
- )
905
- # create pipeline
906
- if args.use_ema:
907
- # Store the UNet parameters temporarily and load the EMA parameters to perform inference.
908
- ema_unet.store(unet.parameters())
909
- ema_unet.copy_to(unet.parameters())
910
- # The models need unwrapping for compatibility in distributed training mode.
911
- pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
912
- args.pretrained_model_name_or_path,
913
- unet=accelerator.unwrap_model(unet),
914
- text_encoder=accelerator.unwrap_model(text_encoder),
915
- vae=accelerator.unwrap_model(vae),
916
- revision=args.revision,
917
- torch_dtype=weight_dtype,
918
- )
919
- pipeline = pipeline.to(accelerator.device)
920
- pipeline.set_progress_bar_config(disable=True)
921
-
922
- # run inference
923
- original_image = download_image(args.val_image_url)
924
- edited_images = []
925
- with torch.autocast(
926
- str(accelerator.device).replace(":0", ""), enabled=accelerator.mixed_precision == "fp16"
927
- ):
928
- for _ in range(args.num_validation_images):
929
- edited_images.append(
930
- pipeline(
931
- args.validation_prompt,
932
- image=original_image,
933
- num_inference_steps=20,
934
- image_guidance_scale=1.5,
935
- guidance_scale=7,
936
- generator=generator,
937
- ).images[0]
938
- )
939
-
940
- for tracker in accelerator.trackers:
941
- if tracker.name == "wandb":
942
- wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES)
943
- for edited_image in edited_images:
944
- wandb_table.add_data(
945
- wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt
946
- )
947
- tracker.log({"validation": wandb_table})
948
- if args.use_ema:
949
- # Switch back to the original UNet parameters.
950
- ema_unet.restore(unet.parameters())
951
-
952
- del pipeline
953
- torch.cuda.empty_cache()
954
-
955
- # Create the pipeline using the trained modules and save it.
956
- accelerator.wait_for_everyone()
957
- if accelerator.is_main_process:
958
- unet = accelerator.unwrap_model(unet)
959
- if args.use_ema:
960
- ema_unet.copy_to(unet.parameters())
961
-
962
- pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
963
- args.pretrained_model_name_or_path,
964
- text_encoder=accelerator.unwrap_model(text_encoder),
965
- vae=accelerator.unwrap_model(vae),
966
- unet=unet,
967
- revision=args.revision,
968
- )
969
- pipeline.save_pretrained(args.output_dir)
970
-
971
- if args.push_to_hub:
972
- upload_folder(
973
- repo_id=repo_id,
974
- folder_path=args.output_dir,
975
- commit_message="End of training",
976
- ignore_patterns=["step_*", "epoch_*"],
977
- )
978
-
979
- if args.validation_prompt is not None:
980
- edited_images = []
981
- pipeline = pipeline.to(accelerator.device)
982
- with torch.autocast(str(accelerator.device).replace(":0", "")):
983
- for _ in range(args.num_validation_images):
984
- edited_images.append(
985
- pipeline(
986
- args.validation_prompt,
987
- image=original_image,
988
- num_inference_steps=20,
989
- image_guidance_scale=1.5,
990
- guidance_scale=7,
991
- generator=generator,
992
- ).images[0]
993
- )
994
-
995
- for tracker in accelerator.trackers:
996
- if tracker.name == "wandb":
997
- wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES)
998
- for edited_image in edited_images:
999
- wandb_table.add_data(
1000
- wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt
1001
- )
1002
- tracker.log({"test": wandb_table})
1003
-
1004
- accelerator.end_training()
1005
-
1006
-
1007
- if __name__ == "__main__":
1008
- main()
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py DELETED
@@ -1,717 +0,0 @@
1
- # Copyright 2023 The HuggingFace Team. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- import inspect
16
- import warnings
17
- from typing import Any, Callable, Dict, List, Optional, Union
18
-
19
- import torch
20
- from packaging import version
21
- from transformers import CLIPImageProcessor, XLMRobertaTokenizer
22
-
23
- from diffusers.utils import is_accelerate_available, is_accelerate_version
24
-
25
- from ...configuration_utils import FrozenDict
26
- from ...image_processor import VaeImageProcessor
27
- from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
28
- from ...models import AutoencoderKL, UNet2DConditionModel
29
- from ...schedulers import KarrasDiffusionSchedulers
30
- from ...utils import deprecate, logging, randn_tensor, replace_example_docstring
31
- from ..pipeline_utils import DiffusionPipeline
32
- from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
33
- from . import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation
34
-
35
-
36
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
37
-
38
- EXAMPLE_DOC_STRING = """
39
- Examples:
40
- ```py
41
- >>> import torch
42
- >>> from diffusers import AltDiffusionPipeline
43
-
44
- >>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
45
- >>> pipe = pipe.to("cuda")
46
-
47
- >>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap"
48
- >>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图"
49
- >>> image = pipe(prompt).images[0]
50
- ```
51
- """
52
-
53
-
54
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
55
- def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
56
- """
57
- Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
58
- Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
59
- """
60
- std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
61
- std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
62
- # rescale the results from guidance (fixes overexposure)
63
- noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
64
- # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
65
- noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
66
- return noise_cfg
67
-
68
-
69
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline with Stable->Alt, CLIPTextModel->RobertaSeriesModelWithTransformation, CLIPTokenizer->XLMRobertaTokenizer, AltDiffusionSafetyChecker->StableDiffusionSafetyChecker
70
- class AltDiffusionPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
71
- r"""
72
- Pipeline for text-to-image generation using Alt Diffusion.
73
-
74
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
75
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
76
-
77
- The pipeline also inherits the following loading methods:
78
- - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings
79
- - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights
80
- - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights
81
- - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files
82
-
83
- Args:
84
- vae ([`AutoencoderKL`]):
85
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
86
- text_encoder ([`~transformers.RobertaSeriesModelWithTransformation`]):
87
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
88
- tokenizer ([`~transformers.XLMRobertaTokenizer`]):
89
- A `XLMRobertaTokenizer` to tokenize text.
90
- unet ([`UNet2DConditionModel`]):
91
- A `UNet2DConditionModel` to denoise the encoded image latents.
92
- scheduler ([`SchedulerMixin`]):
93
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
94
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
95
- safety_checker ([`StableDiffusionSafetyChecker`]):
96
- Classification module that estimates whether generated images could be considered offensive or harmful.
97
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
98
- about a model's potential harms.
99
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
100
- A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
101
- """
102
- _optional_components = ["safety_checker", "feature_extractor"]
103
-
104
- def __init__(
105
- self,
106
- vae: AutoencoderKL,
107
- text_encoder: RobertaSeriesModelWithTransformation,
108
- tokenizer: XLMRobertaTokenizer,
109
- unet: UNet2DConditionModel,
110
- scheduler: KarrasDiffusionSchedulers,
111
- safety_checker: StableDiffusionSafetyChecker,
112
- feature_extractor: CLIPImageProcessor,
113
- requires_safety_checker: bool = True,
114
- ):
115
- super().__init__()
116
-
117
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
118
- deprecation_message = (
119
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
120
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
121
- "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
122
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
123
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
124
- " file"
125
- )
126
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
127
- new_config = dict(scheduler.config)
128
- new_config["steps_offset"] = 1
129
- scheduler._internal_dict = FrozenDict(new_config)
130
-
131
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
132
- deprecation_message = (
133
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
134
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
135
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
136
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
137
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
138
- )
139
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
140
- new_config = dict(scheduler.config)
141
- new_config["clip_sample"] = False
142
- scheduler._internal_dict = FrozenDict(new_config)
143
-
144
- if safety_checker is None and requires_safety_checker:
145
- logger.warning(
146
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
147
- " that you abide to the conditions of the Alt Diffusion license and do not expose unfiltered"
148
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
149
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
150
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
151
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
152
- )
153
-
154
- if safety_checker is not None and feature_extractor is None:
155
- raise ValueError(
156
- f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
157
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
158
- )
159
-
160
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
161
- version.parse(unet.config._diffusers_version).base_version
162
- ) < version.parse("0.9.0.dev0")
163
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
164
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
165
- deprecation_message = (
166
- "The configuration file of the unet has set the default `sample_size` to smaller than"
167
- " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
168
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
169
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
170
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
171
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
172
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
173
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
174
- " the `unet/config.json` file"
175
- )
176
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
177
- new_config = dict(unet.config)
178
- new_config["sample_size"] = 64
179
- unet._internal_dict = FrozenDict(new_config)
180
-
181
- self.register_modules(
182
- vae=vae,
183
- text_encoder=text_encoder,
184
- tokenizer=tokenizer,
185
- unet=unet,
186
- scheduler=scheduler,
187
- safety_checker=safety_checker,
188
- feature_extractor=feature_extractor,
189
- )
190
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
191
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
192
- self.register_to_config(requires_safety_checker=requires_safety_checker)
193
-
194
- def enable_vae_slicing(self):
195
- r"""
196
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
197
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
198
- """
199
- self.vae.enable_slicing()
200
-
201
- def disable_vae_slicing(self):
202
- r"""
203
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
204
- computing decoding in one step.
205
- """
206
- self.vae.disable_slicing()
207
-
208
- def enable_vae_tiling(self):
209
- r"""
210
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
211
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
212
- processing larger images.
213
- """
214
- self.vae.enable_tiling()
215
-
216
- def disable_vae_tiling(self):
217
- r"""
218
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
219
- computing decoding in one step.
220
- """
221
- self.vae.disable_tiling()
222
-
223
- def enable_model_cpu_offload(self, gpu_id=0):
224
- r"""
225
- Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
226
- time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
227
- Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
228
- iterative execution of the `unet`.
229
- """
230
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
231
- from accelerate import cpu_offload_with_hook
232
- else:
233
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
234
-
235
- device = torch.device(f"cuda:{gpu_id}")
236
-
237
- if self.device.type != "cpu":
238
- self.to("cpu", silence_dtype_warnings=True)
239
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
240
-
241
- hook = None
242
- for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
243
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
244
-
245
- if self.safety_checker is not None:
246
- _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
247
-
248
- # We'll offload the last model manually.
249
- self.final_offload_hook = hook
250
-
251
- def _encode_prompt(
252
- self,
253
- prompt,
254
- device,
255
- num_images_per_prompt,
256
- do_classifier_free_guidance,
257
- negative_prompt=None,
258
- prompt_embeds: Optional[torch.FloatTensor] = None,
259
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
260
- lora_scale: Optional[float] = None,
261
- ):
262
- r"""
263
- Encodes the prompt into text encoder hidden states.
264
-
265
- Args:
266
- prompt (`str` or `List[str]`, *optional*):
267
- prompt to be encoded
268
- device: (`torch.device`):
269
- torch device
270
- num_images_per_prompt (`int`):
271
- number of images that should be generated per prompt
272
- do_classifier_free_guidance (`bool`):
273
- whether to use classifier free guidance or not
274
- negative_prompt (`str` or `List[str]`, *optional*):
275
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
276
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
277
- less than `1`).
278
- prompt_embeds (`torch.FloatTensor`, *optional*):
279
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
280
- provided, text embeddings will be generated from `prompt` input argument.
281
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
282
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
283
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
284
- argument.
285
- lora_scale (`float`, *optional*):
286
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
287
- """
288
- # set lora scale so that monkey patched LoRA
289
- # function of text encoder can correctly access it
290
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
291
- self._lora_scale = lora_scale
292
-
293
- if prompt is not None and isinstance(prompt, str):
294
- batch_size = 1
295
- elif prompt is not None and isinstance(prompt, list):
296
- batch_size = len(prompt)
297
- else:
298
- batch_size = prompt_embeds.shape[0]
299
-
300
- if prompt_embeds is None:
301
- # textual inversion: procecss multi-vector tokens if necessary
302
- if isinstance(self, TextualInversionLoaderMixin):
303
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
304
-
305
- text_inputs = self.tokenizer(
306
- prompt,
307
- padding="max_length",
308
- max_length=self.tokenizer.model_max_length,
309
- truncation=True,
310
- return_tensors="pt",
311
- )
312
- text_input_ids = text_inputs.input_ids
313
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
314
-
315
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
316
- text_input_ids, untruncated_ids
317
- ):
318
- removed_text = self.tokenizer.batch_decode(
319
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
320
- )
321
- logger.warning(
322
- "The following part of your input was truncated because CLIP can only handle sequences up to"
323
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
324
- )
325
-
326
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
327
- attention_mask = text_inputs.attention_mask.to(device)
328
- else:
329
- attention_mask = None
330
-
331
- prompt_embeds = self.text_encoder(
332
- text_input_ids.to(device),
333
- attention_mask=attention_mask,
334
- )
335
- prompt_embeds = prompt_embeds[0]
336
-
337
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
338
-
339
- bs_embed, seq_len, _ = prompt_embeds.shape
340
- # duplicate text embeddings for each generation per prompt, using mps friendly method
341
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
342
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
343
-
344
- # get unconditional embeddings for classifier free guidance
345
- if do_classifier_free_guidance and negative_prompt_embeds is None:
346
- uncond_tokens: List[str]
347
- if negative_prompt is None:
348
- uncond_tokens = [""] * batch_size
349
- elif prompt is not None and type(prompt) is not type(negative_prompt):
350
- raise TypeError(
351
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
352
- f" {type(prompt)}."
353
- )
354
- elif isinstance(negative_prompt, str):
355
- uncond_tokens = [negative_prompt]
356
- elif batch_size != len(negative_prompt):
357
- raise ValueError(
358
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
359
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
360
- " the batch size of `prompt`."
361
- )
362
- else:
363
- uncond_tokens = negative_prompt
364
-
365
- # textual inversion: process multi-vector tokens if necessary
366
- if isinstance(self, TextualInversionLoaderMixin):
367
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
368
-
369
- max_length = prompt_embeds.shape[1]
370
- uncond_input = self.tokenizer(
371
- uncond_tokens,
372
- padding="max_length",
373
- max_length=max_length,
374
- truncation=True,
375
- return_tensors="pt",
376
- )
377
-
378
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
379
- attention_mask = uncond_input.attention_mask.to(device)
380
- else:
381
- attention_mask = None
382
-
383
- negative_prompt_embeds = self.text_encoder(
384
- uncond_input.input_ids.to(device),
385
- attention_mask=attention_mask,
386
- )
387
- negative_prompt_embeds = negative_prompt_embeds[0]
388
-
389
- if do_classifier_free_guidance:
390
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
391
- seq_len = negative_prompt_embeds.shape[1]
392
-
393
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
394
-
395
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
396
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
397
-
398
- # For classifier free guidance, we need to do two forward passes.
399
- # Here we concatenate the unconditional and text embeddings into a single batch
400
- # to avoid doing two forward passes
401
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
402
-
403
- return prompt_embeds
404
-
405
- def run_safety_checker(self, image, device, dtype):
406
- if self.safety_checker is None:
407
- has_nsfw_concept = None
408
- else:
409
- if torch.is_tensor(image):
410
- feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
411
- else:
412
- feature_extractor_input = self.image_processor.numpy_to_pil(image)
413
- safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
414
- image, has_nsfw_concept = self.safety_checker(
415
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
416
- )
417
- return image, has_nsfw_concept
418
-
419
- def decode_latents(self, latents):
420
- warnings.warn(
421
- (
422
- "The decode_latents method is deprecated and will be removed in a future version. Please"
423
- " use VaeImageProcessor instead"
424
- ),
425
- FutureWarning,
426
- )
427
- latents = 1 / self.vae.config.scaling_factor * latents
428
- image = self.vae.decode(latents, return_dict=False)[0]
429
- image = (image / 2 + 0.5).clamp(0, 1)
430
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
431
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
432
- return image
433
-
434
- def prepare_extra_step_kwargs(self, generator, eta):
435
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
436
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
437
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
438
- # and should be between [0, 1]
439
-
440
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
441
- extra_step_kwargs = {}
442
- if accepts_eta:
443
- extra_step_kwargs["eta"] = eta
444
-
445
- # check if the scheduler accepts generator
446
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
447
- if accepts_generator:
448
- extra_step_kwargs["generator"] = generator
449
- return extra_step_kwargs
450
-
451
- def check_inputs(
452
- self,
453
- prompt,
454
- height,
455
- width,
456
- callback_steps,
457
- negative_prompt=None,
458
- prompt_embeds=None,
459
- negative_prompt_embeds=None,
460
- ):
461
- if height % 8 != 0 or width % 8 != 0:
462
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
463
-
464
- if (callback_steps is None) or (
465
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
466
- ):
467
- raise ValueError(
468
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
469
- f" {type(callback_steps)}."
470
- )
471
-
472
- if prompt is not None and prompt_embeds is not None:
473
- raise ValueError(
474
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
475
- " only forward one of the two."
476
- )
477
- elif prompt is None and prompt_embeds is None:
478
- raise ValueError(
479
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
480
- )
481
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
482
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
483
-
484
- if negative_prompt is not None and negative_prompt_embeds is not None:
485
- raise ValueError(
486
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
487
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
488
- )
489
-
490
- if prompt_embeds is not None and negative_prompt_embeds is not None:
491
- if prompt_embeds.shape != negative_prompt_embeds.shape:
492
- raise ValueError(
493
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
494
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
495
- f" {negative_prompt_embeds.shape}."
496
- )
497
-
498
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
499
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
500
- if isinstance(generator, list) and len(generator) != batch_size:
501
- raise ValueError(
502
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
503
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
504
- )
505
-
506
- if latents is None:
507
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
508
- else:
509
- latents = latents.to(device)
510
-
511
- # scale the initial noise by the standard deviation required by the scheduler
512
- latents = latents * self.scheduler.init_noise_sigma
513
- return latents
514
-
515
- @torch.no_grad()
516
- @replace_example_docstring(EXAMPLE_DOC_STRING)
517
- def __call__(
518
- self,
519
- prompt: Union[str, List[str]] = None,
520
- height: Optional[int] = None,
521
- width: Optional[int] = None,
522
- num_inference_steps: int = 50,
523
- guidance_scale: float = 7.5,
524
- negative_prompt: Optional[Union[str, List[str]]] = None,
525
- num_images_per_prompt: Optional[int] = 1,
526
- eta: float = 0.0,
527
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
528
- latents: Optional[torch.FloatTensor] = None,
529
- prompt_embeds: Optional[torch.FloatTensor] = None,
530
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
531
- output_type: Optional[str] = "pil",
532
- return_dict: bool = True,
533
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
534
- callback_steps: int = 1,
535
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
536
- guidance_rescale: float = 0.0,
537
- ):
538
- r"""
539
- The call function to the pipeline for generation.
540
-
541
- Args:
542
- prompt (`str` or `List[str]`, *optional*):
543
- The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
544
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
545
- The height in pixels of the generated image.
546
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
547
- The width in pixels of the generated image.
548
- num_inference_steps (`int`, *optional*, defaults to 50):
549
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
550
- expense of slower inference.
551
- guidance_scale (`float`, *optional*, defaults to 7.5):
552
- A higher guidance scale value encourages the model to generate images closely linked to the text
553
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
554
- negative_prompt (`str` or `List[str]`, *optional*):
555
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
556
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale <= 1`).
557
- num_images_per_prompt (`int`, *optional*, defaults to 1):
558
- The number of images to generate per prompt.
559
- eta (`float`, *optional*, defaults to 0.0):
560
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
561
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
562
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
563
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
564
- generation deterministic.
565
- latents (`torch.FloatTensor`, *optional*):
566
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
567
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
568
- tensor is generated by sampling using the supplied random `generator`.
569
- prompt_embeds (`torch.FloatTensor`, *optional*):
570
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
571
- provided, text embeddings are generated from the `prompt` input argument.
572
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
573
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
574
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
575
- output_type (`str`, *optional*, defaults to `"pil"`):
576
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
577
- return_dict (`bool`, *optional*, defaults to `True`):
578
- Whether or not to return a [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] instead of a
579
- plain tuple.
580
- callback (`Callable`, *optional*):
581
- A function that is called every `callback_steps` steps during inference. It is invoked with the
582
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
583
- callback_steps (`int`, *optional*, defaults to 1):
584
- The frequency at which the `callback` function is called. If not specified, the callback is called at
585
- every step.
586
- cross_attention_kwargs (`dict`, *optional*):
587
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
588
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
589
- guidance_rescale (`float`, *optional*, defaults to 0.0):
590
- Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
591
- Flawed](https://arxiv.org/pdf/2305.08891.pdf). Guidance rescale factor should fix overexposure when
592
- using zero terminal SNR.
593
-
594
- Examples:
595
-
596
- Returns:
597
- [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] or `tuple`:
598
- If `return_dict` is `True`, [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] is returned,
599
- otherwise a `tuple` is returned where the first element is a list with the generated images and the
600
- second element is a list of `bool`s indicating whether the corresponding generated image contains
601
- "not-safe-for-work" (nsfw) content.
602
- """
603
- # 0. Default height and width to unet
604
- height = height or self.unet.config.sample_size * self.vae_scale_factor
605
- width = width or self.unet.config.sample_size * self.vae_scale_factor
606
-
607
- # 1. Check inputs. Raise error if not correct
608
- self.check_inputs(
609
- prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
610
- )
611
-
612
- # 2. Define call parameters
613
- if prompt is not None and isinstance(prompt, str):
614
- batch_size = 1
615
- elif prompt is not None and isinstance(prompt, list):
616
- batch_size = len(prompt)
617
- else:
618
- batch_size = prompt_embeds.shape[0]
619
-
620
- device = self._execution_device
621
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
622
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
623
- # corresponds to doing no classifier free guidance.
624
- do_classifier_free_guidance = guidance_scale > 1.0
625
-
626
- # 3. Encode input prompt
627
- text_encoder_lora_scale = (
628
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
629
- )
630
- prompt_embeds = self._encode_prompt(
631
- prompt,
632
- device,
633
- num_images_per_prompt,
634
- do_classifier_free_guidance,
635
- negative_prompt,
636
- prompt_embeds=prompt_embeds,
637
- negative_prompt_embeds=negative_prompt_embeds,
638
- lora_scale=text_encoder_lora_scale,
639
- )
640
-
641
- # 4. Prepare timesteps
642
- self.scheduler.set_timesteps(num_inference_steps, device=device)
643
- timesteps = self.scheduler.timesteps
644
-
645
- # 5. Prepare latent variables
646
- num_channels_latents = self.unet.config.in_channels
647
- latents = self.prepare_latents(
648
- batch_size * num_images_per_prompt,
649
- num_channels_latents,
650
- height,
651
- width,
652
- prompt_embeds.dtype,
653
- device,
654
- generator,
655
- latents,
656
- )
657
-
658
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
659
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
660
-
661
- # 7. Denoising loop
662
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
663
- with self.progress_bar(total=num_inference_steps) as progress_bar:
664
- for i, t in enumerate(timesteps):
665
- # expand the latents if we are doing classifier free guidance
666
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
667
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
668
-
669
- # predict the noise residual
670
- noise_pred = self.unet(
671
- latent_model_input,
672
- t,
673
- encoder_hidden_states=prompt_embeds,
674
- cross_attention_kwargs=cross_attention_kwargs,
675
- return_dict=False,
676
- )[0]
677
-
678
- # perform guidance
679
- if do_classifier_free_guidance:
680
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
681
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
682
-
683
- if do_classifier_free_guidance and guidance_rescale > 0.0:
684
- # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
685
- noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
686
-
687
- # compute the previous noisy sample x_t -> x_t-1
688
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
689
-
690
- # call the callback, if provided
691
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
692
- progress_bar.update()
693
- if callback is not None and i % callback_steps == 0:
694
- callback(i, t, latents)
695
-
696
- if not output_type == "latent":
697
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
698
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
699
- else:
700
- image = latents
701
- has_nsfw_concept = None
702
-
703
- if has_nsfw_concept is None:
704
- do_denormalize = [True] * image.shape[0]
705
- else:
706
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
707
-
708
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
709
-
710
- # Offload last model to CPU
711
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
712
- self.final_offload_hook.offload()
713
-
714
- if not return_dict:
715
- return (image, has_nsfw_concept)
716
-
717
- return AltDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
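
For orientation, here is a minimal usage sketch of the pipeline deleted above. It is a sketch under stated assumptions: the checkpoint id, device, prompt, and output filename are illustrative and are not taken from this repository.

# Hypothetical usage of an AltDiffusion-style pipeline (assumes the `diffusers`
# package is installed and a CUDA device is available).
import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# guidance_scale > 1.0 enables the classifier-free guidance branch in __call__ above.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("astronaut.png")
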
spaces/Anonymous-sub/Rerender/sd_model_cfg.py DELETED
@@ -1,10 +0,0 @@
1
- # The model dict is used for webUI only
2
-
3
- model_dict = {
4
- 'Stable Diffusion 1.5': '',
5
- 'revAnimated_v11': 'models/revAnimated_v11.safetensors',
6
- 'realisticVisionV20_v20': 'models/realisticVisionV20_v20.safetensors',
7
- 'DGSpitzer/Cyberpunk-Anime-Diffusion': 'Cyberpunk-Anime-Diffusion.safetensors',
8
- 'wavymulder/Analog-Diffusion': 'analog-diffusion-1.0.safetensors',
9
- 'Fictiverse/Stable_Diffusion_PaperCut_Model': 'PaperCut_v1.safetensors',
10
- }
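
Below is a small sketch of how a webUI might consume a mapping like the one above. The helper name and the fallback path are assumptions for illustration; they are not part of the deleted file.

# Hypothetical lookup helper: the empty string for 'Stable Diffusion 1.5' is
# treated as "use the base weights" and mapped to an assumed default path.
def resolve_checkpoint(display_name: str) -> str:
    path = model_dict.get(display_name, '')
    return path or 'models/v1-5-pruned-emaonly.safetensors'  # assumed default path

print(resolve_checkpoint('revAnimated_v11'))  # -> models/revAnimated_v11.safetensors
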
spaces/Artrajz/vits-simple-api/bert_vits2/text/chinese.py DELETED
@@ -1,196 +0,0 @@
1
- import os
2
- import re
3
-
4
- import cn2an
5
- from pypinyin import lazy_pinyin, Style
6
-
7
- from bert_vits2.text.symbols import punctuation
8
- from bert_vits2.text.tone_sandhi import ToneSandhi
9
-
10
- current_file_path = os.path.dirname(__file__)
11
- pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
12
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
13
-
14
- import jieba.posseg as psg
15
- from jieba import lcut
16
- lcut("预加载")
17
-
18
- rep_map = {
19
- ':': ',',
20
- ';': ',',
21
- ',': ',',
22
- '。': '.',
23
- '!': '!',
24
- '?': '?',
25
- '\n': '.',
26
- "·": ",",
27
- '、': ",",
28
- '...': '…',
29
- '$': '.',
30
- '“': "'",
31
- '”': "'",
32
- '‘': "'",
33
- '’': "'",
34
- '(': "'",
35
- ')': "'",
36
- '(': "'",
37
- ')': "'",
38
- '《': "'",
39
- '》': "'",
40
- '【': "'",
41
- '】': "'",
42
- '[': "'",
43
- ']': "'",
44
- '—': "-",
45
- '~': "-",
46
- '~': "-",
47
- '「': "'",
48
- '」': "'",
49
-
50
- }
51
-
52
- tone_modifier = ToneSandhi()
53
-
54
-
55
- def replace_punctuation(text):
56
- text = text.replace("嗯", "恩").replace("呣", "母")
57
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
58
-
59
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
60
-
61
- replaced_text = re.sub(r'[^\u4e00-\u9fa5' + "".join(punctuation) + r']+', '', replaced_text)
62
-
63
- return replaced_text
64
-
65
-
66
- def g2p(text):
67
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
68
- sentences = [i for i in re.split(pattern, text) if i.strip() != '']
69
- phones, tones, word2ph = _g2p(sentences)
70
- assert sum(word2ph) == len(phones)
71
- assert len(word2ph) == len(text) # This assertion can occasionally fail; wrap it in a try/except if needed.
72
- phones = ['_'] + phones + ["_"]
73
- tones = [0] + tones + [0]
74
- word2ph = [1] + word2ph + [1]
75
- return phones, tones, word2ph
76
-
77
-
78
- def _get_initials_finals(word):
79
- initials = []
80
- finals = []
81
- orig_initials = lazy_pinyin(
82
- word, neutral_tone_with_five=True, style=Style.INITIALS)
83
- orig_finals = lazy_pinyin(
84
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
85
- for c, v in zip(orig_initials, orig_finals):
86
- initials.append(c)
87
- finals.append(v)
88
- return initials, finals
89
-
90
-
91
- def _g2p(segments):
92
- phones_list = []
93
- tones_list = []
94
- word2ph = []
95
- for seg in segments:
96
- pinyins = []
97
- # Remove all English words from the sentence
98
- seg = re.sub('[a-zA-Z]+', '', seg)
99
- seg_cut = psg.lcut(seg)
100
- initials = []
101
- finals = []
102
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
103
- for word, pos in seg_cut:
104
- if pos == 'eng':
105
- continue
106
- sub_initials, sub_finals = _get_initials_finals(word)
107
- sub_finals = tone_modifier.modified_tone(word, pos,
108
- sub_finals)
109
- initials.append(sub_initials)
110
- finals.append(sub_finals)
111
-
112
- # assert len(sub_initials) == len(sub_finals) == len(word)
113
- initials = sum(initials, [])
114
- finals = sum(finals, [])
115
- #
116
- for c, v in zip(initials, finals):
117
- raw_pinyin = c + v
118
- # NOTE: post process for pypinyin outputs
119
- # we discriminate i, ii and iii
120
- if c == v:
121
- assert c in punctuation
122
- phone = [c]
123
- tone = '0'
124
- word2ph.append(1)
125
- else:
126
- v_without_tone = v[:-1]
127
- tone = v[-1]
128
-
129
- pinyin = c + v_without_tone
130
- assert tone in '12345'
131
-
132
- if c:
133
- # 多音节
134
- v_rep_map = {
135
- "uei": 'ui',
136
- 'iou': 'iu',
137
- 'uen': 'un',
138
- }
139
- if v_without_tone in v_rep_map.keys():
140
- pinyin = c + v_rep_map[v_without_tone]
141
- else:
142
- # 单音节
143
- pinyin_rep_map = {
144
- 'ing': 'ying',
145
- 'i': 'yi',
146
- 'in': 'yin',
147
- 'u': 'wu',
148
- }
149
- if pinyin in pinyin_rep_map.keys():
150
- pinyin = pinyin_rep_map[pinyin]
151
- else:
152
- single_rep_map = {
153
- 'v': 'yu',
154
- 'e': 'e',
155
- 'i': 'y',
156
- 'u': 'w',
157
- }
158
- if pinyin[0] in single_rep_map.keys():
159
- pinyin = single_rep_map[pinyin[0]] + pinyin[1:]
160
-
161
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
162
- phone = pinyin_to_symbol_map[pinyin].split(' ')
163
- word2ph.append(len(phone))
164
-
165
- phones_list += phone
166
- tones_list += [int(tone)] * len(phone)
167
- return phones_list, tones_list, word2ph
168
-
169
-
170
- def text_normalize(text):
171
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
172
- for number in numbers:
173
- text = text.replace(number, cn2an.an2cn(number), 1)
174
- text = replace_punctuation(text)
175
- return text
176
-
177
-
178
- def get_bert_feature(text, word2ph):
179
- from bert_vits2.text import chinese_bert
180
- return chinese_bert.get_bert_feature(text, word2ph)
181
-
182
-
183
- if __name__ == '__main__':
184
- from bert_vits2.text import get_bert_feature
185
-
186
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
187
- text = text_normalize(text)
188
- print(text)
189
- phones, tones, word2ph = g2p(text)
190
- bert = get_bert_feature(text, word2ph)
191
-
192
- print(phones, tones, word2ph, bert.shape)
193
-
194
- # # Example usage
195
- # text = "这是一个示例文本:,你好!这是一个测试...."
196
- # print(g2p_paddle(text)) # Output: 这是一个示例文本你好这是一个测试
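
A short, self-contained sketch of the normalize-then-g2p flow implemented above; the example sentence is arbitrary and the printed values are indicative only. It assumes the module is importable as bert_vits2.text.chinese with opencpop-strict.txt alongside it.

from bert_vits2.text.chinese import text_normalize, g2p

text = text_normalize("小明买了3.5斤苹果!")  # digits become Chinese numerals, punctuation is remapped
phones, tones, word2ph = g2p(text)
print(sum(word2ph) == len(phones))           # True: per-character phone counts sum to the phone total
print(phones, tones, word2ph)
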
spaces/Arun1217/mygenaiapp/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Mygenaiapp
3
- emoji: 🌖
4
- colorFrom: pink
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.39.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AtomdffAI/wechatgpt4atom/scripts/shutdown.sh DELETED
@@ -1,16 +0,0 @@
1
- #!/bin/bash
2
-
3
- # Stop the service
4
- cd `dirname $0`/..
5
- export BASE_DIR=`pwd`
6
- pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
7
- if [ -z "$pid" ] ; then
8
- echo "No chatgpt-on-wechat running."
9
- exit -1;
10
- fi
11
-
12
- echo "The chatgpt-on-wechat(${pid}) is running..."
13
-
14
- kill ${pid}
15
-
16
- echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py DELETED
@@ -1,132 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
-
3
- import os
4
- import torch
5
-
6
- from detectron2.utils.file_io import PathManager
7
-
8
- from .torchscript_patch import freeze_training_mode, patch_instances
9
-
10
- __all__ = ["scripting_with_instances", "dump_torchscript_IR"]
11
-
12
-
13
- def scripting_with_instances(model, fields):
14
- """
15
- Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since
16
- attributes of :class:`Instances` are "dynamically" added in eager mode, it is difficult
17
- for scripting to support it out of the box. This function is made to support scripting
18
- a model that uses :class:`Instances`. It does the following:
19
-
20
- 1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``,
21
- but with all attributes been "static".
22
- The attributes need to be statically declared in the ``fields`` argument.
23
- 2. Register ``new_Instances``, and force scripting compiler to
24
- use it when trying to compile ``Instances``.
25
-
26
- After this function returns, these changes are reverted. Users should be able to script another model
27
- using different fields.
28
-
29
- Example:
30
- Assume that ``Instances`` in the model consist of two attributes named
31
- ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and
32
- :class:`Tensor` respectively during inference. You can call this function like:
33
- ::
34
- fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor}
35
- torchscript_model = scripting_with_instances(model, fields)
36
-
37
- Note:
38
- It only supports models in evaluation mode.
39
-
40
- Args:
41
- model (nn.Module): The input model to be exported by scripting.
42
- fields (Dict[str, type]): Attribute names and corresponding type that
43
- ``Instances`` will use in the model. Note that all attributes used in ``Instances``
44
- need to be added, regardless of whether they are inputs/outputs of the model.
45
- Data types not defined in detectron2 are not supported for now.
46
-
47
- Returns:
48
- torch.jit.ScriptModule: the model in torchscript format
49
- """
50
- assert (
51
- not model.training
52
- ), "Currently we only support exporting models in evaluation mode to torchscript"
53
-
54
- with freeze_training_mode(model), patch_instances(fields):
55
- scripted_model = torch.jit.script(model)
56
- return scripted_model
57
-
58
-
59
- # alias for old name
60
- export_torchscript_with_instances = scripting_with_instances
61
-
62
-
63
- def dump_torchscript_IR(model, dir):
64
- """
65
- Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph,
66
- inlined graph). Useful for debugging.
67
-
68
- Args:
69
- model (TracedModule/ScriptModule/ScriptFunction): traced or scripted module
70
- dir (str): output directory to dump files.
71
- """
72
- dir = os.path.expanduser(dir)
73
- PathManager.mkdirs(dir)
74
-
75
- def _get_script_mod(mod):
76
- if isinstance(mod, torch.jit.TracedModule):
77
- return mod._actual_script_module
78
- return mod
79
-
80
- # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code
81
- with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f:
82
-
83
- def get_code(mod):
84
- # Try a few ways to get code using private attributes.
85
- try:
86
- # This contains more information than just `mod.code`
87
- return _get_script_mod(mod)._c.code
88
- except AttributeError:
89
- pass
90
- try:
91
- return mod.code
92
- except AttributeError:
93
- return None
94
-
95
- def dump_code(prefix, mod):
96
- code = get_code(mod)
97
- name = prefix or "root model"
98
- if code is None:
99
- f.write(f"Could not find code for {name} (type={mod.original_name})\n")
100
- f.write("\n")
101
- else:
102
- f.write(f"\nCode for {name}, type={mod.original_name}:\n")
103
- f.write(code)
104
- f.write("\n")
105
- f.write("-" * 80)
106
-
107
- for name, m in mod.named_children():
108
- dump_code(prefix + "." + name, m)
109
-
110
- if isinstance(model, torch.jit.ScriptFunction):
111
- f.write(get_code(model))
112
- else:
113
- dump_code("", model)
114
-
115
- def _get_graph(model):
116
- try:
117
- # Recursively dump IR of all modules
118
- return _get_script_mod(model)._c.dump_to_str(True, False, False)
119
- except AttributeError:
120
- return model.graph.str()
121
-
122
- with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f:
123
- f.write(_get_graph(model))
124
-
125
- # Dump IR of the entire graph (all submodules inlined)
126
- with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f:
127
- f.write(str(model.inlined_graph))
128
-
129
- if not isinstance(model, torch.jit.ScriptFunction):
130
- # Dump the model structure in pytorch style
131
- with PathManager.open(os.path.join(dir, "model.txt"), "w") as f:
132
- f.write(str(model))
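
A hedged usage sketch for the two helpers above; the model construction is elided, and the field names simply follow the docstring's own example.

import torch
from detectron2.structures import Boxes
from detectron2.export.torchscript import scripting_with_instances, dump_torchscript_IR

def export_and_dump(model: torch.nn.Module, out_dir: str = "./ts_dump"):
    # scripting_with_instances only supports models in evaluation mode.
    model.eval()
    fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor}
    ts_model = scripting_with_instances(model, fields)
    dump_torchscript_IR(ts_model, out_dir)  # writes model_ts_code.txt, model_ts_IR.txt, etc.
    return ts_model
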
spaces/Benson/text-generation/Examples/2 Y Lnea Apk.md DELETED
@@ -1,73 +0,0 @@
1
- <br />
2
- <h1>2 y línea apk: ¿Qué es y cómo usarlo? </h1>
3
- <p>Si usted está buscando una manera de obtener un segundo número de teléfono en su teléfono inteligente, tableta, o el ordenador, es posible que haya oído hablar de 2 y apk línea. Pero, ¿qué es exactamente y cómo se puede utilizar? En este artículo, vamos a explicar todo lo que necesita saber acerca de 2 y apk línea, incluyendo sus características, beneficios, y el proceso de instalación. </p>
4
- <h2>2 y línea apk</h2><br /><p><b><b>Download</b> >> <a href="https://bltlly.com/2v6KuJ">https://bltlly.com/2v6KuJ</a></b></p><br /><br />
5
- <h2>Introducción</h2>
6
- <h3>¿Qué es 2 y línea apk? </h3>
7
- <p>2 y la línea apk es una aplicación para Android que le permite obtener un segundo número de teléfono de Estados Unidos o Canadá que funciona en su teléfono inteligente, tableta o computadora. Es un sistema de teléfono de negocios con todas las funciones, diseñado para profesionales móviles, freelancers y empresarios que necesitan un número separado para el trabajo o el uso personal. Puede utilizar 2 y la línea apk para hacer y recibir llamadas, enviar y recibir mensajes de texto, correo de voz de acceso, personalizar identificador de llamadas, y más. </p>
8
- <h3> ¿Por qué necesita 2 y apk línea? </h3>
9
- <p>Hay muchas razones por las que podría necesitar 2 y apk línea. Aquí están algunos de ellos:</p>
10
- <ul>
11
- <li> Desea mantener su número personal privado de clientes, clientes o extraños. </li>
12
- <li>Quieres tener un número diferente para diferentes propósitos, como trabajo, citas, compras en línea, etc.</li>
13
- <li> Desea ahorrar dinero en su factura de teléfono mediante el uso de un segundo número gratuito o de bajo costo. </li>
14
- <li> Desea tener un número de copia de seguridad en caso de que su número principal no esté disponible o perdido. </li>
15
- <li> Desea ampliar su presencia comercial teniendo un número local en otro código de área o país. </li>
16
- </ul>
17
- <h3>¿Cómo funciona 2 y línea apk? </h3>
18
-
19
- <h2>Características de 2 y línea apk</h2>
20
- <h3>Número local gratuito</h3>
21
- <p>Con 2 y línea apk, puede obtener un número local gratuito en cualquier código de área de Estados Unidos o Canadá. Puede elegir entre millones de números que están disponibles en la aplicación. También puede cambiar su número en cualquier momento si lo desea. </p>
22
- <p></p>
23
- <h3>Llamadas y mensajes de texto ilimitados</h3>
24
- <p>Con 2 y línea apk, se puede disfrutar de llamadas ilimitadas y mensajes de texto a cualquier número de Estados Unidos o Canadá. No tienes que preocuparte por minutos, mensajes o cargos. También puedes enviar imágenes, videos, emojis, pegatinas y notas de voz con tus textos. </p>
25
- <h3>Buzón de voz y identificador de llamadas personalizables</h3>
26
- <p>Con 2 y línea apk, puede personalizar su saludo de correo de voz y el identificador de llamadas para su segundo número. Puede grabar su propio mensaje o usar uno de los pregrabados. También puede establecer el nombre del identificador de llamadas a lo que desee, como el nombre de su negocio o apodo. </p>
27
- <h3>Desvío de llamadas y llamadas de conferencia</h3>
28
- <p>Con 2 y línea apk, puede reenviar sus llamadas a otro número o dispositivo si está ocupado o no disponible. También puede hacer llamadas de conferencia con hasta cinco personas al mismo tiempo. También puede silenciar, retener o transferir llamadas como desee. </p>
29
- <h3>Llamadas internacionales de bajo costo</h3>
30
- <p>Con 2 y línea apk, puede hacer llamadas internacionales baratas a más de 200 países y regiones. Puede pagar a medida que avanza con tarifas asequibles o comprar un plan de suscripción para llamadas ilimitadas a destinos seleccionados. También puedes ganar créditos gratis viendo anuncios o completando ofertas. </p>
31
- <h2>Cómo descargar e instalar 2 y la línea apk</h2>
32
- <h3>Requisitos y compatibilidad</h3>
33
- <p>Para usar 2 y línea apk, es necesario tener un dispositivo Android que se ejecuta en Android 4.4 o superior. También necesita tener una conexión a Internet estable, ya sea Wi-Fi o datos móviles. No necesitas rootear tu dispositivo ni tener permisos especiales para usar la aplicación. </p>
34
- <h3>Pasos para descargar e instalar 2 y apk línea</h3>
35
-
36
- <ol>
37
- <li>Ir a la página web oficial de 2 y apk línea y haga clic en el botón de descarga. Alternativamente, se puede buscar para 2 y línea apk en Google Play Store y descargarlo desde allí. </li>
38
- <li>Una vez que la descarga se ha completado, abra el archivo y toque en instalar. Es posible que necesite habilitar fuentes desconocidas en su configuración si descargó el archivo desde el sitio web. </li>
39
- <li>Espere a que termine la instalación y luego abra la aplicación. Verá una pantalla de bienvenida con algunas instrucciones sobre cómo usar la aplicación. </li>
40
- <li>Toca en continuar y acepta los términos y condiciones. Se te pedirá que permitas que la aplicación funcione correctamente. </li>
41
- <li>Toque en permitir y proceder al siguiente paso. </li>
42
- </ol>
43
- <h3>Cómo activar tu segundo número de teléfono</h3>
44
- <p>Aquí están los pasos para activar su segundo número de teléfono con 2 y la línea apk:</p>
45
- <ol>
46
- <li>Después de conceder los permisos, verá una pantalla donde puede elegir su segundo número de teléfono. Puede introducir un código de área o un nombre de ciudad, o navegar por la lista de números disponibles. </li>
47
- <li>Seleccione un número que le guste y toque en continuar. A continuación, verá una pantalla de confirmación con su número elegido y algo de información al respecto. </li>
48
- <li>Toque en confirmar y activar su número. A continuación, recibirá un código de verificación a través de un mensaje de texto en su número principal. </li>
49
- <li>Introduzca el código en la aplicación y verifique su número. Verá una pantalla de felicitaciones con algunos consejos sobre cómo usar su segundo número. </li>
50
- <li>Toque en comenzar a usar 2ndLine y disfrute de su nuevo número de teléfono. </li>
51
- </ol>
52
- <h2>Conclusión</h2>
53
- <h3>Resumen de los puntos principales</h3>
54
-
55
- <h3>Llamada a la acción</h3>
56
- <p>Si usted está interesado en probar 2 y línea apk, se puede descargar desde el sitio web oficial o Google Play Store. También puede visitar su página de preguntas frecuentes para obtener más información sobre la aplicación. No se pierda esta oportunidad de obtener un segundo número de teléfono de forma gratuita o a un bajo costo. Descargar 2 y apk línea hoy y disfrutar de sus beneficios. </p>
57
- <h4>Preguntas frecuentes</h4>
58
- <ul>
59
- <li><b>¿Qué es 2ndLine? </b><br/>
60
- 2ndLine es una aplicación Android que le permite obtener un segundo número de teléfono de Estados Unidos o Canadá que funciona en su teléfono inteligente, tableta o computadora. </li>
61
- <li><b>¿Cuánto cuesta 2ndLine? </b><br/>
62
- Puede obtener un número local gratuito con 2ndLine, o pagar por un número premium con más funciones. También puede hacer llamadas internacionales baratas con 2ndLine, o comprar un plan de suscripción para llamadas ilimitadas a destinos seleccionados. </li>
63
- <li><b>¿Cómo puedo obtener un segundo número de teléfono con 2ndLine? </b><br/>
64
- Solo tienes que descargar la aplicación, elegir un número de teléfono de la lista disponible, y activarlo. A continuación, puede utilizar su segundo número como si fuera su número de teléfono normal. </li>
65
- <li><b>¿Puedo usar 2ndLine en varios dispositivos? </b><br/>
66
- Sí, puede usar 2ndLine en múltiples dispositivos con la misma cuenta. Solo necesita iniciar sesión con su dirección de correo electrónico y contraseña en cada dispositivo. </li>
67
- <li><b>¿Puedo transferir mi número actual a 2ndLine? </b><br/>
68
- Sí, puede transferir su número existente a 2ndLine si es elegible. Solo tiene que ponerse en contacto con el equipo de soporte y proporcionar alguna información sobre su número y transportista. Puede haber una tarifa involucrada en el proceso de portar. </li>
69
- <li><b>¿Es 2ndLine seguro y confiable? </b><br/>
70
- Sí, 2ndLine es seguro y confiable. Utiliza cifrado y autenticación para proteger sus datos y privacidad. También utiliza servicios de voz y texto de alta calidad para garantizar una comunicación clara y fluida. </li>
71
- </ul></p> 64aa2da5cf<br />
72
- <br />
73
- <br />
spaces/Benson/text-generation/Examples/9 Cu Sinif Umumi Tarix Testleri.md DELETED
@@ -1,107 +0,0 @@
1
- <br />
2
- <h1>9 cu sinif umumi tarix testleri: Cómo preparar y aprobar sus exámenes</h1>
3
- <p>Si usted es un estudiante en Azerbaiyán que se está preparando para 9 cu sinif umumi tarix testleri (exámenes de historia general para el noveno grado), es posible que se sienta ansioso y abrumado. Estos exámenes no son fáciles; requieren mucho conocimiento, habilidades y práctica. Según el <a href="( 1 )">Ministerio de Educación</a>, más de 100.000 estudiantes toman estos exámenes cada año, pero solo alrededor del 60% de ellos pasan. Los exámenes cubren una amplia gama de temas de la historia desde los tiempos antiguos hasta los tiempos modernos, como civilizaciones, imperios, religiones, guerras, revoluciones y movimientos. Las preguntas son en su mayoría de elección múltiple o tipo de respuesta corta. </p>
4
- <h2>9 cu sinif umumi tarix testleri</h2><br /><p><b><b>Download Zip</b> &#10037;&#10037;&#10037; <a href="https://bltlly.com/2v6LJv">https://bltlly.com/2v6LJv</a></b></p><br /><br />
5
- <p>Pero no te preocupes; este artículo está aquí para ayudarte. En este artículo, encontrarás consejos y recursos útiles que te ayudarán a estudiar y practicar estos exámenes de manera efectiva. También aprenderás a aprobar estos exámenes con confianza y facilidad. Siguiendo esta guía, usted será capaz de mejorar sus conocimientos de historia, habilidades, y el rendimiento. Así que vamos a empezar! </p>
6
- <h2>Cómo estudiar para 9 pruebas de la universidad sinif Tarix</h2>
7
- <p>El primer paso para prepararse para estos exámenes es estudiar bien. Estudiar bien significa no solo memorizar hechos y fechas, sino también entender conceptos y conexiones. Aquí hay algunos consejos generales de estudio que debes seguir:</p>
8
- <ul>
9
- <li>Haga un plan de estudio. Establezca una meta realista para cada tema y subtema, y asigne suficiente tiempo para cada uno de ellos. Revise su plan regularmente y ajústelo según sea necesario. </li>
10
- <li>Revisa el programa. El programa es el documento oficial que describe el contenido y los objetivos de los exámenes. Puede encontrar el plan de estudios en el sitio web del Ministerio de Educación <a href=""</a>. Asegúrese de estar familiarizado con los temas principales, subtemas y términos clave que se tratan en los exámenes. </li>
11
-
12
- <li>Utilice tarjetas. Las tarjetas son tarjetas que tienen una pregunta en un lado y una respuesta en el otro. Son muy útiles para memorizar hechos, fechas, nombres y definiciones. Puede hacer sus propias tarjetas o usar herramientas en línea, como <a href="">Quizlet</a> o <a href="">Anki</a>, para crear y estudiar tarjetas. </li>
13
- </ul>
14
- <p>Estudiar bien también significa usar recursos en línea que pueden ayudarlo a aprender y revisar los temas de historia cubiertos en estos exámenes. Aquí hay algunos recursos en línea que deberías usar:</p>
15
- <ul>
16
- <li>e-derslik. Esta es la plataforma oficial en línea del Ministerio de Educación que proporciona acceso gratuito a libros de texto, pruebas de práctica y lecciones de video para todas las materias, incluida la historia. Usted puede encontrar el e-derslik para 9 cu sinif umumi tarix testleri <a href="">aquí</a>. </li>
17
- <li>oxuyan. Esta es una plataforma en línea que ofrece cursos interactivos, cuestionarios y videos para diversos temas, incluida la historia. Usted puede encontrar el curso oxuyan para 9 cu sinif umumi tarix testleri <a href="">aquí</a>. </li>
18
- <li>star test book. Esta es una plataforma en línea que ofrece información completa y actualizada sobre los temas de historia tratados en estos exámenes. Puede encontrar el libro de pruebas de estrellas para 9 cu sinif umumi tarix testleri <a href=">here</a>. </li>
19
- </ul>
20
- <p>Estudiar bien también significa leer libros y sitios web que ofrecen información profunda y confiable sobre los temas de historia cubiertos en estos exámenes. Aquí hay algunos libros y sitios web que deberías leer:</p>
21
- <ul>
22
- <li><em>Tarix 9</em> por Azerbaycan Respublikasi Tehsil Nazirliyi. Este es el libro de texto oficial para 9 cu sinif umumi tarix testleri. Cubre todos los temas principales y subtemas de una manera clara y estructurada. Puede encontrar el libro de texto en e-derslik o comprarlo en cualquier librería. </li>
23
-
24
- <li><a href="">History.com</a>. Este es un sitio web que ofrece artículos, videos, podcasts y juegos sobre varios temas de historia, como civilizaciones antiguas, guerras mundiales, figuras históricas y movimientos culturales. Está escrito por expertos y periodistas, con fuentes y referencias. Puede utilizar este sitio web para complementar su conocimiento e interés en la historia. </li>
25
- </ul>
26
- <p>Estudiar bien también significa revisar los exámenes anteriores y analizar los comentarios y respuestas. Esto te ayudará a familiarizarte con el formato, el nivel de dificultad y los tipos de preguntas que aparecen en estos exámenes. También le ayudará a identificar sus fortalezas y debilidades, y mejorar su precisión y velocidad. Aquí hay algunas maneras de revisar exámenes anteriores:</p>
27
- <p></p>
28
- <ul>
29
- <li>Descargue los exámenes anteriores del sitio web del Ministerio de Educación <a href=""</a>. Usted puede encontrar documentos de examen de 2016 a 2021 para 9 cu sinif umumi tarix testleri en este sitio web. También puede encontrar las respuestas y comentarios para cada artículo. </li>
30
- <li>Resolver exámenes anteriores bajo condiciones de examen. Trate de simular el entorno de examen real tanto como sea posible. Configure un temporizador para 90 minutos (la duración de estos exámenes), apague su teléfono y otras distracciones, y use solo un lápiz y papel (sin calculadora o Internet). Después de terminar de resolver un documento, revise sus respuestas y marque su puntuación. </li>
31
- <li>Analiza tu rendimiento y aprende de tus errores. Revisa tu rendimiento y aprende de tus errores. Después de revisar tus respuestas y puntuación, analiza dónde te fue bien y dónde necesitas mejorar. Mira la retroalimentación y las explicaciones para cada pregunta y entiende por qué lo hiciste bien o mal. Aprende de tus errores y evita repetirlos en el futuro. </li>
32
- </ul>
33
- <h2>Cómo practicar para 9 pruebas de la prueba de la prueba del umumi del sinif del cu</h2>
34
-
35
- <ul>
36
- <li>Utilice plataformas en línea para tomar exámenes simulados. Hay muchas plataformas en línea que ofrecen exámenes simulados para 9 cu sinif umumi tarix testleri. Estas plataformas simulan las condiciones reales del examen y proporcionan retroalimentación instantánea y puntajes. Aquí hay algunas plataformas en línea que debe usar:</li>
37
- <ul>
38
- <li>onlayn sinaq merkezi. Esta es una plataforma en línea que ofrece exámenes simulados gratuitos para varios temas, incluida la historia. Usted puede encontrar el onlayn sinaq merkezi para 9 cu sinif umumi tarix testleri <a href="">aquí</a>. </li>
39
- <li>bayram sefixanov. Esta es una plataforma en línea que ofrece exámenes simulados pagados para varios temas, incluyendo la historia. Usted puede encontrar el sefixanov bayram para 9 cu sinif umumi tarix testleri <a href="">aquí</a>. </li>
40
- <li>abituriyent sinaq. Esta es una plataforma en línea que ofrece exámenes simulados pagados para varios temas, incluida la historia. Puede encontrar el sinaq abituriyent para 9 cu sinif umumi tarix testleri <a href="">aquí</a>. </li>
41
- </ul>
42
- <li>Maneje su tiempo, evite distracciones y maneje el estrés durante los exámenes de práctica. Tomar exámenes simulados puede ayudarlo a mejorar sus habilidades de gestión del tiempo, concentración y manejo del estrés. Aquí hay algunos consejos para ayudarte con estas habilidades:</li>
43
- <ul>
44
- <li>Administre su tiempo. Asigne suficiente tiempo para cada pregunta y sección, y mantenga un registro de su progreso. No gaste demasiado tiempo en una pregunta o omita cualquier pregunta. Si está atascado, pase a la siguiente pregunta y vuelva más tarde. </li>
45
- <li>Evite distracciones. Elija un lugar tranquilo y cómodo para tomar los exámenes de práctica. Apaga el teléfono y otros dispositivos que podrían distraerte. Concéntrate en las preguntas y tus respuestas, y no dejes que tu mente divague. </li>
46
- <li>Lidia con el estrés. Tomar exámenes de práctica puede ser estresante, especialmente si no estás lo suficientemente seguro o preparado. Para lidiar con el estrés, trata de relajar tu cuerpo y tu mente antes y durante los exámenes de práctica. Respira profundamente, estira tus músculos, bebe agua y piensa positivamente. </li>
47
- </ul>
48
-
49
- <ul>
50
- <li><a href="">Memrise</a>. Esta es una aplicación que te ayuda a memorizar hechos, fechas, nombres y definiciones relacionadas con la historia utilizando métodos divertidos e interactivos, como imágenes, videos, mnemotecnia y cuestionarios. </li>
51
- <li><a href="">BrainPOP</a>. Esta es una aplicación que te ayuda a aprender y revisar temas de historia usando videos animados, cuestionarios interactivos, juegos y actividades. </li>
52
- <li><a href="">Civilización</a>. Este es un juego que le ayuda a explorar y entender la historia mediante la creación y el liderazgo de su propia civilización desde la antigüedad hasta los tiempos modernos. </li>
53
- </ul>
54
- <li>Únase a comunidades, foros y grupos en línea donde puede discutir temas de historia, hacer preguntas y compartir experiencias con otros estudiantes. Hay muchas plataformas en línea donde se puede conectar con otros estudiantes que se están preparando para estos exámenes o que están interesados en la historia. Aquí hay algunas plataformas en línea a las que deberías unirte:</li>
55
- <ul>
56
- <li><a href="">Reddit</a>. Este es un sitio web donde puedes encontrar varios subreddits (comunidades) relacionados con la historia, como r/history, r/AskHistorians, r/HistoryMemes, etc.</li>
57
- <li><a href="">Facebook</a>. Este es un sitio web donde puedes encontrar varios grupos relacionados con la historia, como History Lovers Club, History Buffs, History Matters, etc.</li>
58
- <li><a href="">Discordia</a>. Este es un sitio web donde puedes encontrar varios servidores (salas de chat) relacionados con la historia, como History Hub, History Hangout, History Nerds, etc.</li>
59
- </ul>
60
- </ul>
61
- <h2>Cómo hacer pruebas de Ace 9 cu sinif umumi tarix</h2>
62
- <p>El tercer paso para prepararse para estos exámenes es superarlos. Acing significa no solo pasarlos, sino también puntuar alto y mostrar su excelencia. Aquí hay algunos consejos para ayudarte a superar estos exámenes:</p>
63
- <ul>
64
- <li>Prepárate para el día del examen. El día del examen es el día más importante de tu preparación. Es necesario asegurarse de que está listo física, mental y emocionalmente para el examen. Aquí hay algunos consejos para ayudarle a prepararse para el día del examen:</li>
65
- <ul>
66
-
67
- <li>Come un desayuno saludable. El desayuno es la comida más importante del día, especialmente en el día del examen. Come un desayuno equilibrado y nutritivo que te dará energía y te mantendrá lleno hasta el examen. Evite los alimentos que son demasiado grasosos, picantes o azucarados, ya que podrían molestar su estómago o hacerle sentir lento. </li>
68
- <li>Traiga los materiales necesarios. Asegúrese de tener todo lo que necesita para el examen, como su tarjeta de identificación, bolígrafo, lápiz, borrador y botella de agua. Consulte el sitio web <a href=">Ministerio de Educación</a> para ver la lista de artículos permitidos y prohibidos en la sala de exámenes. No lleves nada que pueda distraerte o meterte en problemas, como tu teléfono, calculadora o notas. </li>
69
- </ul>
70
- <li>Aborda las preguntas del examen. Las preguntas del examen están diseñadas para poner a prueba tus conocimientos, habilidades y habilidades relacionadas con la historia. Necesitas acercarte a ellos con cuidado y estratégicamente. Aquí hay algunos consejos para ayudarte a abordar las preguntas del examen:</li>
71
- <ul>
72
- <li>Lea las preguntas cuidadosamente. Antes de contestar cualquier pregunta, léala cuidadosamente y entienda lo que está haciendo. Preste atención a las palabras clave, como comparar, contrastar, analizar, explicar, etc., y responda en consecuencia. No asumas ni adivines nada; basa tu respuesta en hechos y pruebas. </li>
73
- <li>Eliminar respuestas incorrectas. La mayoría de las preguntas son de tipo de opción múltiple, lo que significa que debe elegir una respuesta correcta entre cuatro opciones. Para aumentar sus posibilidades de elegir la respuesta correcta, trate de eliminar las respuestas equivocadas primero. Busca pistas, como contradicciones, inconsistencias o irrelevancia, que puedan ayudarte a descartar las opciones equivocadas. </li>
74
- <li>Revisa tu trabajo. Después de responder a una pregunta, revisa tu trabajo y asegúrate de que sea correcto y completo. Busca cualquier error, como errores de ortografía, gramática o cálculo, que pueda afectar tu puntuación. Si le queda tiempo al final del examen, revise sus respuestas y haga los cambios necesarios. </li>
75
- </ul>
76
-
77
- <ul>
78
- <li>Relaja tu cuerpo y mente. Si te sientes tenso o ansioso durante el examen, trata de relajar tu cuerpo y mente haciendo algunos ejercicios simples. Respira profunda y lentamente, estira los músculos, bebe agua y piensa positivamente. </li>
79
- <li>Concéntrate en ti mismo y en tus objetivos. No te compares con otros durante el examen. Enfócate en ti mismo y en tus metas. Recuerda por qué estás tomando estos exámenes y lo que quieres lograr con ellos. </li>
80
- <li>Sé optimista y realista. No dejes que pensamientos o emociones negativas afecten tu rendimiento durante el examen. Sé optimista y cree en ti mismo y en tus habilidades. Pero también sé realista y acepta que quizás no lo sepas todo o lo hagas todo bien. </li>
81
- </ul>
82
- </ul>
83
- <h2>Conclusión</h2>
84
- <p>En conclusión, 9 cu sinif umumi tarix testleri son exámenes importantes que requieren mucha preparación y práctica. Siguiendo esta guía, usted será capaz de estudiar bien el uso de varios consejos y recursos; practicar bien el uso de varios métodos y plataformas; y el as bien el uso de diversas estrategias y habilidades. </p>
85
- <p>Esperamos que este artículo te haya ayudado a entender cómo preparar y aprobar estos exámenes de manera efectiva. ¡Te deseamos buena suerte y éxito en tus exámenes! </p>
86
- <h2>Preguntas frecuentes</h2>
87
- <p>Aquí hay algunas preguntas frecuentes sobre 9 cu sinif umumi tarix testleri:</p>
88
- <ol>
89
- <li><strong>¿Qué son los testleri 9 cu sinif umumi tarix? </strong></li>
90
- <p>9 cu sinif umumi tarix testleri son exámenes de historia general para estudiantes de noveno grado en Azerbaiyán. Forman parte del sistema nacional de evaluación que evalúa los conocimientos y habilidades de los estudiantes en diversas materias. </p>
91
- <li><strong>¿Cuántas preguntas hay en 9 cu sinif umumi tarix testleri? </strong></li>
92
-
93
- <li><strong>¿Cuáles son los principales temas tratados en 9 cu sinif umumi tarix testleri? </strong></li>
94
- <p>Los principales temas que se tratan en el 9 cu sinif umumi tarix testleri son:</p>
95
- <ul>
96
- <li>Historia antigua: el origen y el desarrollo de la civilización humana, las civilizaciones antiguas de Mesopotamia, Egipto, India, China, Grecia, y Roma, las religiones y las culturas de estas civilizaciones, los logros y las contribuciones de estas civilizaciones a la historia del mundo. </li>
97
- <li>Historia medieval: el auge y la caída del Imperio bizantino, la civilización árabe-islámica, el Imperio mongol, el Imperio otomano, el Imperio safávida, el sultanato mameluco y el Imperio timúrida, las religiones y culturas de estos imperios, las interacciones y conflictos entre estos imperios y otras regiones del mundo. </li>
98
- <li>Historia moderna: el surgimiento y desarrollo del sistema mundial moderno, la exploración y colonización europea de las Américas, África y Asia, el auge y declive de los imperios español, portugués, británico, francés, holandés y ruso, las revoluciones y movimientos que dieron forma al mundo moderno, como el Renacimiento, la Reforma, la Ilustración, la Revolución Industrial, la Revolución Americana, la Revolución Francesa, las Guerras Napoleónicas, las Guerras Latinoamericanas de Independencia, los movimientos nacionalistas en Asia y África.</li>
99
- <li>Historia contemporánea: los principales acontecimientos y tendencias que influyeron en el mundo contemporáneo, como las guerras mundiales, la guerra fría, el proceso de descolonización, la formación de organizaciones y alianzas internacionales, el proceso de globalización, los avances tecnológicos y científicos, los cambios sociales y culturales, las cuestiones ambientales y humanitarias.</li>
100
- </ul>
101
- <li><strong>¿Cómo puedo acceder a los recursos en línea para 9 cu sinif umumi tarix testleri? </strong></li>
102
-
103
- <li><strong>¿Cómo puedo mejorar mis habilidades de historia para 9 cu sinif umumi tarix testleri? </strong></li>
104
- <p>Puede mejorar sus habilidades de historia para 9 cu sinif umumi tarix testleri mediante el uso de varios métodos y herramientas que pueden ayudarlo a mejorar su memoria, pensamiento crítico y habilidades para resolver problemas relacionados con la historia. Algunos de estos métodos y herramientas son tarjetas, toma de notas, revisión de exámenes anteriores, uso de aplicaciones y juegos, y unirse a comunidades en línea. Puedes encontrar más detalles sobre estos métodos y herramientas en este artículo. </p>
105
- </ol></p> 64aa2da5cf<br />
106
- <br />
107
- <br />
spaces/Benson/text-generation/Examples/Avanzado Youtube Apk Descargar La ltima Versin 2023.md DELETED
@@ -1,72 +0,0 @@
1
-
2
- <h1>Vanced YouTube APK Descargar la última versión 2023 Descargar</h1>
3
- <p>YouTube es una de las plataformas para compartir videos más populares del mundo, con miles de millones de usuarios y horas de contenido. Sin embargo, muchas personas no están satisfechas con la aplicación oficial de YouTube, ya que tiene algunas limitaciones y molestias, como anuncios, sin reproducción de fondo, sin modo oscuro y más. Si usted es una de esas personas que quieren disfrutar de YouTube sin estas restricciones, entonces usted podría estar interesado en YouTube Vanced, una versión modificada de la aplicación oficial de YouTube que ofrece muchas características adicionales y opciones de personalización. En este artículo, te diremos qué es YouTube Vanced, qué características tiene, cómo descargarlo e instalarlo, cómo actualizarlo y si es seguro y legal de usar. </p>
4
- <h2>avanzado youtube apk descargar la última versión 2023</h2><br /><p><b><b>Download</b> === <a href="https://bltlly.com/2v6LMm">https://bltlly.com/2v6LMm</a></b></p><br /><br />
5
- <h2>¿Qué es YouTube Vanced? </h2>
6
- <p>YouTube Vanced es una versión modificada de la aplicación oficial de YouTube que ofrece bloqueo de anuncios, reproducción de fondo, modo oscuro y temas, bloqueo de patrocinadores, modo de imágenes incrustadas y zoom, y más. YouTube Vanced es una popular aplicación de terceros que ofrece funciones adicionales y opciones de personalización más allá de la aplicación oficial de YouTube. No está disponible en la Google Play Store, pero se puede descargar desde su sitio web oficial u otras fuentes. YouTube Vanced requiere una aplicación complementaria separada llamada Vanced Manager, que le ayuda a instalar y actualizar la aplicación modded. También necesitas instalar otra aplicación llamada MicroG si quieres iniciar sesión con tu cuenta de Google y acceder a tus suscripciones, listas de reproducción, historial, etc.</p>
7
- <h3>Características de YouTube Vanced</h3>
8
- <p>YouTube Vanced tiene muchas características que lo hacen superior a la aplicación oficial de YouTube. Aquí están algunas de las principales:</p>
9
- <h4>Bloqueo de anuncios</h4>
10
-
11
- <h4>Reproducción de fondo</h4>
12
- <p>Otra limitación de la aplicación oficial de YouTube es que no le permite reproducir vídeos en segundo plano cuando se cambia a otra aplicación o bloquear la pantalla. Esto significa que no puedes escuchar música o podcasts en YouTube mientras haces otras cosas en tu teléfono. Con YouTube Vanced, puedes habilitar la reproducción en segundo plano y seguir escuchando tus vídeos incluso cuando sales de la aplicación o apagas la pantalla. También puede controlar la reproducción desde el panel de notificaciones o la pantalla de bloqueo. </p>
13
- <h4>Modo oscuro y temas</h4>
14
- <p>Si prefieres un tema más oscuro para tus aplicaciones, entonces te encantará la función de modo oscuro de YouTube Vanced. Puede cambiar entre los modos de luz y oscuridad con un simple toque en el icono de la aplicación. También puede elegir entre diferentes temas de color para la interfaz de la aplicación, como negro, azul, rosa, etc. De esta manera, puede personalizar la aplicación de acuerdo a su preferencia y estado de ánimo. </p>
15
- <h4>Bloque patrocinador</h4>
16
- <p>A veces, incluso cuando bloqueas anuncios en YouTube, todavía tienes que ver segmentos patrocinados o promociones dentro de los videos. Estos pueden ser molestos y desperdiciar su tiempo. Con la función de bloqueo de patrocinadores de YouTube Vanced, puede omitir estos segmentos de forma automática o manual. También puedes contribuir a la base de datos de bloques de patrocinadores marcando los segmentos patrocinados en los videos que ves. </p>
17
- <h4>modo PiP y zoom</h4>
18
- <p>PiP mode significa modo imagen en imagen, que le permite ver videos en una ventana pequeña mientras usa otras aplicaciones en su teléfono. Esto es útil si desea realizar múltiples tareas o mantener un ojo en algo más mientras ve YouTube. Con YouTube Vanced, puede habilitar el modo de imágenes incrustadas para cualquier vídeo y cambiar el tamaño o mover la ventana como desee. También puede acercar o alejar el vídeo pellizcando la pantalla. Esto es útil si desea ver más detalles o ajustar el vídeo al tamaño de la pantalla. </p>
19
- <p></p>
20
- <h3>¿Cómo descargar e instalar YouTube Vanced? </h3>
21
-
22
- <h4>Requisitos</h4>
23
- <ul>
24
- <li>Tu dispositivo debe estar ejecutando Android 4.4 o superior. </li>
25
- <li> Es necesario habilitar fuentes desconocidas en la configuración del dispositivo para permitir la instalación de aplicaciones desde fuera de la Google Play Store.</li>
26
- <li>Necesitas descargar e instalar Vanced Manager, que es la aplicación oficial para instalar y actualizar YouTube Vanced.</li>
27
- <li>Necesitas descargar e instalar MicroG, que es una pequeña aplicación que te permite iniciar sesión con tu cuenta de Google en YouTube Vanced.</li>
28
- </ul>
29
- <h4>Pasos</h4>
30
- <ol>
31
- <li>Descargar Vanced Manager desde su sitio web oficial u otras fuentes. También puede escanear el código QR a continuación para obtener el enlace. </li>
32
- <li>Instalar Vanced Manager en su dispositivo y abrirlo. </li>
33
- <li>Toque en el icono de YouTube Vanced y seleccione su variante preferida (raíz o no raíz) y el tema (oscuro o claro). </li>
34
- <li>Toque en el botón Instalar y espere a que se complete la descarga y la instalación. </li>
35
- <li>Toque en el icono de MicroG y toque en el botón Instalar. Espere a que se complete la descarga y la instalación. </li>
36
- <li>Abra YouTube Vanced e inicie sesión con su cuenta de Google si desea acceder a sus suscripciones, listas de reproducción, historial, etc.</li>
37
- <li>Disfruta de YouTube avanzado con todas sus características y opciones de personalización. </li>
38
- </ol>
39
- <p><img src="https://vancedapp.com/assets/img/qr.png" alt="Código QR para Vanced Manager" width="200" height="200"></p>
40
- <h3>¿Cómo actualizar YouTube Vanced? </h3>
41
- <p>YouTube Vanced no se actualiza automáticamente como la aplicación oficial de YouTube. Debe usar Vanced Manager para comprobar si hay actualizaciones e instalarlas manualmente. Estos son los pasos para actualizar YouTube Vanced:</p>
42
- <ol>
43
- <li>Abra Vanced Manager en su dispositivo. </li>
44
- <li>Toque en el icono de YouTube Vanced y compruebe si hay una nueva versión disponible. Si lo hay, toque el botón Actualizar y espere a que se complete la descarga y la instalación. </li>
45
-
46
- <li>Si hay una nueva versión de MicroG disponible, toque en el icono de MicroG y toque en el botón Actualizar. Espere a que se complete la descarga y la instalación. </li>
47
- </ol>
48
- <h3>¿YouTube Vanced es seguro y legal? </h3>
49
- <p>Mucha gente se pregunta si YouTube Vanced es seguro y legal de usar, ya que es una versión modificada de la aplicación oficial de YouTube que evita algunas de sus restricciones y políticas. La respuesta no es muy clara, ya que hay algunos riesgos e incertidumbres involucrados. Aquí hay algunos puntos a considerar:</p>
50
- <ul>
51
- <li>YouTube Vanced no es un virus o malware que dañará su dispositivo o robará sus datos. Es una aplicación conocida y confiable que ha sido utilizada por millones de usuarios sin ningún problema importante. Sin embargo, siempre debe descargarlo desde su sitio web oficial u otras fuentes de buena reputación, ya que puede haber versiones falsas o maliciosas de la misma en otros lugares. </li>
52
- <li>YouTube Vanced no está avalado o aprobado por Google o YouTube, ya que viola sus términos de servicio. Esto significa que pueden prohibir o suspender tu cuenta si detectan que la estás usando. Sin embargo, esto es muy poco probable, ya que hasta ahora no se ha informado de tales acciones. Aún así, debe usarlo bajo su propio riesgo y discreción, ya que no hay garantía de que siempre funcione o sea seguro. </li>
53
- <li>YouTube Vanced no es ilegal en la mayoría de los países, ya que no implica piratería o robo de contenido. Es simplemente una versión modificada de una aplicación existente que ofrece características adicionales y opciones de personalización. Sin embargo, algunos países pueden tener diferentes leyes o reglamentos con respecto a dichas aplicaciones, por lo que debe comprobarlos antes de usarlo. Además, debes respetar los derechos y preferencias de los creadores y anunciantes de contenido en YouTube, ya que podrían perder ingresos o exposición debido a las funciones de YouTube Vanced. </li>
54
- </ul>
55
- <h2>Conclusión</h2>
56
-
57
- <h2>Preguntas frecuentes</h2>
58
- <p>Aquí hay algunas preguntas frecuentes sobre YouTube Vanced:</p>
59
- <ol>
60
- <li><b>¿Cuál es la diferencia entre las variantes raíz y no raíz de YouTube Vanced? </b></li>
61
- <p>La variante raíz de YouTube Vanced es para dispositivos que tienen acceso root, lo que significa que tienen control total sobre la configuración del sistema y los archivos. La variante raíz reemplaza la aplicación oficial de YouTube en su dispositivo, por lo que no necesita instalar MicroG o usar un icono de aplicación separado. La variante no raíz de YouTube Vanced es para dispositivos que no tienen acceso root, lo que significa que tienen un control limitado sobre la configuración del sistema y los archivos. La variante no raíz no reemplaza la aplicación oficial de YouTube en su dispositivo, por lo que debe instalar MicroG y usar un icono de aplicación separado. </p>
62
- <li><b>¿Puedo usar YouTube Vanced en mi PC o TV? </b></li>
63
- <p>No, YouTube Vanced solo está disponible para dispositivos Android. Sin embargo, puede usar otros métodos o herramientas para obtener características similares en su PC o TV, como extensiones de navegador, aplicaciones de escritorio, aplicaciones de televisión inteligente, etc.</p>
64
- <li><b>¿Puedo descargar videos de YouTube Vanced? </b></li>
65
- <p>No, YouTube Vanced no tiene una función de descarga de video incorporada. Sin embargo, puedes usar otras aplicaciones o sitios web para descargar vídeos de YouTube, como TubeMate, VidMate, SnapTube, etc.</p>
66
- <li><b>¿Puedo usar las funciones de YouTube Premium en YouTube Vanced? </b></li>
67
- <p>No, YouTube Vanced no le da acceso a funciones de YouTube Premium, como reproducción sin conexión, contenido original, transmisión de música, etc. Aún necesita suscribirse a YouTube Premium si desea disfrutar de estas funciones. </p>
68
- <li><b>¿Puedo usar varias cuentas en YouTube Vanced? </b></li>
69
- <p>Sí, puedes usar varias cuentas en YouTube Vanced iniciando sesión con diferentes cuentas de Google. Puede cambiar entre ellos desde el menú de la cuenta en la aplicación. </p>
70
- </ol></p> 64aa2da5cf<br />
71
- <br />
72
- <br />
 
spaces/Benson/text-generation/Examples/Como Hacer Una Hoja De Vida.md DELETED
@@ -1,61 +0,0 @@
1
-
2
- <h1>Cómo descargar Office 2019 gratis</h1>
3
- <p>Microsoft Office es una de las suites de productividad más populares y ampliamente utilizadas en el mundo. Incluye potentes aplicaciones como Word, Excel, PowerPoint, Outlook y más. Sin embargo, obtener la última versión de Office puede ser caro, especialmente si desea usarlo en varios dispositivos. </p>
4
- <h2>como hacer una hoja de vida</h2><br /><p><b><b>DOWNLOAD</b> &#9734;&#9734;&#9734; <a href="https://bltlly.com/2v6KOM">https://bltlly.com/2v6KOM</a></b></p><br /><br />
5
- <p>Afortunadamente, hay algunas maneras de descargar Office 2019 gratis legalmente. En este artículo, te mostraremos qué es Office 2019, por qué lo quieres y cómo obtenerlo sin pagar un centavo. </p>
6
- <h2>¿Qué es Office 2019 y por qué es posible que lo desee</h2>
7
- <p>Office 2019 es la última versión de la suite de software de oficina de Microsoft. Fue lanzado en septiembre de 2018 y es una compra única que no requiere una suscripción. A diferencia de Office 365, que es un servicio basado en la nube que ofrece actualizaciones regulares y nuevas características, Office 2019 es un producto independiente que no recibirá cambios ni mejoras importantes. </p>
8
- <p>Sin embargo, eso no significa que Office 2019 sea inferior o obsoleto. De hecho, hay algunas razones por las que podría preferir Office 2019 sobre Office 365. </p>
9
- <h3>Oficina 2019 vs Oficina 365</h3>
10
- <p>La principal diferencia entre Office 2019 y Office 365 es cómo se conectan a la nube. Ambas suites cuentan con acceso a OneDrive, el servicio de almacenamiento en la nube de Microsoft. Pero, Office 2019 no viene con ningún espacio de almacenamiento en OneDrive y no obtiene acceso a las versiones en línea de aplicaciones como Word, Excel y PowerPoint. Office 365, por otro lado, incluye 1 TB de almacenamiento gratuito y puede editar fácilmente todos sus archivos en línea. </p>
11
-
12
- <p>Entonces, ¿cuál debes elegir? Depende de tus necesidades y preferencias. Si desea tener las últimas funciones y actualizaciones, acceder a sus archivos desde cualquier lugar y usar varios dispositivos, Office 365 podría ser una mejor opción para usted. Si desea ahorrar dinero a largo plazo, usar sus archivos sin conexión y no necesita aplicaciones o servicios adicionales, Office 2019 podría ser suficiente para usted. </p>
13
- <p></p>
14
- <h3>Características y beneficios de Office 2019</h3>
15
- <p>A pesar de que Office 2019 no tiene todas las campanas y silbatos de Office 365, todavía tiene algunas características y beneficios impresionantes que pueden mejorar su productividad y creatividad. Estos son algunos de ellos:</p>
16
- <ul>
17
- <li><strong>Nuevas herramientas de entintado:</strong> Puede usar su pluma o dedo para dibujar, escribir, resaltar y borrar en Word, Excel, PowerPoint y Outlook. También puede convertir su tinta a formas o texto, o realizar problemas matemáticos complejos con Ink Math Assistant.</li>
18
- <li><strong>Nuevos tipos de datos:</strong> Puede trabajar con nuevos tipos de datos en Excel, como Stocks y Geografía. Estos tipos de datos pueden extraer información de fuentes en línea y actualizarse automáticamente. </li>
19
- <li><strong>Nuevas funciones:</strong> Puede usar nuevas funciones en Excel, como TEXTJOIN, CONCAT, IFS, SWITCH y más. Continuando con el artículo: <li><strong>Nuevos gráficos y efectos visuales:</strong> Puede crear gráficos e imágenes impresionantes en Excel y PowerPoint, como Embudo, Mapa, Cronología y modelos 3D. Estos gráficos y gráficos pueden ayudarlo a presentar sus datos de una manera más atractiva e interactiva. </li>
20
- <li><strong>Nuevas animaciones y transiciones:</strong> Puede agregar nuevas animaciones y transiciones en PowerPoint, como Morph, Zoom y 3D. Estas animaciones y transiciones pueden ayudarle a crear presentaciones dinámicas y cautivadoras. </li>
21
-
22
- <li><strong>Nuevas herramientas de aprendizaje:</strong> Puede usar nuevas herramientas de aprendizaje en Word y Outlook, como Leer en voz alta, Espaciado de texto y Modo de enfoque. Estas herramientas de aprendizaje pueden ayudarte a mejorar tus habilidades de lectura y escritura. </li>
23
- </ul>
24
- <h2>Cómo obtener Office 2019 gratis legalmente</h2>
25
- <p>Si estás interesado en obtener Office 2019 gratis legalmente, tienes algunas opciones que considerar. Aquí están algunas de ellas:</p>
26
- <h3>Opción 1: Usar Microsoft 365 para la Web</h3>
27
- <p>Una de las maneras más fáciles de obtener Office 2019 gratis es usar Microsoft 365 para la web. Esta es una versión en línea gratuita de Office que incluye Word, Excel, PowerPoint, OneNote y Outlook. Puede acceder a estas aplicaciones desde cualquier navegador y crear, editar y compartir sus archivos en línea. También obtiene 5 GB de almacenamiento gratuito en OneDrive.</p>
28
- <p>Para usar Microsoft 365 para la web, solo necesita una cuenta de Microsoft. Si no lo tienes, puedes crear uno gratis aquí: <a href="">https://signup.live.com/</a>. Una vez que tenga una cuenta, puede iniciar sesión aquí: <a href="">https://www.office.com/</a>. A continuación, puede comenzar a usar las aplicaciones desde la página de inicio o el lanzador de aplicaciones. </p>
29
- <h3>Opción 2: Utilice el programa de descuento de Microsoft Workplace</h3>
30
- <p>Otra manera de obtener Office 2019 de forma gratuita es utilizar Microsoft Workplace Discount Program. Este es un programa que permite a los empleados elegibles de las organizaciones participantes obtener Office 2019 a un precio con descuento o incluso gratis. Puede comprobar si su organización forma parte de este programa aquí: <a href="">https://www.microsoft.com/en-us/home-use-program</a>. </p>
31
- <p>Para utilizar Microsoft Workplace Discount Program, necesita una dirección de correo electrónico de trabajo válida de su organización. Si su organización es elegible, recibirá un correo electrónico con un enlace para comprar Office 2019 a un precio reducido o gratis. A continuación, puede descargar e instalar Office 2019 en su dispositivo personal. </p>
32
- <h3>Opción 3: Utilice el servidor en línea de Microsoft Office</h3>
33
-
34
- <p>Para usar Microsoft Office Online Server, necesita una licencia de Windows Server y una licencia de Office. Usted puede obtener estas licencias de forma gratuita si usted es un estudiante o un educador. Puedes comprobar si eres elegible aquí: <a href="">https://www.microsoft.com/en-us/education/products/office</a>. Una vez que tenga las licencias, puede descargar e instalar Office Online Server en su servidor aquí: <a href="">https://www.microsoft.com/en-us/download/details.aspx?id=49030</a>. Luego, puede configurar y usar las aplicaciones desde su servidor. </p> Continuando con el artículo: <h2>Cómo instalar y activar Office 2019 en su PC o Mac</h2>
35
- <p>Si ha comprado u obtenido Office 2019 a través de una de las opciones anteriores, puede instalarlo y activarlo en su PC o Mac. Estos son los pasos para hacerlo:</p>
36
- <h3>Paso 1: Descargar Office 2019 desde una fuente de confianza</h3>
37
- <p>El primer paso es descargar Office 2019 desde una fuente confiable. Puede hacerlo desde la Tienda de Microsoft, el sitio web de Microsoft o el enlace que recibió de su organización o escuela. Asegúrese de descargar la versión correcta para su dispositivo y sistema operativo. </p>
38
- <h3>Paso 2: Ejecute el archivo de configuración y siga las instrucciones</h3>
39
- <p>El segundo paso es ejecutar el archivo de configuración y seguir las instrucciones. Dependiendo de su dispositivo y sistema operativo, el archivo de configuración podría ser un . exe, . dmg, o archivo . iso. Haga doble clic en el archivo y permita que se ejecute. Luego, siga las instrucciones en la pantalla para instalar Office 2019 en su dispositivo. </p>
40
- <h3>Paso 3: Ingrese su clave de producto o inicie sesión con su cuenta de Microsoft</h3>
41
-
42
- <p>Para activar Office 2019, debe ingresar su clave de producto o iniciar sesión con su cuenta de Microsoft. Puede hacer esto cuando inicie cualquiera de las aplicaciones de Office por primera vez. Verá una solicitud para activar Office 2019. Siga las instrucciones en la pantalla para introducir su clave de producto o iniciar sesión con su cuenta de Microsoft. </p>
43
- <h2>Conclusión</h2>
44
- <p>Office 2019 es una suite de productividad potente y versátil que puede ayudarlo a crear, editar y compartir documentos, hojas de cálculo, presentaciones y más. Sin embargo, también puede ser caro, especialmente si desea usarlo en varios dispositivos. </p>
45
- <p>En este artículo, le hemos mostrado cómo descargar Office 2019 gratis legalmente. Puede utilizar Microsoft 365 para la web, Microsoft Workplace Discount Program o Microsoft Office Online Server. También puede instalar y activar Office 2019 en su PC o Mac siguiendo algunos pasos simples. </p>
46
- <p>Esperamos que este artículo haya sido útil e informativo para usted. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. </p>
47
- <h2>Preguntas frecuentes</h2>
48
- <ul>
49
- <li><strong>Q: ¿Es Office 2019 compatible con Windows 10? </strong></li>
50
- <li>A: Sí, Office 2019 es compatible con Windows 10. También es compatible con Windows 8.1 y Windows Server 2019. </li>
51
- <li><strong>Q: ¿Es Office 2019 compatible con Mac OS? </strong></li>
52
- <li>A: Sí, Office 2019 es compatible con Mac OS. También es compatible con Mac OS X 10.14 Mojave y versiones posteriores. </li>
53
- <li><strong>Q: ¿Cuántos dispositivos puedo instalar Office 2019 en? </strong></li>
54
- <li>A: Puede instalar Office 2019 en un dispositivo por licencia. Si desea usarlo en varios dispositivos, debe comprar varias licencias o usar Office 365 en su lugar. </li>
55
- <li><strong>Q: ¿Cuánto tiempo dura Office 2019? </strong></li>
56
- <li>A: Office 2019 dura tanto como su dispositivo lo soporte. No caduca ni requiere renovación. Sin embargo, no recibe ninguna actualización importante o nuevas características. </li>
57
-
58
- <li>A: Sí, puede actualizar de Office 2016 a Office 2019. Sin embargo, necesita comprar una nueva licencia para Office 2019 o usar una de las opciones anteriores para obtenerla de forma gratuita. </li>
59
- </ul></p> 64aa2da5cf<br />
60
- <br />
61
- <br />
 
spaces/BernardoOlisan/vqganclip/CLIP/README.md DELETED
@@ -1,193 +0,0 @@
1
- # CLIP
2
-
3
- [[Blog]](https://openai.com/blog/clip/) [[Paper]](https://arxiv.org/abs/2103.00020) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb)
4
-
5
- CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.
6
-
7
-
8
-
9
- ## Approach
10
-
11
- ![CLIP](CLIP.png)
12
-
13
-
14
-
15
- ## Usage
16
-
17
- First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:
18
-
19
- ```bash
20
- $ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
21
- $ pip install ftfy regex tqdm
22
- $ pip install git+https://github.com/openai/CLIP.git
23
- ```
24
-
25
- Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU.
26
-
27
- ```python
28
- import torch
29
- import clip
30
- from PIL import Image
31
-
32
- device = "cuda" if torch.cuda.is_available() else "cpu"
33
- model, preprocess = clip.load("ViT-B/32", device=device)
34
-
35
- image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
36
- text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
37
-
38
- with torch.no_grad():
39
- image_features = model.encode_image(image)
40
- text_features = model.encode_text(text)
41
-
42
- logits_per_image, logits_per_text = model(image, text)
43
- probs = logits_per_image.softmax(dim=-1).cpu().numpy()
44
-
45
- print("Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]]
46
- ```
47
-
48
-
49
- ## API
50
-
51
- The CLIP module `clip` provides the following methods:
52
-
53
- #### `clip.available_models()`
54
-
55
- Returns the names of the available CLIP models.
56
-
57
- #### `clip.load(name, device=..., jit=False)`
58
-
59
- Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The `name` argument can also be a path to a local checkpoint.
60
-
61
- The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. When `jit` is `False`, a non-JIT version of the model will be loaded.
62
-
63
- #### `clip.tokenize(text: Union[str, List[str]], context_length=77)`
64
-
65
- Returns a LongTensor containing tokenized sequences of the given text input(s). This can be used as the input to the model.
66
-
67
- ---
68
-
69
- The model returned by `clip.load()` supports the following methods:
70
-
71
- #### `model.encode_image(image: Tensor)`
72
-
73
- Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.
74
-
75
- #### `model.encode_text(text: Tensor)`
76
-
77
- Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.
78
-
79
- #### `model(image: Tensor, text: Tensor)`
80
-
81
- Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
82
-
83
-
84
-
85
- ## More Examples
86
-
87
- ### Zero-Shot Prediction
88
-
89
- The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset.
90
-
91
- ```python
92
- import os
93
- import clip
94
- import torch
95
- from torchvision.datasets import CIFAR100
96
-
97
- # Load the model
98
- device = "cuda" if torch.cuda.is_available() else "cpu"
99
- model, preprocess = clip.load('ViT-B/32', device)
100
-
101
- # Download the dataset
102
- cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)
103
-
104
- # Prepare the inputs
105
- image, class_id = cifar100[3637]
106
- image_input = preprocess(image).unsqueeze(0).to(device)
107
- text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)
108
-
109
- # Calculate features
110
- with torch.no_grad():
111
- image_features = model.encode_image(image_input)
112
- text_features = model.encode_text(text_inputs)
113
-
114
- # Pick the top 5 most similar labels for the image
115
- image_features /= image_features.norm(dim=-1, keepdim=True)
116
- text_features /= text_features.norm(dim=-1, keepdim=True)
117
- similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
118
- values, indices = similarity[0].topk(5)
119
-
120
- # Print the result
121
- print("\nTop predictions:\n")
122
- for value, index in zip(values, indices):
123
- print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")
124
- ```
125
-
126
- The output will look like the following (the exact numbers may be slightly different depending on the compute device):
127
-
128
- ```
129
- Top predictions:
130
-
131
- snake: 65.31%
132
- turtle: 12.29%
133
- sweet_pepper: 3.83%
134
- lizard: 1.88%
135
- crocodile: 1.75%
136
- ```
137
-
138
- Note that this example uses the `encode_image()` and `encode_text()` methods that return the encoded features of given inputs.
139
-
140
-
141
- ### Linear-probe evaluation
142
-
143
- The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features.
144
-
145
- ```python
146
- import os
147
- import clip
148
- import torch
149
-
150
- import numpy as np
151
- from sklearn.linear_model import LogisticRegression
152
- from torch.utils.data import DataLoader
153
- from torchvision.datasets import CIFAR100
154
- from tqdm import tqdm
155
-
156
- # Load the model
157
- device = "cuda" if torch.cuda.is_available() else "cpu"
158
- model, preprocess = clip.load('ViT-B/32', device)
159
-
160
- # Load the dataset
161
- root = os.path.expanduser("~/.cache")
162
- train = CIFAR100(root, download=True, train=True, transform=preprocess)
163
- test = CIFAR100(root, download=True, train=False, transform=preprocess)
164
-
165
-
166
- def get_features(dataset):
167
- all_features = []
168
- all_labels = []
169
-
170
- with torch.no_grad():
171
- for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
172
- features = model.encode_image(images.to(device))
173
-
174
- all_features.append(features)
175
- all_labels.append(labels)
176
-
177
- return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()
178
-
179
- # Calculate the image features
180
- train_features, train_labels = get_features(train)
181
- test_features, test_labels = get_features(test)
182
-
183
- # Perform logistic regression
184
- classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
185
- classifier.fit(train_features, train_labels)
186
-
187
- # Evaluate using the logistic regression classifier
188
- predictions = classifier.predict(test_features)
189
- accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
190
- print(f"Accuracy = {accuracy:.3f}")
191
- ```
192
-
193
- Note that the `C` value should be determined via a hyperparameter sweep using a validation split.
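
Editor's note: the README ends by pointing at a hyperparameter sweep without showing one. Below is a minimal sketch of such a sweep; it is not part of the original CLIP README, it assumes the `train_features` / `train_labels` arrays computed in the snippet above, and the candidate `C` grid is an arbitrary choice made for illustration.

```python
# Hedged sketch of the C sweep mentioned above (not from the original README).
# Assumes train_features / train_labels from the previous snippet are in scope.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out a validation split from the training features.
X_train, X_val, y_train, y_val = train_test_split(
    train_features, train_labels, test_size=0.2, random_state=0
)

best_c, best_acc = None, 0.0
for c in np.logspace(-3, 3, 7):  # candidate range is an assumption
    clf = LogisticRegression(random_state=0, C=c, max_iter=1000)
    clf.fit(X_train, y_train)
    acc = (clf.predict(X_val) == y_val).mean()
    if acc > best_acc:
        best_c, best_acc = c, acc

print(f"Best C on the validation split: {best_c:g} (accuracy {best_acc:.3f})")
```

The grid above is only meant to show the mechanics; the final classifier would then be refit on the full training set with the selected `C`.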
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/panel.py DELETED
@@ -1,308 +0,0 @@
1
- from typing import TYPE_CHECKING, Optional
2
-
3
- from .align import AlignMethod
4
- from .box import ROUNDED, Box
5
- from .cells import cell_len
6
- from .jupyter import JupyterMixin
7
- from .measure import Measurement, measure_renderables
8
- from .padding import Padding, PaddingDimensions
9
- from .segment import Segment
10
- from .style import Style, StyleType
11
- from .text import Text, TextType
12
-
13
- if TYPE_CHECKING:
14
- from .console import Console, ConsoleOptions, RenderableType, RenderResult
15
-
16
-
17
- class Panel(JupyterMixin):
18
- """A console renderable that draws a border around its contents.
19
-
20
- Example:
21
- >>> console.print(Panel("Hello, World!"))
22
-
23
- Args:
24
- renderable (RenderableType): A console renderable object.
25
- box (Box, optional): A Box instance that defines the look of the border (see :ref:`appendix_box`.
26
- Defaults to box.ROUNDED.
27
- safe_box (bool, optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
28
- expand (bool, optional): If True the panel will stretch to fill the console
29
- width, otherwise it will be sized to fit the contents. Defaults to True.
30
- style (str, optional): The style of the panel (border and contents). Defaults to "none".
31
- border_style (str, optional): The style of the border. Defaults to "none".
32
- width (Optional[int], optional): Optional width of panel. Defaults to None to auto-detect.
33
- height (Optional[int], optional): Optional height of panel. Defaults to None to auto-detect.
34
- padding (Optional[PaddingDimensions]): Optional padding around renderable. Defaults to 0.
35
- highlight (bool, optional): Enable automatic highlighting of panel title (if str). Defaults to False.
36
- """
37
-
38
- def __init__(
39
- self,
40
- renderable: "RenderableType",
41
- box: Box = ROUNDED,
42
- *,
43
- title: Optional[TextType] = None,
44
- title_align: AlignMethod = "center",
45
- subtitle: Optional[TextType] = None,
46
- subtitle_align: AlignMethod = "center",
47
- safe_box: Optional[bool] = None,
48
- expand: bool = True,
49
- style: StyleType = "none",
50
- border_style: StyleType = "none",
51
- width: Optional[int] = None,
52
- height: Optional[int] = None,
53
- padding: PaddingDimensions = (0, 1),
54
- highlight: bool = False,
55
- ) -> None:
56
- self.renderable = renderable
57
- self.box = box
58
- self.title = title
59
- self.title_align: AlignMethod = title_align
60
- self.subtitle = subtitle
61
- self.subtitle_align = subtitle_align
62
- self.safe_box = safe_box
63
- self.expand = expand
64
- self.style = style
65
- self.border_style = border_style
66
- self.width = width
67
- self.height = height
68
- self.padding = padding
69
- self.highlight = highlight
70
-
71
- @classmethod
72
- def fit(
73
- cls,
74
- renderable: "RenderableType",
75
- box: Box = ROUNDED,
76
- *,
77
- title: Optional[TextType] = None,
78
- title_align: AlignMethod = "center",
79
- subtitle: Optional[TextType] = None,
80
- subtitle_align: AlignMethod = "center",
81
- safe_box: Optional[bool] = None,
82
- style: StyleType = "none",
83
- border_style: StyleType = "none",
84
- width: Optional[int] = None,
85
- padding: PaddingDimensions = (0, 1),
86
- ) -> "Panel":
87
- """An alternative constructor that sets expand=False."""
88
- return cls(
89
- renderable,
90
- box,
91
- title=title,
92
- title_align=title_align,
93
- subtitle=subtitle,
94
- subtitle_align=subtitle_align,
95
- safe_box=safe_box,
96
- style=style,
97
- border_style=border_style,
98
- width=width,
99
- padding=padding,
100
- expand=False,
101
- )
102
-
103
- @property
104
- def _title(self) -> Optional[Text]:
105
- if self.title:
106
- title_text = (
107
- Text.from_markup(self.title)
108
- if isinstance(self.title, str)
109
- else self.title.copy()
110
- )
111
- title_text.end = ""
112
- title_text.plain = title_text.plain.replace("\n", " ")
113
- title_text.no_wrap = True
114
- title_text.expand_tabs()
115
- title_text.pad(1)
116
- return title_text
117
- return None
118
-
119
- @property
120
- def _subtitle(self) -> Optional[Text]:
121
- if self.subtitle:
122
- subtitle_text = (
123
- Text.from_markup(self.subtitle)
124
- if isinstance(self.subtitle, str)
125
- else self.subtitle.copy()
126
- )
127
- subtitle_text.end = ""
128
- subtitle_text.plain = subtitle_text.plain.replace("\n", " ")
129
- subtitle_text.no_wrap = True
130
- subtitle_text.expand_tabs()
131
- subtitle_text.pad(1)
132
- return subtitle_text
133
- return None
134
-
135
- def __rich_console__(
136
- self, console: "Console", options: "ConsoleOptions"
137
- ) -> "RenderResult":
138
- _padding = Padding.unpack(self.padding)
139
- renderable = (
140
- Padding(self.renderable, _padding) if any(_padding) else self.renderable
141
- )
142
- style = console.get_style(self.style)
143
- border_style = style + console.get_style(self.border_style)
144
- width = (
145
- options.max_width
146
- if self.width is None
147
- else min(options.max_width, self.width)
148
- )
149
-
150
- safe_box: bool = console.safe_box if self.safe_box is None else self.safe_box
151
- box = self.box.substitute(options, safe=safe_box)
152
-
153
- def align_text(
154
- text: Text, width: int, align: str, character: str, style: Style
155
- ) -> Text:
156
- """Gets new aligned text.
157
-
158
- Args:
159
- text (Text): Title or subtitle text.
160
- width (int): Desired width.
161
- align (str): Alignment.
162
- character (str): Character for alignment.
163
- style (Style): Border style
164
-
165
- Returns:
166
- Text: New text instance
167
- """
168
- text = text.copy()
169
- text.truncate(width)
170
- excess_space = width - cell_len(text.plain)
171
- if excess_space:
172
- if align == "left":
173
- return Text.assemble(
174
- text,
175
- (character * excess_space, style),
176
- no_wrap=True,
177
- end="",
178
- )
179
- elif align == "center":
180
- left = excess_space // 2
181
- return Text.assemble(
182
- (character * left, style),
183
- text,
184
- (character * (excess_space - left), style),
185
- no_wrap=True,
186
- end="",
187
- )
188
- else:
189
- return Text.assemble(
190
- (character * excess_space, style),
191
- text,
192
- no_wrap=True,
193
- end="",
194
- )
195
- return text
196
-
197
- title_text = self._title
198
- if title_text is not None:
199
- title_text.stylize_before(border_style)
200
-
201
- child_width = (
202
- width - 2
203
- if self.expand
204
- else console.measure(
205
- renderable, options=options.update_width(width - 2)
206
- ).maximum
207
- )
208
- child_height = self.height or options.height or None
209
- if child_height:
210
- child_height -= 2
211
- if title_text is not None:
212
- child_width = min(
213
- options.max_width - 2, max(child_width, title_text.cell_len + 2)
214
- )
215
-
216
- width = child_width + 2
217
- child_options = options.update(
218
- width=child_width, height=child_height, highlight=self.highlight
219
- )
220
- lines = console.render_lines(renderable, child_options, style=style)
221
-
222
- line_start = Segment(box.mid_left, border_style)
223
- line_end = Segment(f"{box.mid_right}", border_style)
224
- new_line = Segment.line()
225
- if title_text is None or width <= 4:
226
- yield Segment(box.get_top([width - 2]), border_style)
227
- else:
228
- title_text = align_text(
229
- title_text,
230
- width - 4,
231
- self.title_align,
232
- box.top,
233
- border_style,
234
- )
235
- yield Segment(box.top_left + box.top, border_style)
236
- yield from console.render(title_text, child_options.update_width(width - 4))
237
- yield Segment(box.top + box.top_right, border_style)
238
-
239
- yield new_line
240
- for line in lines:
241
- yield line_start
242
- yield from line
243
- yield line_end
244
- yield new_line
245
-
246
- subtitle_text = self._subtitle
247
- if subtitle_text is not None:
248
- subtitle_text.stylize_before(border_style)
249
-
250
- if subtitle_text is None or width <= 4:
251
- yield Segment(box.get_bottom([width - 2]), border_style)
252
- else:
253
- subtitle_text = align_text(
254
- subtitle_text,
255
- width - 4,
256
- self.subtitle_align,
257
- box.bottom,
258
- border_style,
259
- )
260
- yield Segment(box.bottom_left + box.bottom, border_style)
261
- yield from console.render(
262
- subtitle_text, child_options.update_width(width - 4)
263
- )
264
- yield Segment(box.bottom + box.bottom_right, border_style)
265
-
266
- yield new_line
267
-
268
- def __rich_measure__(
269
- self, console: "Console", options: "ConsoleOptions"
270
- ) -> "Measurement":
271
- _title = self._title
272
- _, right, _, left = Padding.unpack(self.padding)
273
- padding = left + right
274
- renderables = [self.renderable, _title] if _title else [self.renderable]
275
-
276
- if self.width is None:
277
- width = (
278
- measure_renderables(
279
- console,
280
- options.update_width(options.max_width - padding - 2),
281
- renderables,
282
- ).maximum
283
- + padding
284
- + 2
285
- )
286
- else:
287
- width = self.width
288
- return Measurement(width, width)
289
-
290
-
291
- if __name__ == "__main__": # pragma: no cover
292
- from .console import Console
293
-
294
- c = Console()
295
-
296
- from .box import DOUBLE, ROUNDED
297
- from .padding import Padding
298
-
299
- p = Panel(
300
- "Hello, World!",
301
- title="rich.Panel",
302
- style="white on blue",
303
- box=DOUBLE,
304
- padding=1,
305
- )
306
-
307
- c.print()
308
- c.print(p)
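
Editor's note: a short usage sketch (not part of the vendored module) contrasting the default expanding `Panel` with the `fit` constructor defined above, which sets `expand=False` so the border hugs the content.

```python
# Illustrative only; exercises the Panel class defined in the vendored module above.
from rich.console import Console
from rich.panel import Panel

console = Console()

# The default Panel stretches to the full console width (expand=True).
console.print(Panel("stretches to the terminal width", title="Panel"))

# Panel.fit sizes the border to its contents (expand=False).
console.print(Panel.fit("hugs the text", title="Panel.fit", border_style="green"))
```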
 
spaces/BigChungux/Pet_Survey2/info.md DELETED
@@ -1,17 +0,0 @@
1
- Pet Survey
2
-
3
- ### 🧐 Problem Statement and Research Summary
4
- Have you ever had, or do you currently have, the problem of choosing the right pet to adopt? Well, I have had that problem too. When I was young I
5
- accidentally adopted a pet fish and after a few days, I knew I had chosen the wrong pet to adopt. I don't want anyone else to face that
6
- problem, so look no further: this AI was built on countless accurate survey responses and "accurately" recommends the best pet for you!
7
-
8
- ### 🎣 Data Collection Plan
9
- The survey data I collected was about users' personalities, lives, the amount of time they had, how many people they lived with, etc. I collected
10
- this data because it all ties in with finding the best pet for each user. Users who have a lot of free time usually choose either a dog or cat
11
- as they require constant/more tending, while users with little to no free time choose pets such as a fish, as they don't require constant tending.
12
- ### 💥 Ethical Considerations (Data Privacy and Bias)
13
- * The data we collect will only ever be used to recommend the best pet for you. WE WILL NOT SELL DATA LIKE FACEBOOK.
14
- * There may be a bias towards dogs, as a majority (over 50%!) of people chose dogs in our survey, and so the AI will most likely be biased towards dogs.
15
- ### 👻 Our Team
16
- It's just me... Pema Sherpa
17
- ![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w)
 
spaces/BilalSardar/Reinhard_Color_Transformation/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Reinhard Color Transformation
3
- emoji: 😻
4
- colorFrom: blue
5
- colorTo: purple
6
- sdk: gradio
7
- sdk_version: 3.47.1
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
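
Editor's note: the README above only carries the Space's Hugging Face metadata. For context, below is a minimal sketch of the Reinhard et al. (2001) statistical color transfer the Space is named after, using the common LAB-space approximation; it is an illustration, not the Space's actual app code.

```python
# Illustrative sketch of Reinhard-style color transfer (not the Space's own code):
# shift each channel of the source image toward the target image's statistics.
import cv2
import numpy as np

def reinhard_transfer(source_bgr: np.ndarray, target_bgr: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))

    # Match per-channel mean and standard deviation in LAB space.
    out = (src - src_mean) / src_std * tgt_std + tgt_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# Example usage (file names are placeholders):
# result = reinhard_transfer(cv2.imread("source.jpg"), cv2.imread("target.jpg"))
# cv2.imwrite("result.jpg", result)
```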
 
spaces/Boadiwaa/Recipes/openai/api_resources/abstract/engine_api_resource.py DELETED
@@ -1,192 +0,0 @@
1
- import time
2
- from pydoc import apropos
3
- from typing import Optional
4
- from urllib.parse import quote_plus
5
-
6
- import openai
7
- from openai import api_requestor, error, util
8
- from openai.api_resources.abstract.api_resource import APIResource
9
- from openai.openai_response import OpenAIResponse
10
- from openai.util import ApiType
11
-
12
- MAX_TIMEOUT = 20
13
-
14
-
15
- class EngineAPIResource(APIResource):
16
- engine_required = True
17
- plain_old_data = False
18
-
19
- def __init__(self, engine: Optional[str] = None, **kwargs):
20
- super().__init__(engine=engine, **kwargs)
21
-
22
- @classmethod
23
- def class_url(
24
- cls,
25
- engine: Optional[str] = None,
26
- api_type: Optional[str] = None,
27
- api_version: Optional[str] = None,
28
- ):
29
- # Namespaces are separated in object names with periods (.) and in URLs
30
- # with forward slashes (/), so replace the former with the latter.
31
- base = cls.OBJECT_NAME.replace(".", "/") # type: ignore
32
- typed_api_type, api_version = cls._get_api_type_and_version(api_type, api_version)
33
-
34
- if typed_api_type == ApiType.AZURE:
35
- if not api_version:
36
- raise error.InvalidRequestError(
37
- "An API version is required for the Azure API type."
38
- )
39
- if engine is None:
40
- raise error.InvalidRequestError(
41
- "You must provide the deployment name in the 'engine' parameter to access the Azure OpenAI service"
42
- )
43
- extn = quote_plus(engine)
44
- return "/%s/%s/%s/%s?api-version=%s" % (
45
- cls.azure_api_prefix,
46
- cls.azure_deployments_prefix,
47
- extn,
48
- base,
49
- api_version
50
- )
51
-
52
- elif typed_api_type == ApiType.OPEN_AI:
53
- if engine is None:
54
- return "/%s" % (base)
55
-
56
- extn = quote_plus(engine)
57
- return "/engines/%s/%s" % (extn, base)
58
-
59
- else:
60
- raise error.InvalidAPIType("Unsupported API type %s" % api_type)
61
-
62
- @classmethod
63
- def create(
64
- cls,
65
- api_key=None,
66
- api_base=None,
67
- api_type=None,
68
- request_id=None,
69
- api_version=None,
70
- organization=None,
71
- **params,
72
- ):
73
- engine = params.pop("engine", None)
74
- timeout = params.pop("timeout", None)
75
- stream = params.get("stream", False)
76
- headers = params.pop("headers", None)
77
- if engine is None and cls.engine_required:
78
- raise error.InvalidRequestError(
79
- "Must provide an 'engine' parameter to create a %s" % cls, "engine"
80
- )
81
-
82
- if timeout is None:
83
- # No special timeout handling
84
- pass
85
- elif timeout > 0:
86
- # API only supports timeouts up to MAX_TIMEOUT
87
- params["timeout"] = min(timeout, MAX_TIMEOUT)
88
- timeout = (timeout - params["timeout"]) or None
89
- elif timeout == 0:
90
- params["timeout"] = MAX_TIMEOUT
91
-
92
- requestor = api_requestor.APIRequestor(
93
- api_key,
94
- api_base=api_base,
95
- api_type=api_type,
96
- api_version=api_version,
97
- organization=organization,
98
- )
99
- url = cls.class_url(engine, api_type, api_version)
100
- response, _, api_key = requestor.request(
101
- "post",
102
- url,
103
- params=params,
104
- headers=headers,
105
- stream=stream,
106
- request_id=request_id,
107
- )
108
-
109
- if stream:
110
- assert not isinstance(response, OpenAIResponse) # must be an iterator
111
- return (
112
- util.convert_to_openai_object(
113
- line,
114
- api_key,
115
- api_version,
116
- organization,
117
- engine=engine,
118
- plain_old_data=cls.plain_old_data,
119
- )
120
- for line in response
121
- )
122
- else:
123
- obj = util.convert_to_openai_object(
124
- response,
125
- api_key,
126
- api_version,
127
- organization,
128
- engine=engine,
129
- plain_old_data=cls.plain_old_data,
130
- )
131
-
132
- if timeout is not None:
133
- obj.wait(timeout=timeout or None)
134
-
135
- return obj
136
-
137
- def instance_url(self):
138
- id = self.get("id")
139
-
140
- if not isinstance(id, str):
141
- raise error.InvalidRequestError(
142
- f"Could not determine which URL to request: {type(self).__name__} instance has invalid ID: {id}, {type(id)}. ID should be of type str.",
143
- "id",
144
- )
145
-
146
- extn = quote_plus(id)
147
- params_connector = '?'
148
-
149
- if self.typed_api_type == ApiType.AZURE:
150
- api_version = self.api_version or openai.api_version
151
- if not api_version:
152
- raise error.InvalidRequestError(
153
- "An API version is required for the Azure API type."
154
- )
155
- base = self.OBJECT_NAME.replace(".", "/")
156
- url = "/%s/%s/%s/%s/%s?api-version=%s" % (
157
- self.azure_api_prefix,
158
- self.azure_deployments_prefix,
159
- self.engine,
160
- base,
161
- extn,
162
- api_version
163
- )
164
- params_connector = '&'
165
-
166
-
167
- elif self.typed_api_type == ApiType.OPEN_AI:
168
- base = self.class_url(self.engine, self.api_type, self.api_version)
169
- url = "%s/%s" % (base, extn)
170
-
171
- else:
172
- raise error.InvalidAPIType("Unsupported API type %s" % self.api_type)
173
-
174
- timeout = self.get("timeout")
175
- if timeout is not None:
176
- timeout = quote_plus(str(timeout))
177
- url += params_connector + "timeout={}".format(timeout)
178
- return url
179
-
180
- def wait(self, timeout=None):
181
- start = time.time()
182
- while self.status != "complete":
183
- self.timeout = (
184
- min(timeout + start - time.time(), MAX_TIMEOUT)
185
- if timeout is not None
186
- else MAX_TIMEOUT
187
- )
188
- if self.timeout < 0:
189
- del self.timeout
190
- break
191
- self.refresh()
192
- return self
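
Editor's note: a hedged sketch of how this base class is typically exercised in the legacy (pre-1.0) `openai` SDK that ships this file. `Completion` subclasses `EngineAPIResource`, so the `engine`, `stream`, and `timeout` handling in `create()` above applies to the call below; the API key and engine name are placeholders, not values from this repository.

```python
# Illustrative only; targets the legacy (pre-1.0) openai SDK vendored above.
import openai

openai.api_key = "sk-..."  # placeholder

# With stream=True, EngineAPIResource.create() returns an iterator of chunks
# (see the `if stream:` branch above) instead of a single OpenAIObject.
for chunk in openai.Completion.create(
    engine="text-davinci-003",  # engine/deployment name is an assumption
    prompt="Say hello",
    max_tokens=16,
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```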
 
spaces/CVPR/LIVE/LIVE/README.md DELETED
@@ -1,44 +0,0 @@
1
- # LIVE-pytorch
2
- Towards Layer-wise Image Vectorization
3
-
4
- ### Updated for rebuttal (Jan/28/2022):
5
- #### User study
6
- We created a [user study](https://wj.qq.com/s2/9665341/19ed) as suggested. A more complex user study will be added in the revised version.
7
-
8
- The results are collected here: [user study details](user_study_state.csv)
9
-
10
- #### Code installation
11
-
12
- We added a detailed [conda env file](env.yml) and collected detailed [system information](system_info.txt) to help with the installation.
13
-
14
- A more detailed docker and Google Colab demo will be provided.
15
-
16
-
17
- <div align="center">
18
- <img src="example.png" width="650px" height="300px">
19
- </div>
20
- LIVE is able to explicitly present a layer-wise representation for simple images.
21
-
22
- ## Installation
23
- ```bash
24
- pip3 install torch torchvision
25
- pip install svgwrite
26
- pip install svgpathtools
27
- pip install cssutils
28
- pip install numba
29
- pip install torch-tools
30
- pip install visdom
31
- pip install scikit-fmm
32
- pip install opencv-python==4.5.4.60
33
- pip install easydict
34
- pip install scikit-fmm
35
-
36
- ```
37
- Next, please refer to DiffVG to install [pydiffvg](https://github.com/BachiLi/diffvg).
38
-
39
-
40
- ## Run
41
- ```bash
42
- python main.py --config config/all.yaml --experiment experiment_8x1 --signature demo1 --target data/demo1.png
43
- ```
44
- Please modify the config files to change configurations.
 
spaces/CVPR/LIVE/pybind11/tests/test_smart_ptr.cpp DELETED
@@ -1,369 +0,0 @@
1
- /*
2
- tests/test_smart_ptr.cpp -- binding classes with custom reference counting,
3
- implicit conversions between types
4
-
5
- Copyright (c) 2016 Wenzel Jakob <[email protected]>
6
-
7
- All rights reserved. Use of this source code is governed by a
8
- BSD-style license that can be found in the LICENSE file.
9
- */
10
-
11
- #if defined(_MSC_VER) && _MSC_VER < 1910
12
- # pragma warning(disable: 4702) // unreachable code in system header
13
- #endif
14
-
15
- #include "pybind11_tests.h"
16
- #include "object.h"
17
-
18
- // Make pybind aware of the ref-counted wrapper type (s):
19
-
20
- // ref<T> is a wrapper for 'Object' which uses intrusive reference counting
21
- // It is always possible to construct a ref<T> from an Object* pointer without
22
- // possible inconsistencies, hence the 'true' argument at the end.
23
- PYBIND11_DECLARE_HOLDER_TYPE(T, ref<T>, true);
24
- // Make pybind11 aware of the non-standard getter member function
25
- namespace pybind11 { namespace detail {
26
- template <typename T>
27
- struct holder_helper<ref<T>> {
28
- static const T *get(const ref<T> &p) { return p.get_ptr(); }
29
- };
30
- }}
31
-
32
- // The following is not required anymore for std::shared_ptr, but it should compile without error:
33
- PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>);
34
-
35
- // This is just a wrapper around unique_ptr, but with extra fields to deliberately bloat up the
36
- // holder size to trigger the non-simple-layout internal instance layout for single inheritance with
37
- // large holder type:
38
- template <typename T> class huge_unique_ptr {
39
- std::unique_ptr<T> ptr;
40
- uint64_t padding[10];
41
- public:
42
- huge_unique_ptr(T *p) : ptr(p) {};
43
- T *get() { return ptr.get(); }
44
- };
45
- PYBIND11_DECLARE_HOLDER_TYPE(T, huge_unique_ptr<T>);
46
-
47
- // Simple custom holder that works like unique_ptr
48
- template <typename T>
49
- class custom_unique_ptr {
50
- std::unique_ptr<T> impl;
51
- public:
52
- custom_unique_ptr(T* p) : impl(p) { }
53
- T* get() const { return impl.get(); }
54
- T* release_ptr() { return impl.release(); }
55
- };
56
- PYBIND11_DECLARE_HOLDER_TYPE(T, custom_unique_ptr<T>);
57
-
58
- // Simple custom holder that works like shared_ptr and has operator& overload
59
- // To obtain address of an instance of this holder pybind should use std::addressof
60
- // Attempt to get address via operator& may leads to segmentation fault
61
- template <typename T>
62
- class shared_ptr_with_addressof_operator {
63
- std::shared_ptr<T> impl;
64
- public:
65
- shared_ptr_with_addressof_operator( ) = default;
66
- shared_ptr_with_addressof_operator(T* p) : impl(p) { }
67
- T* get() const { return impl.get(); }
68
- T** operator&() { throw std::logic_error("Call of overloaded operator& is not expected"); }
69
- };
70
- PYBIND11_DECLARE_HOLDER_TYPE(T, shared_ptr_with_addressof_operator<T>);
71
-
72
- // Simple custom holder that works like unique_ptr and has operator& overload
73
- // To obtain address of an instance of this holder pybind should use std::addressof
74
- // Attempt to get address via operator& may leads to segmentation fault
75
- template <typename T>
76
- class unique_ptr_with_addressof_operator {
77
- std::unique_ptr<T> impl;
78
- public:
79
- unique_ptr_with_addressof_operator() = default;
80
- unique_ptr_with_addressof_operator(T* p) : impl(p) { }
81
- T* get() const { return impl.get(); }
82
- T* release_ptr() { return impl.release(); }
83
- T** operator&() { throw std::logic_error("Call of overloaded operator& is not expected"); }
84
- };
85
- PYBIND11_DECLARE_HOLDER_TYPE(T, unique_ptr_with_addressof_operator<T>);
86
-
87
-
88
- TEST_SUBMODULE(smart_ptr, m) {
89
-
90
- // test_smart_ptr
91
-
92
- // Object implementation in `object.h`
93
- py::class_<Object, ref<Object>> obj(m, "Object");
94
- obj.def("getRefCount", &Object::getRefCount);
95
-
96
- // Custom object with builtin reference counting (see 'object.h' for the implementation)
97
- class MyObject1 : public Object {
98
- public:
99
- MyObject1(int value) : value(value) { print_created(this, toString()); }
100
- std::string toString() const { return "MyObject1[" + std::to_string(value) + "]"; }
101
- protected:
102
- virtual ~MyObject1() { print_destroyed(this); }
103
- private:
104
- int value;
105
- };
106
- py::class_<MyObject1, ref<MyObject1>>(m, "MyObject1", obj)
107
- .def(py::init<int>());
108
- py::implicitly_convertible<py::int_, MyObject1>();
109
-
110
- m.def("make_object_1", []() -> Object * { return new MyObject1(1); });
111
- m.def("make_object_2", []() -> ref<Object> { return new MyObject1(2); });
112
- m.def("make_myobject1_1", []() -> MyObject1 * { return new MyObject1(4); });
113
- m.def("make_myobject1_2", []() -> ref<MyObject1> { return new MyObject1(5); });
114
- m.def("print_object_1", [](const Object *obj) { py::print(obj->toString()); });
115
- m.def("print_object_2", [](ref<Object> obj) { py::print(obj->toString()); });
116
- m.def("print_object_3", [](const ref<Object> &obj) { py::print(obj->toString()); });
117
- m.def("print_object_4", [](const ref<Object> *obj) { py::print((*obj)->toString()); });
118
- m.def("print_myobject1_1", [](const MyObject1 *obj) { py::print(obj->toString()); });
119
- m.def("print_myobject1_2", [](ref<MyObject1> obj) { py::print(obj->toString()); });
120
- m.def("print_myobject1_3", [](const ref<MyObject1> &obj) { py::print(obj->toString()); });
121
- m.def("print_myobject1_4", [](const ref<MyObject1> *obj) { py::print((*obj)->toString()); });
122
-
123
- // Expose constructor stats for the ref type
124
- m.def("cstats_ref", &ConstructorStats::get<ref_tag>);
125
-
126
-
127
- // Object managed by a std::shared_ptr<>
128
- class MyObject2 {
129
- public:
130
- MyObject2(const MyObject2 &) = default;
131
-         MyObject2(int value) : value(value) { print_created(this, toString()); }
-         std::string toString() const { return "MyObject2[" + std::to_string(value) + "]"; }
-         virtual ~MyObject2() { print_destroyed(this); }
-     private:
-         int value;
-     };
-     py::class_<MyObject2, std::shared_ptr<MyObject2>>(m, "MyObject2")
-         .def(py::init<int>());
-     m.def("make_myobject2_1", []() { return new MyObject2(6); });
-     m.def("make_myobject2_2", []() { return std::make_shared<MyObject2>(7); });
-     m.def("print_myobject2_1", [](const MyObject2 *obj) { py::print(obj->toString()); });
-     m.def("print_myobject2_2", [](std::shared_ptr<MyObject2> obj) { py::print(obj->toString()); });
-     m.def("print_myobject2_3", [](const std::shared_ptr<MyObject2> &obj) { py::print(obj->toString()); });
-     m.def("print_myobject2_4", [](const std::shared_ptr<MyObject2> *obj) { py::print((*obj)->toString()); });
-
-     // Object managed by a std::shared_ptr<>, additionally derives from std::enable_shared_from_this<>
-     class MyObject3 : public std::enable_shared_from_this<MyObject3> {
-     public:
-         MyObject3(const MyObject3 &) = default;
-         MyObject3(int value) : value(value) { print_created(this, toString()); }
-         std::string toString() const { return "MyObject3[" + std::to_string(value) + "]"; }
-         virtual ~MyObject3() { print_destroyed(this); }
-     private:
-         int value;
-     };
-     py::class_<MyObject3, std::shared_ptr<MyObject3>>(m, "MyObject3")
-         .def(py::init<int>());
-     m.def("make_myobject3_1", []() { return new MyObject3(8); });
-     m.def("make_myobject3_2", []() { return std::make_shared<MyObject3>(9); });
-     m.def("print_myobject3_1", [](const MyObject3 *obj) { py::print(obj->toString()); });
-     m.def("print_myobject3_2", [](std::shared_ptr<MyObject3> obj) { py::print(obj->toString()); });
-     m.def("print_myobject3_3", [](const std::shared_ptr<MyObject3> &obj) { py::print(obj->toString()); });
-     m.def("print_myobject3_4", [](const std::shared_ptr<MyObject3> *obj) { py::print((*obj)->toString()); });
-
-     // test_smart_ptr_refcounting
-     m.def("test_object1_refcounting", []() {
-         ref<MyObject1> o = new MyObject1(0);
-         bool good = o->getRefCount() == 1;
-         py::object o2 = py::cast(o, py::return_value_policy::reference);
-         // always request (partial) ownership for objects with intrusive
-         // reference counting even when using the 'reference' RVP
-         good &= o->getRefCount() == 2;
-         return good;
-     });
-
-     // test_unique_nodelete
-     // Object with a private destructor
-     class MyObject4 {
-     public:
-         MyObject4(int value) : value{value} { print_created(this); }
-         int value;
-     private:
-         ~MyObject4() { print_destroyed(this); }
-     };
-     py::class_<MyObject4, std::unique_ptr<MyObject4, py::nodelete>>(m, "MyObject4")
-         .def(py::init<int>())
-         .def_readwrite("value", &MyObject4::value);
-
-     // test_unique_deleter
-     // Object with std::unique_ptr<T, D> where D is not matching the base class
-     // Object with a protected destructor
-     class MyObject4a {
-     public:
-         MyObject4a(int i) {
-             value = i;
-             print_created(this);
-         };
-         int value;
-     protected:
-         virtual ~MyObject4a() { print_destroyed(this); }
-     };
-     py::class_<MyObject4a, std::unique_ptr<MyObject4a, py::nodelete>>(m, "MyObject4a")
-         .def(py::init<int>())
-         .def_readwrite("value", &MyObject4a::value);
-
-     // Object derived but with public destructor and no Deleter in default holder
-     class MyObject4b : public MyObject4a {
-     public:
-         MyObject4b(int i) : MyObject4a(i) { print_created(this); }
-         ~MyObject4b() { print_destroyed(this); }
-     };
-     py::class_<MyObject4b, MyObject4a>(m, "MyObject4b")
-         .def(py::init<int>());
-
-     // test_large_holder
-     class MyObject5 { // managed by huge_unique_ptr
-     public:
-         MyObject5(int value) : value{value} { print_created(this); }
-         ~MyObject5() { print_destroyed(this); }
-         int value;
-     };
-     py::class_<MyObject5, huge_unique_ptr<MyObject5>>(m, "MyObject5")
-         .def(py::init<int>())
-         .def_readwrite("value", &MyObject5::value);
-
-     // test_shared_ptr_and_references
-     struct SharedPtrRef {
-         struct A {
-             A() { print_created(this); }
-             A(const A &) { print_copy_created(this); }
-             A(A &&) { print_move_created(this); }
-             ~A() { print_destroyed(this); }
-         };
-
-         A value = {};
-         std::shared_ptr<A> shared = std::make_shared<A>();
-     };
-     using A = SharedPtrRef::A;
-     py::class_<A, std::shared_ptr<A>>(m, "A");
-     py::class_<SharedPtrRef>(m, "SharedPtrRef")
-         .def(py::init<>())
-         .def_readonly("ref", &SharedPtrRef::value)
-         .def_property_readonly("copy", [](const SharedPtrRef &s) { return s.value; },
-                                py::return_value_policy::copy)
-         .def_readonly("holder_ref", &SharedPtrRef::shared)
-         .def_property_readonly("holder_copy", [](const SharedPtrRef &s) { return s.shared; },
-                                py::return_value_policy::copy)
-         .def("set_ref", [](SharedPtrRef &, const A &) { return true; })
-         .def("set_holder", [](SharedPtrRef &, std::shared_ptr<A>) { return true; });
-
-     // test_shared_ptr_from_this_and_references
-     struct SharedFromThisRef {
-         struct B : std::enable_shared_from_this<B> {
-             B() { print_created(this); }
-             B(const B &) : std::enable_shared_from_this<B>() { print_copy_created(this); }
-             B(B &&) : std::enable_shared_from_this<B>() { print_move_created(this); }
-             ~B() { print_destroyed(this); }
-         };
-
-         B value = {};
-         std::shared_ptr<B> shared = std::make_shared<B>();
-     };
-     using B = SharedFromThisRef::B;
-     py::class_<B, std::shared_ptr<B>>(m, "B");
-     py::class_<SharedFromThisRef>(m, "SharedFromThisRef")
-         .def(py::init<>())
-         .def_readonly("bad_wp", &SharedFromThisRef::value)
-         .def_property_readonly("ref", [](const SharedFromThisRef &s) -> const B & { return *s.shared; })
-         .def_property_readonly("copy", [](const SharedFromThisRef &s) { return s.value; },
-                                py::return_value_policy::copy)
-         .def_readonly("holder_ref", &SharedFromThisRef::shared)
-         .def_property_readonly("holder_copy", [](const SharedFromThisRef &s) { return s.shared; },
-                                py::return_value_policy::copy)
-         .def("set_ref", [](SharedFromThisRef &, const B &) { return true; })
-         .def("set_holder", [](SharedFromThisRef &, std::shared_ptr<B>) { return true; });
-
-     // Issue #865: shared_from_this doesn't work with virtual inheritance
-     struct SharedFromThisVBase : std::enable_shared_from_this<SharedFromThisVBase> {
-         SharedFromThisVBase() = default;
-         SharedFromThisVBase(const SharedFromThisVBase &) = default;
-         virtual ~SharedFromThisVBase() = default;
-     };
-     struct SharedFromThisVirt : virtual SharedFromThisVBase {};
-     static std::shared_ptr<SharedFromThisVirt> sft(new SharedFromThisVirt());
-     py::class_<SharedFromThisVirt, std::shared_ptr<SharedFromThisVirt>>(m, "SharedFromThisVirt")
-         .def_static("get", []() { return sft.get(); });
-
-     // test_move_only_holder
-     struct C {
-         C() { print_created(this); }
-         ~C() { print_destroyed(this); }
-     };
-     py::class_<C, custom_unique_ptr<C>>(m, "TypeWithMoveOnlyHolder")
-         .def_static("make", []() { return custom_unique_ptr<C>(new C); })
-         .def_static("make_as_object", []() { return py::cast(custom_unique_ptr<C>(new C)); });
-
-     // test_holder_with_addressof_operator
-     struct TypeForHolderWithAddressOf {
-         TypeForHolderWithAddressOf() { print_created(this); }
-         TypeForHolderWithAddressOf(const TypeForHolderWithAddressOf &) { print_copy_created(this); }
-         TypeForHolderWithAddressOf(TypeForHolderWithAddressOf &&) { print_move_created(this); }
-         ~TypeForHolderWithAddressOf() { print_destroyed(this); }
-         std::string toString() const {
-             return "TypeForHolderWithAddressOf[" + std::to_string(value) + "]";
-         }
-         int value = 42;
-     };
-     using HolderWithAddressOf = shared_ptr_with_addressof_operator<TypeForHolderWithAddressOf>;
-     py::class_<TypeForHolderWithAddressOf, HolderWithAddressOf>(m, "TypeForHolderWithAddressOf")
-         .def_static("make", []() { return HolderWithAddressOf(new TypeForHolderWithAddressOf); })
-         .def("get", [](const HolderWithAddressOf &self) { return self.get(); })
-         .def("print_object_1", [](const TypeForHolderWithAddressOf *obj) { py::print(obj->toString()); })
-         .def("print_object_2", [](HolderWithAddressOf obj) { py::print(obj.get()->toString()); })
-         .def("print_object_3", [](const HolderWithAddressOf &obj) { py::print(obj.get()->toString()); })
-         .def("print_object_4", [](const HolderWithAddressOf *obj) { py::print((*obj).get()->toString()); });
-
-     // test_move_only_holder_with_addressof_operator
-     struct TypeForMoveOnlyHolderWithAddressOf {
-         TypeForMoveOnlyHolderWithAddressOf(int value) : value{value} { print_created(this); }
-         ~TypeForMoveOnlyHolderWithAddressOf() { print_destroyed(this); }
-         std::string toString() const {
-             return "MoveOnlyHolderWithAddressOf[" + std::to_string(value) + "]";
-         }
-         int value;
-     };
-     using MoveOnlyHolderWithAddressOf = unique_ptr_with_addressof_operator<TypeForMoveOnlyHolderWithAddressOf>;
-     py::class_<TypeForMoveOnlyHolderWithAddressOf, MoveOnlyHolderWithAddressOf>(m, "TypeForMoveOnlyHolderWithAddressOf")
-         .def_static("make", []() { return MoveOnlyHolderWithAddressOf(new TypeForMoveOnlyHolderWithAddressOf(0)); })
-         .def_readwrite("value", &TypeForMoveOnlyHolderWithAddressOf::value)
-         .def("print_object", [](const TypeForMoveOnlyHolderWithAddressOf *obj) { py::print(obj->toString()); });
-
-     // test_smart_ptr_from_default
-     struct HeldByDefaultHolder { };
-     py::class_<HeldByDefaultHolder>(m, "HeldByDefaultHolder")
-         .def(py::init<>())
-         .def_static("load_shared_ptr", [](std::shared_ptr<HeldByDefaultHolder>) {});
-
-     // test_shared_ptr_gc
-     // #187: issue involving std::shared_ptr<> return value policy & garbage collection
-     struct ElementBase {
-         virtual ~ElementBase() { } /* Force creation of virtual table */
-         ElementBase() = default;
-         ElementBase(const ElementBase&) = delete;
-     };
-     py::class_<ElementBase, std::shared_ptr<ElementBase>>(m, "ElementBase");
-
-     struct ElementA : ElementBase {
-         ElementA(int v) : v(v) { }
-         int value() { return v; }
-         int v;
-     };
-     py::class_<ElementA, ElementBase, std::shared_ptr<ElementA>>(m, "ElementA")
-         .def(py::init<int>())
-         .def("value", &ElementA::value);
-
-     struct ElementList {
-         void add(std::shared_ptr<ElementBase> e) { l.push_back(e); }
-         std::vector<std::shared_ptr<ElementBase>> l;
-     };
-     py::class_<ElementList, std::shared_ptr<ElementList>>(m, "ElementList")
-         .def(py::init<>())
-         .def("add", &ElementList::add)
-         .def("get", [](ElementList &el) {
-             py::list list;
-             for (auto &e : el.l)
-                 list.append(py::cast(e));
-             return list;
-         });
- }
 
spaces/CVPR/LIVE/thrust/internal/rename_cub_namespace.sh DELETED
@@ -1,7 +0,0 @@
- #! /bin/bash
-
- # Run this in //sw/gpgpu/thrust/thrust/system/cuda/detail/cub to add a THRUST_
- # prefix to CUB's namespace macro.
-
- sed -i -e 's/CUB_NS_P/THRUST_CUB_NS_P/g' `find . -type f`
-
 
spaces/CVPR/LIVE/thrust/thrust/cmake/README.md DELETED
@@ -1,215 +0,0 @@
- # Using Thrust with CMake
-
- Thrust provides configuration files that simplify using Thrust
- from other CMake projects. Requirements:
-
- - Thrust >= 1.9.10
- - CMake >= 3.15
-
- See the [Fixing Legacy FindThrust.cmake](#fixing-legacy-findthrustcmake)
- section for solutions that work on older Thrust versions.
-
- ## User Guide
-
- #### Default Configuration (CUDA)
-
- Thrust is configured using a `thrust_create_target` CMake function that
- assembles a complete interface to the Thrust library:
-
- ```cmake
- find_package(Thrust REQUIRED CONFIG)
- thrust_create_target(Thrust)
- target_link_libraries(MyProgram Thrust)
- ```
-
- The first argument is the name of the interface target to create, and any
- additional options will be used to configure the target. By default,
- `thrust_create_target` will configure its result to use CUDA acceleration.
-
- If desired, `thrust_create_target` may be called multiple times to build
- several unique Thrust interface targets with different configurations, as
- detailed below.
-
- **Note:** If CMake is unable to locate Thrust, specify the path to Thrust's CMake
- configuration directory (where this README file is located) as `Thrust_DIR`,
- e.g.:
-
- ```
- $ cmake . -DThrust_DIR=/usr/local/cuda/include/thrust/cmake/
- ```
-
- #### TBB / OpenMP
-
- To explicitly specify host/device systems, `HOST` and `DEVICE` arguments can be
- passed to `thrust_create_target`. If an explicit system is not specified, the
- target will default to using CPP for host and/or CUDA for device.
-
- ```cmake
- thrust_create_target(ThrustTBB DEVICE TBB)
- thrust_create_target(ThrustOMP HOST CPP DEVICE OMP)
- ```
-
- will create targets `ThrustTBB` and `ThrustOMP`. Both will use the serial `CPP`
- host system, but will find and use TBB or OpenMP for the device system.
-
- #### Configure Target from Cache Options
-
- To allow a Thrust target to be configurable easily via `cmake-gui` or
- `ccmake`, pass the `FROM_OPTIONS` flag to `thrust_create_target`. This will add
- `THRUST_HOST_SYSTEM` and `THRUST_DEVICE_SYSTEM` options to the CMake cache that
- allow selection from the systems supported by this version of Thrust.
-
- ```cmake
- thrust_create_target(Thrust FROM_OPTIONS
-   [HOST_OPTION <option name>]
-   [DEVICE_OPTION <option name>]
-   [HOST_OPTION_DOC <doc string>]
-   [DEVICE_OPTION_DOC <doc string>]
-   [HOST <default host system name>]
-   [DEVICE <default device system name>]
-   [ADVANCED]
- )
- ```
-
- The optional arguments have sensible defaults, but may be configured per
- `thrust_create_target` call:
-
- | Argument            | Default                 | Description                     |
- |---------------------|-------------------------|---------------------------------|
- | `HOST_OPTION`       | `THRUST_HOST_SYSTEM`    | Name of cache option for host   |
- | `DEVICE_OPTION`     | `THRUST_DEVICE_SYSTEM`  | Name of cache option for device |
- | `HOST_OPTION_DOC`   | Thrust's host system.   | Docstring for host option       |
- | `DEVICE_OPTION_DOC` | Thrust's device system. | Docstring for device option     |
- | `HOST`              | `CPP`                   | Default host system             |
- | `DEVICE`            | `CUDA`                  | Default device system           |
- | `ADVANCED`          | *N/A*                   | Mark cache options advanced     |
-
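For example, a minimal sketch that renames only the device option; the option name `MY_PROJECT_DEVICE` and the target name `ProjectThrust` are arbitrary placeholders, not names defined by Thrust:

```cmake
# Keep the default THRUST_HOST_SYSTEM cache option, but expose the device
# system under a project-specific cache option name (hypothetical).
thrust_create_target(ProjectThrust FROM_OPTIONS
  DEVICE_OPTION     MY_PROJECT_DEVICE
  DEVICE_OPTION_DOC "Device backend used to build this project."
  DEVICE            CUDA
  ADVANCED
)
target_link_libraries(MyProgram ProjectThrust)
```

The resulting cache entries can then be overridden at configure time, e.g.
`cmake . -DMY_PROJECT_DEVICE=TBB -DTHRUST_HOST_SYSTEM=OMP`.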
- #### Specifying Thrust Version Requirements
-
- A specific version of Thrust may be required in the `find_package` call:
-
- ```cmake
- find_package(Thrust 1.9.10)
- ```
-
- will only consider Thrust installations with version `1.9.10.X`. An exact match
- down to the patch version can be forced by using `EXACT` matching:
-
- ```cmake
- find_package(Thrust 1.9.10.1 EXACT)
- ```
-
- would only match the 1.9.10.1 release.
-
- #### Using a Specific TBB or OpenMP Environment
-
- When `thrust_create_target` is called, it will lazily load the requested
- systems on-demand through internal `find_package` calls. If a project already
- uses TBB or OpenMP, it may specify a CMake target for Thrust to share instead:
-
- ```cmake
- thrust_set_TBB_target(MyTBBTarget)
- thrust_set_OMP_target(MyOMPTarget)
- ```
-
- These functions must be called **before** `thrust_create_target`, and will
- have no effect if the dependency is loaded as a
- `find_package(Thrust COMPONENT [...])` component.
-
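A short ordering sketch, assuming the surrounding project locates TBB itself and that its `find_package(TBB)` call provides an imported `TBB::tbb` target:

```cmake
find_package(TBB REQUIRED)           # assumed to define the TBB::tbb target
thrust_set_TBB_target(TBB::tbb)      # must run before thrust_create_target
thrust_create_target(ThrustTBB DEVICE TBB)
target_link_libraries(MyProgram ThrustTBB)
```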
- #### Testing for Systems
-
- The following functions check if a system has been found, either by lazy loading
- through `thrust_create_target` or as a `find_package` `COMPONENT` /
- `OPTIONAL_COMPONENT`:
-
- ```cmake
- # Set var_name to TRUE or FALSE if an individual system has been found:
- thrust_is_cuda_system_found(<var_name>)
- thrust_is_cpp_system_found(<var_name>)
- thrust_is_tbb_system_found(<var_name>)
- thrust_is_omp_system_found(<var_name>)
-
- # Generic version that takes a component name from CUDA, CPP, TBB, OMP:
- thrust_is_system_found(<component_name> <var_name>)
-
- # Defines `THRUST_*_FOUND` variables in the current scope that reflect the
- # state of all known systems. Can be used to refresh these flags after
- # lazy system loading.
- thrust_update_system_found_flags()
- ```
-
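As an illustration, a sketch of how these checks might gate configuration when TBB is requested only as an optional component; the target and variable names here are placeholders:

```cmake
find_package(Thrust REQUIRED CONFIG COMPONENTS CUDA OPTIONAL_COMPONENTS TBB)

thrust_is_tbb_system_found(tbb_found)
if(tbb_found)
  thrust_create_target(ProjectThrust DEVICE TBB)
else()
  thrust_create_target(ProjectThrust)  # fall back to the CUDA default
endif()
target_link_libraries(MyProgram ProjectThrust)
```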
- #### Debugging
-
- Thrust will produce a detailed log describing its targets, cache options, and
- interfaces when `--log-level=VERBOSE` is passed to CMake 3.15.7 or newer:
-
- ```
- $ cmake . --log-level=VERBOSE
- ```
-
- This can be handy for inspecting interface and dependency information.
-
- ## Fixing Legacy FindThrust.cmake
-
- A community-created `FindThrust.cmake` module exists and is necessary to find
- Thrust installations prior to Thrust 1.9.10. Its usage is discouraged whenever
- possible and the config files in this directory should be strongly preferred.
- However, projects that need to support old versions of Thrust may still need to
- use the legacy `FindThrust.cmake` with pre-1.9.10 installations.
-
- One popular flavor of this find module has a version parsing bug. Projects that
- rely on `FindThrust.cmake` should check for this and patch their copies as
- follows.
-
- Replace:
-
- ```cmake
- string( REGEX MATCH "^[0-9]" major ${version} )
- string( REGEX REPLACE "^${major}00" "" version "${version}" )
- string( REGEX MATCH "^[0-9]" minor ${version} )
- string( REGEX REPLACE "^${minor}0" "" version "${version}" )
- ```
-
- with:
-
- ```cmake
- math(EXPR major "${version} / 100000")
- math(EXPR minor "(${version} / 100) % 1000")
- math(EXPR version "${version} % 100")
- ```
-
- # Thrust Developer Documentation
-
- This portion of the file contains descriptions of Thrust's internal CMake target
- structure for Thrust developers. It should not be necessary for users
- who just want to use Thrust from their projects.
-
- ## Internal Targets
-
- By default, `find_package(Thrust)` will only create a single `Thrust::Thrust`
- target that describes where the actual Thrust headers are located. It does not
- locate or create configurations for any dependencies; these are lazily loaded
- on-demand by calls to `thrust_create_target`, or when explicitly requested via
- `find_package`'s component mechanism.
-
- As mentioned, the basic Thrust interface is described by the `Thrust::Thrust`
- target.
-
- Each backend system (`CPP`, `CUDA`, `TBB`, `OMP`) is described by multiple
- targets:
-
- - `Thrust::${system}`
-   - Specifies an interface configured to build against all
-     dependencies for this backend (including `Thrust::Thrust`).
-   - For example, the `Thrust::CUDA` target is an interface
-     target that combines the interfaces of both Thrust and CUB.
- - `Thrust::${system}::Host`
-   - Configures an interface for using a specific host system.
-   - Multiple `::Host` targets cannot be combined in the same library/executable.
-     Attempting to do so will produce a CMake configuration error.
-   - Only defined for systems that support being used as the host.
- - `Thrust::${system}::Device`
-   - Configures an interface for using a specific device system.
-   - Multiple `::Device` targets cannot be combined in the same library/executable.
-     Attempting to do so will produce a CMake configuration error.
-   - Only defined for systems that support being used as the device.
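To make the target layering concrete, a speculative sketch of how a Thrust developer might compose one host and one device interface directly; the exact set of targets that must be linked together is an assumption here, and ordinary projects should rely on `thrust_create_target` instead:

```cmake
# Hand-assembled equivalent of the default CPP host / CUDA device configuration.
add_executable(dev_test test.cu)
target_link_libraries(dev_test PRIVATE
  Thrust::Thrust        # base Thrust headers
  Thrust::CPP::Host     # exactly one ::Host target
  Thrust::CUDA::Device  # exactly one ::Device target
)
```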