diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Far Cry 4 PC Crack Tips and Tricks for a Smooth Gameplay.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Far Cry 4 PC Crack Tips and Tricks for a Smooth Gameplay.md deleted file mode 100644 index c85aa707f69e2b391f882ccd23bf7b89c9f09999..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Far Cry 4 PC Crack Tips and Tricks for a Smooth Gameplay.md +++ /dev/null @@ -1,115 +0,0 @@ - -

Download Far Cry 4 PC Crack


Far Cry 4 is one of the most popular open-world first-person shooter games in recent years. It was released in 2014 by Ubisoft and received critical acclaim for its stunning graphics, diverse environments, animal companions system, and engaging story. The game is set in the fictional Himalayan region of Kyrat, where you play as Ajay Ghale, a young man who returns to his homeland to fulfill his mother's final wish of spreading her ashes in her place of birth. However, he soon becomes involved in a civil war between the forces of Pagan Min, a despotic self-appointed king, and the Golden Path, a rebel movement fighting for democracy.

-





However, not everyone can afford to buy the game or play it on their PC. That's why some people resort to using a PC crack, which is a modified version of the game that bypasses the copy protection and allows you to play it for free. A PC crack can also unlock some features that are otherwise unavailable in the official version, such as multiplayer mode or map editor. However, using a PC crack also comes with some risks and drawbacks, such as viruses, malware, errors, bugs, and legal issues.


In this article, we will show you how to download Far Cry 4 PC crack safely and easily, how to fix some common issues with it, and how to enjoy the game to its fullest potential. But before we do that, we want to remind you that using a PC crack is illegal and unethical, and we do not condone or encourage it in any way. This article is for educational purposes only and we are not responsible for any consequences that may arise from using a PC crack.


How to Download Far Cry 4 PC Crack


The first step to download Far Cry 4 PC crack is to find a reliable source for it. There are many websites that claim to offer PC cracks for various games, but not all of them are trustworthy or safe. Some of them may contain viruses or malware that can harm your computer or steal your personal information. Some of them may also provide fake or outdated PC cracks that do not work or cause problems with the game.


Therefore, you need to be careful and do some research before downloading any PC crack from any website. Here are some tips on how to find a reliable source for Far Cry 4 PC crack:


Based on these criteria, we have found one website that seems to offer a reliable source for Far Cry 4 PC crack. It is called Fitgirl Repacks Site (index:1) , which is a well-known website that provides compressed repacks of various games with all DLCs included. The website has positive reviews and ratings from many users who have downloaded and used its repacks successfully. The website also provides clear and detailed instructions on how to download, install, and run its repacks.


The website offers two versions of Far Cry 4 repack: one with multiplayer files included (14.8 GB) and one without multiplayer files (10.8 GB). The repack is based on Far.Cry.4.Proper-RELOADED release (index:1) , which is an official release by RELOADED group (index:3) , which is one of the most reputable groups that provide PC cracks for various games. The repack also includes all DLCs added & activated (index:1) , missing Japanese voiceovers added (index:1) , all patches up to v1.10 applied (index:1) , extreme injector v3.6.1 for 2/3-core CPUs added (index:1) , modified unofficial RELOADED crack (index:3) , ULC unlocker added (index:3) , 100% lossless & MD5 perfect (index:1) , selective download feature (index:1) , significantly smaller archive size (index:1) , fast installation time (index:1) , after-install integrity check (index:1) , HDD space after installation up to 40 GB (index:1) , at least 2 GB of free RAM required for installing this repack (index:1) . The repack is compatible with Windows 7/8/10 (64-bit versions only), Intel Core i5-750 @ 2.6 GHz or AMD Phenom II X4 955 @ 3.2 GHz processor or better, 4 GB RAM or more, NVIDIA GeForce GTX 460 or AMD Radeon HD5850 graphics card or better with 1 GB VRAM or more.


To download Far Cry 4 repack from Fitgirl Repacks Site (index:1) , you need to follow these steps:

    | OL |
  1. Go to https://fitgirl-repacks-site.org/far-cry-4-gold-edition-download-torrent-repack/
  2. | LI |
  3. Select your preferred download option from the list of mirrors provided on the website.
  4. | LI |
  5. If you choose torrent option, you need to have a torrent client installed on your computer such as uTorrent or BitTorrent.
  6. | LI |
  7. If you choose direct download option, you need to have a download manager installed on your computer such as IDM or JDownloader2.
  8. | LI |
  9. Download all parts of the repack according to your selected option.
  10. | LI |
  11. Extract all parts of the repack using WinRAR or 7-Zip software.
  12. | LI |
  13. Run setup.exe file from extracted folder as administrator.
  14. | LI |
  15. Select your preferred language from English/Russian/Spanish/French/German/Italian/Polish/Portuguese-Brazil/Japanese/Chinese Simplified/Chinese Traditional/Czech/Danish/Dutch/Finnish/Norwegian/Swedish.
  16. | LI |
  17. Select your preferred components from singleplayer files/multiplayer files/voiceovers files according to your needs.
  18. | LI |
  19. Select your installation directory where you want to install the game.
  20. | LI |
  21. Click install button and wait until installation process is completed.
  22. | LI |
-

How to Check Far Cry 4 PC Crack for Viruses and Malware

-

-Even though Fitgirl Repacks Site (index:1) seems to be a reliable source for Far Cry 4 PC crack, it is still advisable to check it for viruses and malware before installing it on your computer. This is because some malicious files may be hidden or disguised within the repack that can harm your computer or steal your personal information.

-

To check Far Cry 4 repack for viruses and malware, you need to follow these steps:

-


-
    -
  1. Download an antivirus software such as Avast or AVG on your computer if you don't have one already.
  2. -
  3. Update your antivirus software with latest virus definitions and scan settings.
  4. -
  5. Right-click on the downloaded repack file and select Scan with [your antivirus software] from the context menu.
  6. -
  7. Wait for the scan to complete and check the results. If any threats are detected, delete or quarantine them according to your antivirus software's instructions.
  8. -
  9. If no threats are detected, you can proceed to install the repack on your computer.
  10. -
-

Alternatively, you can also use an online virus scanner such as VirusTotal or Jotti to check the repack file for viruses and malware. These are websites that allow you to upload any file and scan it with multiple antivirus engines at once. To use an online virus scanner, you need to follow these steps:

-
    -
  1. Go to https://www.virustotal.com/ or https://virusscan.jotti.org/
  2. -
  3. Click on Choose File or Browse button and select the downloaded repack file from your computer.
  4. -
  5. Click on Scan It or Submit File button and wait for the scan to complete.
  6. -
  7. Check the results and see if any antivirus engine detects any threat in the repack file. If any threat is detected, do not install the repack on your computer.
  8. -
  9. If no threat is detected, you can proceed to install the repack on your computer.
  10. -
-

How to Install Far Cry 4 PC Crack and Run the Game

-

After downloading and checking Far Cry 4 repack for viruses and malware, you can install it on your computer and run the game. To install Far Cry 4 repack on your computer, you need to follow these steps:

-
    -
  1. Disable your antivirus software temporarily to avoid any interference with the installation process.
  2. -
  3. Run setup.exe file from extracted folder as administrator.
  4. -
  5. Select your preferred language from English/Russian/Spanish/French/German/Italian/Polish/Portuguese-Brazil/Japanese/Chinese Simplified/Chinese Traditional/Czech/Danish/Dutch/Finnish/Norwegian/Swedish.
  6. -
  7. Select your preferred components from singleplayer files/multiplayer files/voiceovers files according to your needs.
  8. -
  9. Select your installation directory where you want to install the game.
  10. -
  11. Click install button and wait until installation process is completed.
  12. -
  13. Enable your antivirus software again after the installation is done.
  14. -
-

To run Far Cry 4 game on your computer, you need to follow these steps:

-
    -
  1. Go to the installation directory where you installed the game.
  2. -
  3. Run FarCry4.exe file as administrator.
  4. -
  5. Select your preferred graphics settings and resolution from the game launcher.
  6. -
  7. Click Play button and enjoy the game.
  8. -
-

How to Fix Common Issues with Far Cry 4 PC Crack

-

Even though Far Cry 4 PC crack allows you to play the game for free, it may also cause some issues that can affect your gaming experience. Some of these issues are related to the PC crack itself, while others are related to the game itself. Here are some of the common issues that you may encounter with Far Cry 4 PC crack and how to fix them:

- - - - - - - -
IssueSolution
The game crashes or freezes randomly.This may be caused by incompatible drivers, outdated patches, corrupted files, or insufficient system resources. To fix this issue, you can try these steps:
  • Update your graphics card drivers and DirectX version.
  • Update the PC crack to the latest version using Fitgirl Repacks Site (index:1) or other sources.
  • Verify the integrity of game files using Steam or Uplay client if you have them installed.
  • Lower your graphics settings and resolution in the game launcher.
  • Close any unnecessary programs running in the background while playing the game.
The game does not launch or shows a black screen.This may be caused by missing or blocked files, incompatible settings, or antivirus interference. To fix this issue, you can try these steps:
  • Add FarCry4.exe file and installation directory to your antivirus software's exclusion list or disable it temporarily while playing the game.
  • Run FarCry4.exe file as administrator and in compatibility mode for Windows 7 or 8 if you have Windows 10.
  • Delete GamerProfile.xml file from Documents\My Games\Far Cry 4 folder and launch the game again.
  • Rename video.dat file to video.dat.bak in Data_Win32 folder inside installation directory and launch the game again.
The game shows an error message saying "Please insert correct DVD-ROM."This may be caused by a faulty PC crack or a missing DLL file. To fix this issue, you can try these steps:
  • Download a fixed DLL for map editor from http://sendfile.su/1356321 (index:1) and place it in bin folder inside installation directory, overwriting existing file.
  • Download a modified unofficial RELOADED crack (index:3) or another PC crack from a reliable source and replace FarCry4.exe file in bin folder inside installation directory with it.
The game does not save progress or shows corrupted save files.This may be caused by a wrong save location, a read-only attribute, or a Uplay conflict. To fix this issue, you can try these steps:
  • Create a new folder named "savegames" in Documents\My Games\Far Cry 4 folder and move all save files from Documents\My Games\Far Cry 4\user_id folder into it.
  • Right-click on savegames folder and select Properties. Uncheck Read-only option under Attributes and click OK.
  • Delete Uplay folder from Program Files (x86) folder if you have it installed. Alternatively, disable cloud synchronization option in Uplay settings if you want to keep Uplay installed.
The game does not recognize keyboard or mouse input or shows wrong key bindings.This may be caused by a corrupted configuration file, a conflicting device driver, or a keyboard layout issue. To fix this issue, you can try these steps:
  • Delete GamerProfile.xml file from Documents\My Games\Far Cry 4 folder and launch the game again. Customize your key bindings in-game as needed.
  • Unplug any unnecessary devices such as controllers, joysticks, webcams, etc. from your computer while playing the game.
  • Change your keyboard layout to English (US) in Windows settings if you have a different layout set up.
-

Conclusion

-

In this article, we have shown you how to download Far Cry 4 PC crack safely and easily, how to check it for viruses and malware, how to install it and run the game, and how to fix some common issues with it. We hope this article has been helpful and informative for you. However, we also want to remind you once again that using a PC crack is illegal and unethical, and we do not condone or encourage it in any way. This article is for educational purposes only and we are not responsible for any consequences that may arise from using a PC crack. If you like Far Cry 4 game and want to support its developers, we urge you to buy it legally from official sources such as Steam or Uplay. Thank you for reading this article and have a great day!

-

FAQs

-

Here are some frequently asked questions related to Far Cry 4 PC crack:

-
    -
  1. Q: Can I play multiplayer mode with Far Cry 4 PC crack?
    A: No, you cannot play multiplayer mode with Far Cry 4 PC crack. The multiplayer mode requires an online connection and a valid Uplay account which cannot be bypassed by any PC crack. If you want to play multiplayer mode with other players online, you need to buy the game legally from official sources such as Steam or Uplay.
  2. -
  3. Q: Can I use map editor with Far Cry 4 PC crack?
    A: Yes, A: Yes, you can use map editor with Far Cry 4 PC crack. The map editor allows you to create your own custom maps and missions using the game's assets and tools. However, you need to download a fixed DLL for map editor from http://sendfile.su/1356321 (index:1) and place it in bin folder inside installation directory, overwriting existing file. Otherwise, the map editor will not launch or show an error message.
  4. -
  5. Q: How can I update Far Cry 4 PC crack to the latest version?
    A: To update Far Cry 4 PC crack to the latest version, you need to download the latest version of the PC crack from Fitgirl Repacks Site (index:1) or other sources and replace FarCry4.exe file in bin folder inside installation directory with it. You also need to download and install the latest patches for the game from official sources such as Steam or Uplay if you have them installed.
  6. -
  7. Q: How can I uninstall Far Cry 4 PC crack from my computer?
    A: To uninstall Far Cry 4 PC crack from your computer, you need to delete the installation directory where you installed the game and remove any leftover files and folders from Documents\My Games\Far Cry 4 folder. You also need to scan your computer for viruses and malware using an antivirus software after uninstalling the PC crack.
  8. -
  9. Q: Where can I find more information and help about Far Cry 4 PC crack?
    A: You can find more information and help about Far Cry 4 PC crack on various websites and forums that discuss PC gaming and piracy. Some of these websites and forums are:
  10. -
-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape 3.0 Free Download with Crack The Ultimate Guide for Architects.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape 3.0 Free Download with Crack The Ultimate Guide for Architects.md deleted file mode 100644 index abe10f5bb509aa1673d6d90c9367d94b3c852428..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape 3.0 Free Download with Crack The Ultimate Guide for Architects.md +++ /dev/null @@ -1,27 +0,0 @@ - -

Enscape 3.0 Free Download with Crack: How to Get the Best Real-Time Rendering Software for Architects

-

If you are an architect, designer, or engineer, you might have heard of Enscape 3.0, the latest version of the powerful real-time rendering software that integrates with popular CAD programs like Revit, SketchUp, Rhino, and ArchiCAD. Enscape 3.0 allows you to create stunning photorealistic images and videos of your projects in seconds, without any need for complex settings or exporting. You can also explore your designs in virtual reality with a single click.

-




-

However, Enscape 3.0 is not a cheap software. It costs $699 per year for a single user license, which might be too expensive for some users. That's why some people are looking for a way to download and install Enscape 3.0 free with crack, which is a modified version of the software that bypasses the activation and verification process. In this article, we will show you how to do that in a few simple steps.

-

What is Enscape 3.0 Free Download with Crack?

-

Enscape 3.0 free download with crack is a hacked version of the original software that allows you to use it without paying for a license or entering a product key. However, this also means that you won't be able to access some of the features and updates that the official software offers, such as online support, cloud rendering, asset library, and bug fixes. Therefore, we recommend that you use Enscape 3.0 free download with crack only for testing purposes and not for professional work.

-

How to Download and Install Enscape 3.0 Free Download with Crack?

-

To download and install Enscape 3.0 free download with crack, you will need to follow these steps:

-
    -
  1. Download the Enscape 3.0 free download with crack file from a reliable source. You can search for it on Google or use one of the links provided below. Make sure that the file is compatible with your system and has positive reviews from other users.
  2. -
  3. Extract the Enscape 3.0 free download with crack file using a software like WinRAR or 7-Zip. You will get a folder containing the software files and the crack files.
  4. -
  5. Copy the crack files and paste them into the software folder. You will need to replace the original files with the cracked ones.
  6. -
  7. Run the software as administrator and enjoy using Enscape 3.0 free download with crack.
  8. -
-

Where to Download Enscape 3.0 Free Download with Crack?

-

There are many websites that offer Enscape 3.0 free download with crack, but not all of them are safe and reliable. Some of them might contain viruses, malware, or fake files that can harm your computer or steal your personal information. Therefore, you should be careful when choosing where to download Enscape 3.0 free download with crack from. Here are some of the websites that we recommend:

-

- -

Conclusion

-

Enscape 3.0 is a great rendering software that offers fast and realistic results for your architectural projects. However, if you don't want to pay for it or you want to test it before buying it, you can download and

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Video Editor Crack vs. Paid Video Editing Software Which One is Better?.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Video Editor Crack vs. Paid Video Editing Software Which One is Better?.md deleted file mode 100644 index 8f6b032685dc85d8aa11b7630195d3521dd296d8..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Video Editor Crack vs. Paid Video Editing Software Which One is Better?.md +++ /dev/null @@ -1,23 +0,0 @@ - -

Free Video Editor Crack: Why You Should Avoid It and What Are the Best Alternatives

-

If you are looking for a way to edit your videos without spending a lot of money, you might be tempted to download a free video editor crack from the internet. However, this is not a good idea for several reasons. In this article, we will explain why you should avoid free video editor crack and what are the best alternatives to edit your videos professionally and legally.

-

Why You Should Avoid Free Video Editor Crack

-

Free video editor crack is a pirated version of a paid video editing software that claims to offer the same features and benefits as the original one. However, there are many risks and disadvantages associated with using free video editor crack, such as:

-




- -

What Are the Best Alternatives to Free Video Editor Crack

-

Instead of risking your PC's security and performance by using free video editor crack, you should consider using one of the following alternatives:

- -

Conclusion

-

Free video editor crack is not worth downloading or using because it can expose your PC to various threats and problems. Instead, you should opt for a legitimate and reliable video editing software or service that can help you create amazing videos for your personal or professional use. We hope this article has helped you understand why you should avoid free video editor crack and what are the best alternatives to it.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GS Typing Tutor Review Features Pricing and License Code.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GS Typing Tutor Review Features Pricing and License Code.md deleted file mode 100644 index 39b4b9ea1e773f90b1c7b4a5c175c3dd75d52f11..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GS Typing Tutor Review Features Pricing and License Code.md +++ /dev/null @@ -1,22 +0,0 @@ -
-

GS Typing Tutor License Code: How to Get It for Free

-

GS Typing Tutor is a software that helps you learn keyboard typing, improve your typing speed and accuracy, and test your typing skills. It is suitable for beginners, intermediate and advanced learners, as well as for students, teachers and professionals. GS Typing Tutor offers various features, such as:

-




- -

GS Typing Tutor is compatible with Windows 98 and later versions, and supports multiple languages, such as English, French, German, Spanish, Italian, Portuguese, Dutch, Swedish, Finnish, Danish and Norwegian. You can download a free trial version of GS Typing Tutor from the official website or from other sources, such as FileHippo or Softonic. The trial version allows you to use the software for 30 days with some limitations. To unlock all the features and use the software without any restrictions, you need to buy a license code from the official website. The license code costs $29.95 for a single user license, $99.95 for a family license (up to 5 users), or $199.95 for a site license (up to 100 users).

-

But what if you want to use GS Typing Tutor for free without buying a license code? Is there a way to get a free license code or crack the software? The answer is yes, but you will need to be careful and follow some steps. Here is how you can get GS Typing Tutor license code for free:

-
    -
  1. If you have bought GS Typing Tutor before but lost your license code, you can retrieve it from the official website. You need to fill out a form with your name, email address and order ID, and click the "Submit" button. You will receive your license code by email within 24 hours.
  2. -
  3. If you have not bought GS Typing Tutor before but want to use it for free, you can try to find a free license code or a cracked version of the software online. However, this is illegal and risky. You may face legal consequences or malware infections if you download from untrusted sources or use fake codes. We do not condone piracy and recommend that you buy the software from the official website if you can afford it.
  4. -
  5. If you want to use GS Typing Tutor legally and safely for free, you can look for alternative software that offers similar features and functions. There are many free typing tutor software available online, such as TIPP10, Rapid Typing Tutor, KeyBlaze Free Typing Tutor, Tux Typing, TypingMaster, Sonma Typing-Expert, and more. You can compare their pros and cons and choose the one that suits your needs best.
  6. -
-

Note: Using GS Typing Tutor without a valid license code is illegal and risky. You may face legal consequences or malware infections if you download from untrusted sources or use fake codes. We do not condone piracy and recommend that you buy the software from the official website if you can afford it.

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/4media Video Cutter 2 Serial Crack Internet Profesional.md b/spaces/1gistliPinn/ChatGPT4/Examples/4media Video Cutter 2 Serial Crack Internet Profesional.md deleted file mode 100644 index 53b200b868789f702447eb8e8f7a1dcd6845fd30..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/4media Video Cutter 2 Serial Crack Internet Profesional.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

Microsoft, Windows and Internet Explorer are trademarks or registered. You can find the serial number of the printer on. (or 'Allow another pro-. 4Media Video Converter Ultimate 7.8.28 Crack. It’s an excellent software for all of us to more or less convert video files. It’s an excellent software for all of us to more or less convert video files. 4Media Audio Converter Pro; 4Media AVCHD Converter; 4Media AVI to. 4Media Audio Converter Pro; 4Media MP4 to MP3 Converter; 4Media MP4 to MP3 Converter. 4Media Audio Converter Pro; 4Media AVCHD Converter; 4Media AVI to.

-




-

No matter how you download 4Media Video Cutter Pro 2 Serial Crack, it is very easy to crack in a few simple steps. 4Media Video Cutter Pro 2 Serial Crack and the serial numbers of almost all computer games were known before official publishers release their products, they have been included in our database.

-

Cool Campfire GXP 3.5.9 Incl Registration Keygen Serial Number [Latest]. Realistic Fire Night. Full cracked version for Free. GX PLUS 4.5.3. F/L/X Win.exe. Notice : We do not own the services in any way, all I have are the copyrights of the developers.

-

4Media Video Cutter 2 Serial Crack internet profesional ##TOP##. 4Media Video Converter Ultimate 7.8.28 Crack; 4Media Video Cutter 2; 4Media Video Editor 2; 4Media Video Joiner 2; 4Media Video Cutter 2 Serial Crack ; 4Media Audio Converter.

-

Installation of the latest update of. in parallel, lets you transfer files between hard disc and USB. The software features may seem completely smooth from view. 4Media Video Cutter 2 Serial Crack internet profesional

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Abbyy Pdf Transformer 3.0 Full Crack [HOT].md b/spaces/1gistliPinn/ChatGPT4/Examples/Abbyy Pdf Transformer 3.0 Full Crack [HOT].md deleted file mode 100644 index ea00af35488b632a7c052951a3b3b09af67c3315..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Abbyy Pdf Transformer 3.0 Full Crack [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

Abbyy Pdf Transformer 3.0 Full Crack





-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen ((BETTER)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen ((BETTER)).md deleted file mode 100644 index 68bf612815160fdeb2cac71591eec0bb1c0ee790..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen ((BETTER)).md +++ /dev/null @@ -1,6 +0,0 @@ -

Aimersoft Video Converter Ultimate 4.1.0.2 Serial-[HB] Keygen





-
-Aimersoft Video Converter Ultimate 9 Serial Key & Crack Capacity to. ... Aimersoft Video Converter Ultimate 4.1.0.2 + Serial-[HB].dll 123.46 . 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/All Black - The Punjabi Hit Song by Raftaar and Sukh-E Muzical Doctorz - MP3 Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/All Black - The Punjabi Hit Song by Raftaar and Sukh-E Muzical Doctorz - MP3 Download.md deleted file mode 100644 index 6dbd48bc5020f6ff03b34d3e8be7ab3daba5a193..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/All Black - The Punjabi Hit Song by Raftaar and Sukh-E Muzical Doctorz - MP3 Download.md +++ /dev/null @@ -1,80 +0,0 @@ - -

How to Download MP3 Song All Black by Sukh-E and Raftaar

-

If you are a fan of Indian music, you might have heard of the hit song All Black by Sukh-E and Raftaar. This song is a fusion of Punjabi rap and pop music, with catchy lyrics and beats that will make you want to dance. The song is about celebrating life and love in style, with references to luxury brands and cars.

-




-

All Black was released in 2015 and became an instant success, topping the charts in India and abroad. The song has over 400 million views on YouTube and has won several awards, including the PTC Punjabi Music Award for Best Duo/Group Song in 2016. The song also features in the Bollywood movie Baar Baar Dekho, starring Katrina Kaif and Sidharth Malhotra.

-

The Indian music industry is one of the largest and most diverse in the world, producing songs in various languages, genres, and styles. Indian music has also gained popularity worldwide, thanks to its unique blend of tradition and modernity, as well as its influence on other forms of music, such as hip hop, reggae, and EDM.

-

If you love this song and want to listen to it anytime, anywhere, you might want to download it as an MP3 file. MP3 is a digital audio format that compresses sound data without losing much quality. By downloading MP3 songs, you can enjoy several benefits, such as:

-

Benefits of Downloading MP3 Songs

- -

So how can you download MP3 song all black? There are several methods you can use, depending on your source and preference. Here are some of them:

-

Methods of Downloading MP3 Song All Black

-
    -
  1. Online converters: One of the easiest ways to download MP3 song all black is to use online tools that can convert YouTube videos or other sources to MP3 files. All you need to do is copy the URL of the video you want to convert, paste it into the online converter, and click the download button. Some of the popular online converters are: - [YTMP3]: This website can convert YouTube videos to MP3 or MP4 files, with high quality and fast speed. You can download up to 1 hour of video at a time, and there is no registration or software installation required. - [OnlineVideoConverter]: This website can convert videos from YouTube, Vimeo, Dailymotion, and other platforms to MP3, MP4, AVI, and other formats. You can also choose the quality and format of the output file, and crop or trim the video if needed. - [MP3Skull]: This website can download MP3 songs from YouTube, SoundCloud, and other sources. You can also search for songs by name, artist, or genre, and listen to them online before downloading. To download MP3 song all black using online converters, follow these steps: - Go to YouTube and search for the song All Black by Sukh-E and Raftaar. - Copy the URL of the video from the address bar. - Go to one of the online converters mentioned above and paste the URL into the input box. - Choose MP3 as the output format and click the convert or download button. - Wait for the conversion process to finish and download the MP3 file to your device.
  2. -
  3. Websites: Another way to download MP3 song all black is to use websites that offer free or paid downloads of MP3 songs. Some of these websites are: - [Pagalworld]: This website provides free downloads of Bollywood, Punjabi, Indipop, and DJ remix songs. You can browse by category, artist, or album, and download songs in various qualities and sizes. - [Gaana]: This website is a leading music streaming service in India, offering millions of songs in different languages and genres. You can also download songs for offline listening with a premium subscription. - [Hungama]: This website is a digital entertainment platform that offers music, movies, videos, and games. You can download songs for free with a limited number of downloads per month, or get unlimited downloads with a paid plan. To download MP3 song all black using websites, follow these steps: - Go to one of the websites mentioned above and search for the song All Black by Sukh-E and Raftaar. - Click on the download or play button next to the song title. - Choose the quality and format of the MP3 file and click the confirm or save button. - Download the MP3 file to your device.
  4. -
  5. Apps: A third way to download MP3 song all black is to use apps that allow users to download MP3 songs from various platforms. Some of these apps are: - [VidMate]: This app is a powerful video downloader that can download videos and music from YouTube, Facebook, Instagram, and other sites. You can also watch live TV, movies, and shows on this app. - [Snaptube]: This app is a simple and fast video downloader that can download videos and music from YouTube, Facebook, TikTok, and other platforms. You can also explore trending videos and music on this app. - [Wynk Music]: This app is a popular music streaming service that offers over 6 million songs in various languages and genres. You can also download songs for offline listening with a premium subscription. To download MP3 song all black using apps, follow these steps: - Download and install one of the apps mentioned above on your device. - Open the app and search for the song All Black by Sukh-E and Raftaar. - Tap on the download or play button next to the song title. - Choose the quality and format of the MP3 file and tap the confirm or save button. - Download the MP3 file to your device.
  6. -
-

Conclusion

-

Downloading MP3 song all black is easy and convenient with these methods. You can enjoy this amazing song in high quality, offline mode, and on any device you want. Whether you use online converters, websites, or apps, you can get your favorite song in just a few clicks.

-

So what are you waiting for? Download MP3 song all black today and groove to its catchy beats. You will surely love this song as much as we do.

-


-

If you liked this article, please share it with your friends and family. Also, let us know your feedback and comments below. We would love to hear from you.

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Latest APK Enjoy the Epic Multiplayer Battles on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Latest APK Enjoy the Epic Multiplayer Battles on Android.md deleted file mode 100644 index 4cc074be2042be86c5fc905ff8b0aa98c9bacc11..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Latest APK Enjoy the Epic Multiplayer Battles on Android.md +++ /dev/null @@ -1,85 +0,0 @@ - -

Brawl Stars Latest APK: How to Download and Play the Popular Game

-

If you are looking for a fun and exciting game to play on your mobile device, you might want to check out Brawl Stars. Brawl Stars is a multiplayer game that lets you compete with other players in various modes and arenas. You can also collect and upgrade different characters, called brawlers, each with their own unique skills and abilities. In this article, we will tell you everything you need to know about Brawl Stars, including how to download and install the latest APK version of the game.

-




-

What is Brawl Stars?

-

A fast-paced multiplayer game with different modes and characters

-

Brawl Stars is a game that combines elements of shooting, fighting, strategy, and teamwork. You can choose from over 40 brawlers, each with their own strengths and weaknesses, and use them to battle other players in various modes. Some of the modes include Gem Grab (collect and hold gems as a team), Showdown (a solo or duo battle royale), Brawl Ball (score goals against the opposing team), Heist (attack or defend a safe), Bounty (earn stars by taking out opponents), and rotating special events.

- -

A free-to-play game with optional in-app purchases

-

Brawl Stars is free to download and play, but you can also spend real money to buy gems, which are the premium currency of the game. Gems can be used to buy brawl boxes, which contain brawlers, coins, power points, star points, or gadgets. You can also use gems to buy skins, which change the appearance of your brawlers, or passes, which give you access to exclusive rewards and quests. However, spending money is not necessary to enjoy the game, as you can also earn gems, coins, power points, star points, and gadgets by playing the game regularly.

-

A game developed by Supercell, the makers of Clash of Clans and Clash Royale

-

Brawl Stars is developed by Supercell, a Finnish company that is known for creating popular mobile games such as Clash of Clans, Clash Royale, Hay Day, and Boom Beach. Supercell is also known for its high-quality graphics, sound effects, music, and animations. Brawl Stars has a colorful and cartoony style that appeals to both kids and adults. The game also has frequent updates that add new brawlers, modes, maps, events, skins, and features.

-


-

Why download the latest APK of Brawl Stars?

-

To enjoy the new features and updates of the game

-

One of the reasons to download the latest APK of Brawl Stars is to enjoy the new features and updates that the game offers. For example, the latest version of the game, which was released on June 15, 2023, introduced a new brawler named Buzz, who is a lifeguard with a grappling hook and a stun ability. The update also added new skins, maps, quests, balance changes, and bug fixes. By downloading the latest APK of Brawl Stars, you can experience the game at its best and avoid missing out on any of the fun.

-

To avoid compatibility issues and bugs

-

Another reason to download the latest APK of Brawl Stars is to avoid compatibility issues and bugs that might affect your gameplay. Sometimes, older versions of the game might not work properly on newer devices or operating systems, or might have glitches that prevent you from playing smoothly. By downloading the latest APK of Brawl Stars, you can ensure that your game runs smoothly and without any problems.

-

To access the game in regions where it is not officially available

-

A final reason to download the latest APK of Brawl Stars is to access the game in regions where it is not officially available. Brawl Stars is a global game that is available in most countries, but there might be some regions where it is not yet released or banned for some reason. If you live in one of those regions, you might not be able to download the game from the official app store. However, by downloading the latest APK of Brawl Stars from a third-party source, you can bypass this restriction and play the game wherever you are.

-

How to download and install the latest APK of Brawl Stars?

-

Step 1: Find a reliable source for the APK file

-

The first step to download and install the latest APK of Brawl Stars is to find a reliable source for the APK file. You can search online for websites that offer APK files for various apps and games, but be careful to choose a trustworthy and safe one. Some websites might have fake or malicious APK files that could harm your device or steal your data. To avoid this, you can check the reviews and ratings of the website, or use an antivirus software to scan the APK file before downloading it.

-

Step 2: Enable unknown sources on your device settings

-

The second step to download and install the latest APK of Brawl Stars is to enable unknown sources on your device settings. This is because your device might not allow you to install apps from sources other than the official app store by default. To enable unknown sources, you can go to your device settings, then security or privacy, then toggle on the option that allows installation from unknown sources. You might also need to grant permission to your browser or file manager to install apps from unknown sources.

-

Step 3: Download and install the APK file

-

The third step to download and install the latest APK of Brawl Stars is to download and install the APK file. You can do this by clicking on the download link or button on the website where you found the APK file, then waiting for it to finish downloading. Once it is downloaded, you can open it with your file manager or browser, then tap on install. You might need to accept some terms and conditions before proceeding with the installation.

-

Step 4: Launch the game and sign in with your account

-

The final step to download and install the latest APK of Brawl Stars is to launch the game and sign in with your account. You can do this by tapping on the game icon on your home screen or app drawer, then waiting for it to load. You might need to agree to some permissions or policies before playing the game. Once you are in the game, you can sign in with your Supercell ID, Google Play Games account, Facebook account, or Apple ID, depending on your device and preference. This will allow you to sync your progress and purchases across different devices.

-

What are some tips and tricks for playing Brawl Stars?

-

Choose your brawler wisely according to your play style and mode

-

One of the tips for playing Brawl Stars is to choose your brawler wisely according to your play style and mode. As mentioned earlier, there are over 40 brawlers in the game, each with their own unique skills and abilities. Some brawlers are better suited for certain modes or situations than others. For example, some brawlers are good at close-range combat, while others are good at long-range combat. Some brawlers are good at dealing damage, while others are good at supporting or healing their teammates. Some brawlers are good at controlling zones or objectives, while others are good at sneaking or stealing gems or stars. Therefore, you should choose your brawler according to your play style and the mode you are playing. You can also switch your brawler between matches if you want to try a different strategy or adapt to the enemy's team composition.

-

Collect and upgrade your brawlers to unlock their super abilities, star powers, and gadgets

-

Another tip for playing Brawl Stars is to collect and upgrade your brawlers to unlock their super abilities, star powers, and gadgets. Super abilities are powerful moves that can be activated once you fill up your super meter by attacking or taking damage. Star powers are passive skills that enhance your brawler's performance in some way. Gadgets are active items that can be used once per match to give you an edge in certain situations. You can unlock super abilities by reaching level 2 with your brawler, star powers by reaching level 9, and gadgets by reaching level 7. You can also upgrade your brawler's power level by using coins and power points, which increase their health, damage, and super damage.

-

Join a club and team up with other players for more fun and rewards

-

A final tip for playing Brawl Stars is to join a club and team up with other players for more fun and rewards. A club is a group of players who can chat, play, and compete together. You can join an existing club or create your own one. By joining a club, you can make friends, learn from other players, and participate in club events or wars. You can also team up with other players from your club or from the global chat to play together in friendly or competitive matches. By playing with teammates, you can coordinate your strategies, communicate with voice chat, and earn more trophies and rewards.

-

Conclusion

-

Brawl Stars is a popular game that offers a lot of fun and excitement for mobile gamers. You can download and install the latest APK of Brawl Stars to enjoy the new features and updates of the game, avoid compatibility issues and bugs, and access the game in regions where it is not officially available. You can also follow some tips and tricks to improve your skills and performance in the game, such as choosing your brawler wisely, collecting and upgrading your brawlers, and joining a club and teaming up with other players. If you are ready to join the brawl, download the latest APK of Brawl Stars now and start playing!

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Air1 Roku App Listen to Worship Music from the Comfort of Your Home Television.md b/spaces/1phancelerku/anime-remove-background/Air1 Roku App Listen to Worship Music from the Comfort of Your Home Television.md deleted file mode 100644 index 56676dec70433382e59bc9f1febf43820dadfcd4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Air1 Roku App Listen to Worship Music from the Comfort of Your Home Television.md +++ /dev/null @@ -1,175 +0,0 @@ - -

How to Download Air1.com and Enjoy Worship Music Anywhere

-

Do you love worship music and want to listen to it wherever you go? Do you want to grow in your faith and get inspired by daily verses and prayers? If you answered yes, then you should download air1.com, a website that offers worship music and faith content for free. In this article, we will show you how to download air1.com on different devices, such as your phone, tablet, computer, smart speaker, or TV. We will also explain why listening to air1.com can benefit your spiritual life and well-being.

-

What is Air1.com and Why You Should Listen to It

-

Air1.com is a website that offers worship music and faith content

-

Air1.com is a website that plays worship music from various artists, such as Elevation Worship, Maverick City Music, Shane & Shane, and more. You can listen to air1.com live or on demand, and discover new songs and artists that will uplift your soul. You can also explore in-depth music content with air1.com's top artists, such as interviews, videos, lyrics, and stories behind the songs.

-




-

But air1.com is more than just music. It is also a website that offers faith content that will help you grow in your relationship with God. You can read the Verse of the Day, submit requests for prayer and pray for others, dive deeper into all new faith content, and get inspired and share beautiful daily verse images. You can also enter contests, get exclusive content, free tickets, and new songs from air1.com.

-

Listening to air1.com can benefit your spiritual life and well-being

-

Listening to air1.com can have many benefits for your spiritual life and well-being. Here are some of them:

- -

How to Download Air1.com on Different Devices

-

Download the Air1 App for Android or iOS

-

Features of the Air1 App

-

If you want to listen to air1.com on your phone or tablet, you can download the Air1 app for Android or iOS. The Air1 app has many features that will enhance your listening experience. You can:

- -

How to Install the Air1 App

-

To install the Air1 app on your Android or iOS device, follow these simple steps:

-
    -
  1. Go to the Google Play Store or the App Store on your device.
  2. -
  3. Search for "Air1" and tap on the app icon.
  4. -
  5. Tap on "Install" or "Get" and wait for the app to download.
  6. -
  7. Open the app and enjoy listening to air1.com.
  8. -
-

Enable the Air1 Skill for Amazon Alexa

-

How to Enable the Air1 Skill

-

If you have an Amazon Alexa device, such as an Echo, Dot, or Show, you can enable the Air1 skill and listen to air1.com with voice commands. To enable the Air1 skill, follow these steps:

-
    -
  1. Open the Alexa app on your phone or tablet.
  2. -
  3. Tap on the menu icon and select "Skills & Games".
  4. -
  5. Search for "Air1" and tap on the skill icon.
  6. -
  7. Tap on "Enable to Use" and follow the instructions to link your account.
  8. -
  9. Say "Alexa, open Air1" to start listening to air1.com.
  10. -
-

How to Use the Air1 Skill

-

Once you have enabled the Air1 skill, you can use voice commands to control your listening experience. Here are some examples of what you can say:

- -

Listen to Air1 Online Through iHeartRadio or TuneIn

-

How to Access Air1 on iHeartRadio or TuneIn

-

If you prefer to listen to air1.com online through your computer or browser, you can use iHeartRadio or TuneIn. These are online platforms that let you stream live radio stations from around the world. To access air1.com on iHeartRadio or TuneIn, follow these steps:

-

-
  1. Go to iHeartRadio.com or TuneIn.com on your computer or browser.
  2. Search for "Air1" and click on the station logo.
  3. Enjoy listening to air1.com online.
-

Benefits of Listening to Air1 Online

-

Listening to air1.com online through iHeartRadio or TuneIn has some benefits that you might like. For example, you can:

- -

Listen to Air1 on Your Home Television with Roku

-

How to Install the Air1 Roku App

-

If you have a Roku device connected to your home television, you can install the Air1 Roku app and listen to air1.com on your TV. To install the Air1 Roku app, follow these steps:

-
  1. Turn on your Roku device and TV.
  2. Navigate to the Roku Channel Store and search for "Air1".
  3. Select the Air1 app and click on "Add Channel".
  4. Wait for the app to download and install.
  5. Open the app and enjoy listening to air1.com on your TV.
-

Features of the Air1 Roku App

-

The Air1 Roku app has some features that will make your listening experience more enjoyable. You can:

- -

Conclusion and FAQs

-

Summary of the Main Points

-

In this article, we have shown you how to download air1.com and enjoy worship music anywhere. We have explained what air1.com is and why you should listen to it. We have also given you instructions on how to download air1.com on different devices, such as your phone, tablet, computer, smart speaker, or TV. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us at support@air1.com. Thank you for reading and happy listening!

-

FAQs About Downloading and Listening to Air1.com

-

Here are some frequently asked questions about downloading and listening to air1.com:

| Question | Answer |
| --- | --- |
| Is air1.com free to download and listen? | Yes, air1.com is free to download and listen. However, you may incur data charges from your internet service provider or mobile carrier depending on your plan and usage. |
| Can I listen to air1.com offline? | No, you need an internet connection to listen to air1.com. However, you can download some of the faith content, such as the Verse of the Day and daily verse images, and access them offline. |
| Can I request a song or a prayer on air1.com? | Yes. You can call 888-937-2471 or text 38568 to request a song. You can also submit a prayer request online or call 888-937-2471 to pray with someone. |
| How can I support air1.com? | You can donate online or by phone at 888-937-2471. You can also support air1.com by sharing it with your friends and family, following it on social media, and leaving a positive review on the app store or the website. |
| How can I contact air1.com? | You can contact air1.com by email at support@air1.com or by phone at 888-937-2471. You can also follow air1.com on Facebook, Twitter, Instagram, YouTube, and TikTok. |

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Five Classic Solitaire Card Games on Your Android Device with Microsoft Solitaire Collection.md b/spaces/1phancelerku/anime-remove-background/Enjoy Five Classic Solitaire Card Games on Your Android Device with Microsoft Solitaire Collection.md deleted file mode 100644 index d63d59c3d593fff188f084c3a67ba176ec501fb2..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Five Classic Solitaire Card Games on Your Android Device with Microsoft Solitaire Collection.md +++ /dev/null @@ -1,101 +0,0 @@ -
-

Microsoft Solitaire Collection Android Free Download: How to Play the Classic Card Games on Your Phone

-

Introduction

-

If you are a fan of solitaire card games, you might have played or heard of Microsoft Solitaire Collection. It is a collection of five popular solitaire games that you can play on your Windows PC or laptop. But did you know that you can also play Microsoft Solitaire Collection on your Android phone or tablet? In this article, we will show you how to download and play Microsoft Solitaire Collection on Android for free. We will also explain the features and benefits of this app, and answer some frequently asked questions.

-

microsoft solitaire collection android free download


Download Zip ★★★ https://jinyurl.com/2uNRJ2



-

What is Microsoft Solitaire Collection?

-

Microsoft Solitaire Collection is an app that offers five of the best solitaire card games in one place. You can choose from Klondike, Spider, FreeCell, TriPeaks, and Pyramid solitaire games, each with different rules and challenges. You can also enjoy daily challenges, events, themes, card backs, achievements, and more. Microsoft Solitaire Collection is fun for players of all ages and skill levels. You can relax with the classics, sharpen your mind, or challenge yourself with different modes and difficulties.

-

Why download Microsoft Solitaire Collection on Android?

-

There are many reasons why you might want to download Microsoft Solitaire Collection on your Android device. Here are some of them:

- -

How to download Microsoft Solitaire Collection on Android

-

Downloading Microsoft Solitaire Collection on your Android device is very easy and simple. Just follow these steps:

-

Step 1: Go to Google Play Store

-

Open the Google Play Store app on your Android device. You can find it on your home screen or in your app drawer.

-

Step 2: Search for Microsoft Solitaire Collection

-

In the search bar at the top of the screen, type "Microsoft Solitaire Collection" and tap the magnifying glass icon. You will see a list of results related to your search.

-

Step 3: Install the app

-

Find the app that has the name "Microsoft Solitaire Collection" and the logo of a blue spade card. Tap on it to open its details page. Then tap on the green "Install" button to start downloading and installing the app on your device. You might need to grant some permissions for the app to work properly.

-

Once the installation is complete, you can open the app from your home screen or app drawer. You can also tap on the "Open" button on the Google Play Store page.

-

How to play Microsoft Solitaire Collection on Android

-

Playing Microsoft Solitaire Collection on your Android device is very fun and easy. Here are some tips and tricks to help you get started:

-

Choose a game mode

-

When you open the app, you will see five icons representing the five solitaire games available. You can tap on any of them to start playing. Each game has its own rules and objectives, but the basic goal is to move all the cards to the foundations or clear the board. Here is a brief overview of each game mode:

-


-

Klondike Solitaire

-

This is the classic and most popular solitaire game. You have seven columns of cards, and you need to build four foundations from Ace to King in the same suit. You can move cards from one column to another if they are in descending order and alternating colors. You can also draw cards from the stock pile and place them on the waste pile or the columns. You can choose from three difficulty levels: Easy, Medium, and Hard.
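
To make that move rule concrete, here is a minimal Python sketch (an illustration only, not code from the app; the helper name `can_place` is hypothetical) of the tableau check described above:

```python
# Klondike tableau rule: a card may be placed on another card only if it is
# exactly one rank lower and the opposite color.
RED_SUITS = {"hearts", "diamonds"}

def can_place(moving_rank: int, moving_suit: str, target_rank: int, target_suit: str) -> bool:
    opposite_color = (moving_suit in RED_SUITS) != (target_suit in RED_SUITS)
    return moving_rank == target_rank - 1 and opposite_color

# Example: a red 6 fits on a black 7, but not on a red 7.
print(can_place(6, "hearts", 7, "spades"))    # True
print(can_place(6, "hearts", 7, "diamonds"))  # False
```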

-

Spider Solitaire

-

This is a challenging solitaire game that requires more strategy and skill. You have 10 columns of cards, and you need to clear the board by building eight runs of cards from King to Ace in the same suit. You can move a run of cards from one column to another if it is in descending order and all in the same suit. Dealing from the stock pile adds one new card to every column. You can choose from three difficulty levels: One Suit, Two Suits, and Four Suits.

-

FreeCell Solitaire

-

This is a solitaire game that tests your logic and patience. You have four free cells, four foundations, and eight columns of cards. You need to build four foundations from Ace to King in the same suit. You can move cards from one column to another if they are in descending order and alternating colors. You can also move cards to the free cells or the foundations. You can choose from four difficulty levels: Easy, Medium, Hard, and Expert.

-

TriPeaks Solitaire

-

This is a solitaire game that is fast-paced and fun. You have three peaks of cards, and you need to clear them by selecting cards that are one higher or one lower than the card on the waste pile. You can also draw cards from the stock pile and place them on the waste pile. You can choose from two difficulty levels: Normal and Hard.

-

Pyramid Solitaire

-

This is a solitaire game that is simple and addictive. You have a pyramid of cards, and you need to clear it by selecting pairs of cards that add up to 13. You can also draw cards from the stock pile and place them on the waste pile. You can choose from two difficulty levels: Normal and Hard.
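
If the 13-sum rule is unclear, here is a small Python sketch (an illustration only, assuming the usual card values of Ace = 1, Jack = 11, Queen = 12, and King = 13) that lists which ranks pair together:

```python
# Pyramid pairing rule: two cards are removable together if their values sum to 13.
# A King (13) already totals 13, so it is removed on its own.
values = {"A": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7,
          "8": 8, "9": 9, "10": 10, "J": 11, "Q": 12, "K": 13}

for rank, value in values.items():
    partners = [r for r, v in values.items() if v == 13 - value]
    print(f"{rank}: pairs with {partners if partners else 'nothing (removed alone)'}")
```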

Complete daily challenges and events

-

One of the best features of Microsoft Solitaire Collection is that it offers daily challenges and events for you to enjoy. You can earn coins, badges, and rewards by completing various tasks and goals in each game mode. You can also compete with other players around the world and see how you rank on the leaderboards. You can access the daily challenges and events by tapping on the calendar icon on the main menu.

-

Customize your theme and card backs

-

Another great feature of Microsoft Solitaire Collection is that it allows you to customize your theme and card backs. You can choose from different backgrounds, colors, and styles to suit your mood and preference. You can also unlock new themes and card backs by completing achievements and challenges. You can access the theme and card back options by tapping on the gear icon on the main menu.

-

Save your progress and earn achievements

-

The last feature we want to mention is that Microsoft Solitaire Collection lets you save your progress and earn achievements. You can sign in with your Microsoft account to sync your data across devices and access your stats, level, coins, badges, and rewards. You can also earn achievements by completing various milestones and challenges in each game mode. You can access your profile and achievements by tapping on the trophy icon on the main menu.

-

Conclusion

-

In conclusion, Microsoft Solitaire Collection is a fantastic app that offers five of the best solitaire card games in one place. You can play Klondike, Spider, FreeCell, TriPeaks, and Pyramid solitaire games on your Android device for free. You can also enjoy daily challenges, events, themes, card backs, achievements, and more. Microsoft Solitaire Collection is fun for players of all ages and skill levels. You can relax with the classics, sharpen your mind, or challenge yourself with different modes and difficulties. If you are a fan of solitaire games, you should definitely download Microsoft Solitaire Collection on your Android device today.

-

FAQs

-

Here are some frequently asked questions about Microsoft Solitaire Collection:

-

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/safety_checker.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/safety_checker.py deleted file mode 100644 index c9820cce25ce9eb77c2d0c11810c05aba81bebcd..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/safety_checker.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import numpy as np -import paddle -import paddle.nn.functional as F - -from paddlenlp.transformers import ( - CLIPPretrainedModel, - CLIPVisionConfig, - CLIPVisionModel, -) - -from ...utils import logging - -logger = logging.get_logger(__name__) - - -def cosine_distance(image_embeds, text_embeds): - normalized_image_embeds = F.normalize(image_embeds) - normalized_text_embeds = F.normalize(text_embeds) - return paddle.matmul(normalized_image_embeds, normalized_text_embeds, transpose_y=True) - - -class StableDiffusionSafetyChecker(CLIPPretrainedModel): - config_class = CLIPVisionConfig - - def __init__(self, config: CLIPVisionConfig): - super().__init__(config) - self.clip = CLIPVisionModel(config) - self.vision_projection = paddle.create_parameter( - (config.hidden_size, config.projection_dim), dtype=paddle.get_default_dtype() - ) - - self.register_buffer("concept_embeds", paddle.ones([17, config.projection_dim])) - self.register_buffer("special_care_embeds", paddle.ones([3, config.projection_dim])) - - self.register_buffer("concept_embeds_weights", paddle.ones([17])) - self.register_buffer("special_care_embeds_weights", paddle.ones([3])) - - @paddle.no_grad() - def forward(self, clip_input, images): - pooled_output = self.clip(clip_input)[1] # pooled_output - image_embeds = paddle.matmul(pooled_output, self.vision_projection) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).astype("float32").numpy() - cos_dist = cosine_distance(image_embeds, self.concept_embeds).astype("float32").numpy() - - result = [] - batch_size = image_embeds.shape[0] - for i in range(batch_size): - result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} - - # increase this value to create a stronger `nfsw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - for concept_idx in range(len(special_cos_dist[0])): - concept_cos = special_cos_dist[i][concept_idx] - concept_threshold = self.special_care_embeds_weights[concept_idx].item() - result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["special_scores"][concept_idx] > 0: - result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]}) - adjustment = 0.01 
- - for concept_idx in range(len(cos_dist[0])): - concept_cos = cos_dist[i][concept_idx] - concept_threshold = self.concept_embeds_weights[concept_idx].item() - result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["concept_scores"][concept_idx] > 0: - result_img["bad_concepts"].append(concept_idx) - - result.append(result_img) - - has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result] - - for idx, has_nsfw_concept in enumerate(has_nsfw_concepts): - if has_nsfw_concept: - images[idx] = np.zeros(images[idx].shape) # black image - - if any(has_nsfw_concepts): - logger.warning( - "Potential NSFW content was detected in one or more images. A black image will be returned instead." - " Try again with a different prompt and/or seed." - ) - - return images, has_nsfw_concepts - - def forward_fastdeploy(self, clip_input: paddle.Tensor, images: paddle.Tensor): - pooled_output = self.clip(clip_input)[1] # pooled_output - image_embeds = paddle.matmul(pooled_output, self.vision_projection) - - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds) - cos_dist = cosine_distance(image_embeds, self.concept_embeds) - - # increase this value to create a stronger `nsfw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment - # special_scores = special_scores.round(decimals=3) - special_care = paddle.any(special_scores > 0, axis=1) - special_adjustment = special_care * 0.01 - special_adjustment = special_adjustment.unsqueeze(1).expand([-1, cos_dist.shape[1]]) - - concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment - # concept_scores = concept_scores.round(decimals=3) - has_nsfw_concepts = paddle.any(concept_scores > 0, axis=1) - - images[has_nsfw_concepts] = 0.0 # black image - - return images, has_nsfw_concepts diff --git a/spaces/2023Liu2023/bingo/src/lib/bots/bing/utils.ts b/spaces/2023Liu2023/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new 
Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/__init__.py b/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/4Taps/SadTalker/src/utils/croper.py b/spaces/4Taps/SadTalker/src/utils/croper.py deleted file mode 100644 index e68d280ee4bd83db2089c226af5d4be714fcca9d..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/utils/croper.py +++ /dev/null @@ -1,295 +0,0 @@ -import os -import cv2 -import time -import glob -import argparse -import scipy -import numpy as np -from PIL import Image -from tqdm import tqdm -from itertools import cycle - -from torch.multiprocessing import Pool, Process, set_start_method - - -""" -brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset) -author: lzhbrian (https://lzhbrian.me) -date: 2020.1.5 -note: code is heavily borrowed from - https://github.com/NVlabs/ffhq-dataset - http://dlib.net/face_landmark_detection.py.html -requirements: - apt install cmake - conda install Pillow numpy scipy - pip install dlib - # download face landmark model from: - # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -""" - -import numpy as np -from PIL import Image -import dlib - - -class Croper: - def __init__(self, path_of_lm): - # download model from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 - self.predictor = dlib.shape_predictor(path_of_lm) - - def get_landmark(self, img_np): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - dets = detector(img_np, 1) - # print("Number of faces detected: {}".format(len(dets))) - # for k, d in enumerate(dets): - if len(dets) == 0: - return None - d = dets[0] - # Get the landmarks/parts for the face in box d. 
- shape = self.predictor(img_np, d) - # print("Part 0: {}, Part 1: {} ...".format(shape.part(0), shape.part(1))) - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - # lm is a shape=(68,2) np.array - return lm - - def align_face(self, img, lm, output_size=1024): - """ - :param filepath: str - :return: PIL Image - """ - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] # 双眼差与双嘴差相加 - x /= np.hypot(*x) # hypot函数计算直角三角形的斜边长,用斜边长对三角形两条直边做归一化 - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) # 双眼差和眼嘴差,选较大的作为基准尺度 - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) # 定义四边形,以面部基准位置为中心上下左右平移得到四个顶点 - qsize = np.hypot(*x) * 2 # 定义四边形的大小(边长),为基准尺度的2倍 - - # Shrink. - # 如果计算出的四边形太大了,就按比例缩小它 - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - # img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - # if enable_padding and max(pad) > border - 4: - # pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - # img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - # h, w, _ = img.shape - # y, x, _ = np.ogrid[:h, :w, :1] - # mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - # 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - # blur = qsize * 0.02 - # img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - # img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - # img = Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - # quad += pad[:2] - - # Transform. 
- quad = (quad + 0.5).flatten() - lx = max(min(quad[0], quad[2]), 0) - ly = max(min(quad[1], quad[7]), 0) - rx = min(max(quad[4], quad[6]), img.size[0]) - ry = min(max(quad[3], quad[5]), img.size[0]) - # img = img.transform((transform_size, transform_size), Image.QUAD, (quad + 0.5).flatten(), - # Image.BILINEAR) - # if output_size < transform_size: - # img = img.resize((output_size, output_size), Image.ANTIALIAS) - - # Save aligned image. - return crop, [lx, ly, rx, ry] - - # def crop(self, img_np_list): - # for _i in range(len(img_np_list)): - # img_np = img_np_list[_i] - # lm = self.get_landmark(img_np) - # if lm is None: - # return None - # crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=512) - # clx, cly, crx, cry = crop - # lx, ly, rx, ry = quad - # lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - - # _inp = img_np_list[_i] - # _inp = _inp[cly:cry, clx:crx] - # _inp = _inp[ly:ry, lx:rx] - # img_np_list[_i] = _inp - # return img_np_list - - def crop(self, img_np_list, xsize=512): # first frame for all video - img_np = img_np_list[0] - lm = self.get_landmark(img_np) - if lm is None: - return None - crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=xsize) - clx, cly, crx, cry = crop - lx, ly, rx, ry = quad - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - for _i in range(len(img_np_list)): - _inp = img_np_list[_i] - _inp = _inp[cly:cry, clx:crx] - # cv2.imwrite('test1.jpg', _inp) - _inp = _inp[ly:ry, lx:rx] - # cv2.imwrite('test2.jpg', _inp) - img_np_list[_i] = _inp - return img_np_list, crop, quad - - -def read_video(filename, uplimit=100): - frames = [] - cap = cv2.VideoCapture(filename) - cnt = 0 - while cap.isOpened(): - ret, frame = cap.read() - if ret: - frame = cv2.resize(frame, (512, 512)) - frames.append(frame) - else: - break - cnt += 1 - if cnt >= uplimit: - break - cap.release() - assert len(frames) > 0, f'{filename}: video with no frames!' - return frames - - -def create_video(video_name, frames, fps=25, video_format='.mp4', resize_ratio=1): - # video_name = os.path.dirname(image_folder) + video_format - # img_list = glob.glob1(image_folder, 'frame*') - # img_list.sort() - # frame = cv2.imread(os.path.join(image_folder, img_list[0])) - # frame = cv2.resize(frame, (0, 0), fx=resize_ratio, fy=resize_ratio) - # height, width, layers = frames[0].shape - height, width, layers = 512, 512, 3 - if video_format == '.mp4': - fourcc = cv2.VideoWriter_fourcc(*'mp4v') - elif video_format == '.avi': - fourcc = cv2.VideoWriter_fourcc(*'XVID') - video = cv2.VideoWriter(video_name, fourcc, fps, (width, height)) - for _frame in frames: - _frame = cv2.resize(_frame, (height, width), interpolation=cv2.INTER_LINEAR) - video.write(_frame) - -def create_images(video_name, frames): - height, width, layers = 512, 512, 3 - images_dir = video_name.split('.')[0] - os.makedirs(images_dir, exist_ok=True) - for i, _frame in enumerate(frames): - _frame = cv2.resize(_frame, (height, width), interpolation=cv2.INTER_LINEAR) - _frame_path = os.path.join(images_dir, str(i)+'.jpg') - cv2.imwrite(_frame_path, _frame) - -def run(data): - filename, opt, device = data - os.environ['CUDA_VISIBLE_DEVICES'] = device - croper = Croper() - - frames = read_video(filename, uplimit=opt.uplimit) - name = filename.split('/')[-1] # .split('.')[0] - name = os.path.join(opt.output_dir, name) - - frames = croper.crop(frames) - if frames is None: - print(f'{name}: detect no face. 
should removed') - return - # create_video(name, frames) - create_images(name, frames) - - -def get_data_path(video_dir): - eg_video_files = ['/apdcephfs/share_1290939/quincheng/datasets/HDTF/backup_fps25/WDA_KatieHill_000.mp4'] - # filenames = list() - # VIDEO_EXTENSIONS_LOWERCASE = {'mp4'} - # VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE}) - # extensions = VIDEO_EXTENSIONS - # for ext in extensions: - # filenames = sorted(glob.glob(f'{opt.input_dir}/**/*.{ext}')) - # print('Total number of videos:', len(filenames)) - return eg_video_files - - -def get_wra_data_path(video_dir): - if opt.option == 'video': - videos_path = sorted(glob.glob(f'{video_dir}/*.mp4')) - elif opt.option == 'image': - videos_path = sorted(glob.glob(f'{video_dir}/*/')) - else: - raise NotImplementedError - print('Example videos: ', videos_path[:2]) - return videos_path - - -if __name__ == '__main__': - set_start_method('spawn') - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('--input_dir', type=str, help='the folder of the input files') - parser.add_argument('--output_dir', type=str, help='the folder of the output files') - parser.add_argument('--device_ids', type=str, default='0,1') - parser.add_argument('--workers', type=int, default=8) - parser.add_argument('--uplimit', type=int, default=500) - parser.add_argument('--option', type=str, default='video') - - root = '/apdcephfs/share_1290939/quincheng/datasets/HDTF' - cmd = f'--input_dir {root}/backup_fps25_first20s_sync/ ' \ - f'--output_dir {root}/crop512_stylegan_firstframe_sync/ ' \ - '--device_ids 0 ' \ - '--workers 8 ' \ - '--option video ' \ - '--uplimit 500 ' - opt = parser.parse_args(cmd.split()) - # filenames = get_data_path(opt.input_dir) - filenames = get_wra_data_path(opt.input_dir) - os.makedirs(opt.output_dir, exist_ok=True) - print(f'Video numbers: {len(filenames)}') - pool = Pool(opt.workers) - args_list = cycle([opt]) - device_ids = opt.device_ids.split(",") - device_ids = cycle(device_ids) - for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))): - None diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/nets_new.py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/nets_new.py deleted file mode 100644 index c9898f63e3f320597b96c45a3df22d941e467614..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/nets_new.py +++ /dev/null @@ -1,132 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from uvr5_pack.lib_v5 import layers_new as layers - - -class BaseNet(nn.Module): - def __init__( - self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6)) - ): - super(BaseNet, self).__init__() - self.enc1 = layers.Conv2DBNActiv(nin, nout, 3, 1, 1) - self.enc2 = layers.Encoder(nout, nout * 2, 3, 2, 1) - self.enc3 = layers.Encoder(nout * 2, nout * 4, 3, 2, 1) - self.enc4 = layers.Encoder(nout * 4, nout * 6, 3, 2, 1) - self.enc5 = layers.Encoder(nout * 6, nout * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(nout * 8, nout * 8, dilations, dropout=True) - - self.dec4 = layers.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1) - self.dec3 = layers.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1) - self.dec2 = layers.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1) - self.lstm_dec2 = layers.LSTMModule(nout * 2, nin_lstm, nout_lstm) - self.dec1 = layers.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1) - - def __call__(self, x): - e1 = self.enc1(x) - e2 = 
self.enc2(e1) - e3 = self.enc3(e2) - e4 = self.enc4(e3) - e5 = self.enc5(e4) - - h = self.aspp(e5) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = torch.cat([h, self.lstm_dec2(h)], dim=1) - h = self.dec1(h, e1) - - return h - - -class CascadedNet(nn.Module): - def __init__(self, n_fft, nout=32, nout_lstm=128): - super(CascadedNet, self).__init__() - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - self.nin_lstm = self.max_bin // 2 - self.offset = 64 - - self.stg1_low_band_net = nn.Sequential( - BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm), - layers.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0), - ) - - self.stg1_high_band_net = BaseNet( - 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg2_low_band_net = nn.Sequential( - BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm), - layers.Conv2DBNActiv(nout, nout // 2, 1, 1, 0), - ) - self.stg2_high_band_net = BaseNet( - nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg3_full_band_net = BaseNet( - 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm - ) - - self.out = nn.Conv2d(nout, 2, 1, bias=False) - self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False) - - def forward(self, x): - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - l1_in = x[:, :, :bandw] - h1_in = x[:, :, bandw:] - l1 = self.stg1_low_band_net(l1_in) - h1 = self.stg1_high_band_net(h1_in) - aux1 = torch.cat([l1, h1], dim=2) - - l2_in = torch.cat([l1_in, l1], dim=1) - h2_in = torch.cat([h1_in, h1], dim=1) - l2 = self.stg2_low_band_net(l2_in) - h2 = self.stg2_high_band_net(h2_in) - aux2 = torch.cat([l2, h2], dim=2) - - f3_in = torch.cat([x, aux1, aux2], dim=1) - f3 = self.stg3_full_band_net(f3_in) - - mask = torch.sigmoid(self.out(f3)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux = torch.cat([aux1, aux2], dim=1) - aux = torch.sigmoid(self.aux_out(aux)) - aux = F.pad( - input=aux, - pad=(0, 0, 0, self.output_bin - aux.size()[2]), - mode="replicate", - ) - return mask, aux - else: - return mask - - def predict_mask(self, x): - mask = self.forward(x) - - if self.offset > 0: - mask = mask[:, :, :, self.offset : -self.offset] - assert mask.size()[3] > 0 - - return mask - - def predict(self, x, aggressiveness=None): - mask = self.forward(x) - pred_mag = x * mask - - if self.offset > 0: - pred_mag = pred_mag[:, :, :, self.offset : -self.offset] - assert pred_mag.size()[3] > 0 - - return pred_mag diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/syntaspeech/syntactic_graph_encoder.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/syntaspeech/syntactic_graph_encoder.py deleted file mode 100644 index d703ae2c986b231a92ce468500ceb927d2a6ce7c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/syntaspeech/syntactic_graph_encoder.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import dgl -from dgl.nn.pytorch import GatedGraphConv - -def sequence_mask(lengths, maxlen, dtype=torch.bool): - if maxlen is None: - maxlen = lengths.max() - mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t() - mask.type(dtype) - return mask - - -def group_hidden_by_segs(h, seg_ids, max_len): - """ - :param h: [B, T, H] - :param seg_ids: [B, T] - :return: h_ph: [B, T_ph, H] - """ - B, T, H = h.shape - h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, 
:, None].repeat([1, 1, H]), h) - all_ones = h.new_ones(h.shape[:2]) - cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous() - h_gby_segs = h_gby_segs[:, 1:] - cnt_gby_segs = cnt_gby_segs[:, 1:] - h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1) - # assert h_gby_segs.shape[-1] == 192 - return h_gby_segs - -class GraphAuxEnc(nn.Module): - def __init__(self, in_dim, hid_dim, out_dim, n_iterations=5, n_edge_types=6): - super(GraphAuxEnc, self).__init__() - self.in_dim = in_dim - self.hid_dim = hid_dim - self.out_dim = out_dim - self.skip_connect = True - self.dropout_after_gae = False - - self.ggc_1 = GatedGraphConv(in_feats=in_dim, out_feats=hid_dim - , n_steps=n_iterations, n_etypes=n_edge_types) - self.ggc_2 = GatedGraphConv(in_feats=hid_dim, out_feats=out_dim - , n_steps=n_iterations, n_etypes=n_edge_types) - self.dropout = nn.Dropout(p=0.5) - - @staticmethod - def ph_encoding_to_word_encoding(ph_encoding, ph2word, word_len): - """ - ph_encoding: [batch, t_p, hid] - ph2word: tensor [batch, t_w] - word_len: tensor [batch] - """ - word_encoding_for_graph, batch_word_encoding, has_word_row_idx = GraphAuxEnc._process_ph_to_word_encoding( - ph_encoding, - ph2word, - word_len) - # [batch, t_w, hid] - return batch_word_encoding, word_encoding_for_graph - - def pad_word_encoding_to_phoneme(self, word_encoding, ph2word, t_p): - return self._postprocess_word2ph(word_encoding, ph2word, t_p) - - @staticmethod - def _process_ph_to_word_encoding(ph_encoding, ph2word, word_len=None): - """ - ph_encoding: [batch, t_p, hid] - ph2word: tensor [batch, t_w] - word_len: tensor [batch] - """ - word_len = word_len.reshape([-1,]) - max_len = max(word_len) - num_nodes = sum(word_len) - - batch_word_encoding = group_hidden_by_segs(ph_encoding, ph2word, max_len) - bs, t_p, hid = batch_word_encoding.shape - has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1] - word_encoding = batch_word_encoding.reshape([bs * t_p, hid]) - has_word_row_idx = has_word_mask.reshape([-1]) - word_encoding = word_encoding[has_word_row_idx] - assert word_encoding.shape[0] == num_nodes - return word_encoding, batch_word_encoding, has_word_row_idx - - @staticmethod - def _postprocess_word2ph(word_encoding, ph2word, t_p): - word_encoding = F.pad(word_encoding,[0,0,1,0]) - ph2word_ = ph2word[:, :, None].repeat([1, 1, word_encoding.shape[-1]]) - out = torch.gather(word_encoding, 1, ph2word_) # [B, T, H] - return out - - @staticmethod - def _repeat_one_sequence(x, d, T): - """Repeat each frame according to duration.""" - if d.sum() == 0: - d = d.fill_(1) - hid = x.shape[-1] - expanded_lst = [x_.repeat(int(d_), 1) for x_, d_ in zip(x, d) if d_ != 0] - expanded = torch.cat(expanded_lst, dim=0) - if T > expanded.shape[0]: - expanded = torch.cat([expanded, torch.zeros([T - expanded.shape[0], hid]).to(expanded.device)], dim=0) - return expanded - - def word_forward(self, graph_lst, word_encoding, etypes_lst): - """ - word encoding in, word encoding out. 
- """ - batched_graph = dgl.batch(graph_lst) - inp = word_encoding - batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1] - assert batched_graph.num_nodes() == inp.shape[0] - - gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes) - if self.dropout_after_gae: - gcc1_out = self.dropout(gcc1_out) - gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin] - if self.dropout_after_gae: - gcc2_out = self.ggc_2(batched_graph, gcc2_out, batched_etypes) - if self.skip_connect: - assert self.in_dim == self.hid_dim and self.hid_dim == self.out_dim - gcc2_out = inp + gcc1_out + gcc2_out - - word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1]) - max_len = max(word_len) - has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1] - has_word_row_idx = has_word_mask.reshape([-1]) - bs = len(graph_lst) - t_w = max([g.num_nodes() for g in graph_lst]) - hid = word_encoding.shape[-1] - output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device) - output[has_word_row_idx] = gcc2_out - output = output.reshape([bs, t_w, hid]) - word_level_output = output - return torch.transpose(word_level_output, 1, 2) - - def forward(self, graph_lst, ph_encoding, ph2word, etypes_lst, return_word_encoding=False): - """ - graph_lst: [list of dgl_graph] - ph_encoding: [batch, hid, t_p] - ph2word: [list of list[1,2,2,2,3,3,3]] - etypes_lst: [list of etypes]; etypes: torch.LongTensor - """ - t_p = ph_encoding.shape[-1] - ph_encoding = ph_encoding.transpose(1,2) # [batch, t_p, hid] - word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1]) - batched_graph = dgl.batch(graph_lst) - inp, batched_word_encoding, has_word_row_idx = self._process_ph_to_word_encoding(ph_encoding, ph2word, - word_len=word_len) # [num_nodes_in_batch, in_dim] - bs, t_w, hid = batched_word_encoding.shape - batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1] - gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes) - gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin] - # skip connection - gcc2_out = inp + gcc1_out + gcc2_out # [n_nodes, hid] - - output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device) - output[has_word_row_idx] = gcc2_out - output = output.reshape([bs, t_w, hid]) - word_level_output = output - output = self._postprocess_word2ph(word_level_output, ph2word, t_p) # [batch, t_p, hid] - output = torch.transpose(output, 1, 2) - - if return_word_encoding: - return output, torch.transpose(word_level_output, 1, 2) - else: - return output - -if __name__ == '__main__': - # Unit Test for batching graphs - from modules.syntaspeech.syntactic_graph_buider import Sentence2GraphParser, plot_dgl_sentence_graph - parser = Sentence2GraphParser("en") - - # Unit Test for English Graph Builder - text1 = "To be or not to be , that 's a question ." - text2 = "I love you . You love me . Mixue ice-scream and tea ." 
- graph1, etypes1 = parser.parse(text1) - graph2, etypes2 = parser.parse(text2) - batched_text = " " + text1 + " " + " " + " " + text2 + " " - batched_nodes = [graph1.num_nodes(), graph2.num_nodes()] - plot_dgl_sentence_graph(dgl.batch([graph1, graph2]), {i: w for i, w in enumerate(batched_text.split(" "))}) - etypes_lst = [etypes1, etypes2] - - # Unit Test for Graph Encoder forward - in_feats = 4 - out_feats = 4 - enc = GraphAuxEnc(in_dim=in_feats, hid_dim=in_feats, out_dim=out_feats) - ph2word = torch.tensor([ - [1, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0], - [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] - ]) - inp = torch.randn([2, in_feats, 17]) # [N_sentence, feat, ph_length] - graph_lst = [graph1, graph2] - out = enc(graph_lst, inp, ph2word, etypes_lst) - print(out.shape) # [N_sentence, feat, ph_length] diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/models.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/models.py deleted file mode 100644 index 3016b9274aeb86091d30d980803c7106f15ddd54..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/models.py +++ /dev/null @@ -1,1288 +0,0 @@ -# !/usr/bin/env python -# -*- coding: utf-8 -*- -# @Time : 2021/3/9 16:33 -# @Author : dongchao yang -# @File : train.py -from itertools import zip_longest -import numpy as np -from scipy import ndimage -import torch -import torch.nn as nn -import torch.nn.functional as F -import time -from torchlibrosa.augmentation import SpecAugmentation -from torchlibrosa.stft import Spectrogram, LogmelFilterBank -import math -from sklearn.cluster import KMeans -import os -import time -from functools import partial -# import timm -# from timm.models.layers import DropPath, to_2tuple, trunc_normal_ -import warnings -from functools import partial -# from timm.models.registry import register_model -# from timm.models.vision_transformer import _cfg -# from mmdet.utils import get_root_logger -# from mmcv.runner import load_checkpoint -# from mmcv.runner import _load_checkpoint, load_state_dict -# import mmcv.runner -import copy -from collections import OrderedDict -import io -import re -DEBUG=0 -event_labels = ['Alarm', 'Alarm_clock', 'Animal', 'Applause', 'Arrow', 'Artillery_fire', - 'Babbling', 'Baby_laughter', 'Bark', 'Basketball_bounce', 'Battle_cry', - 'Bell', 'Bird', 'Bleat', 'Bouncing', 'Breathing', 'Buzz', 'Camera', - 'Cap_gun', 'Car', 'Car_alarm', 'Cat', 'Caw', 'Cheering', 'Child_singing', - 'Choir', 'Chop', 'Chopping_(food)', 'Clapping', 'Clickety-clack', 'Clicking', - 'Clip-clop', 'Cluck', 'Coin_(dropping)', 'Computer_keyboard', 'Conversation', - 'Coo', 'Cough', 'Cowbell', 'Creak', 'Cricket', 'Croak', 'Crow', 'Crowd', 'DTMF', - 'Dog', 'Door', 'Drill', 'Drip', 'Engine', 'Engine_starting', 'Explosion', 'Fart', - 'Female_singing', 'Filing_(rasp)', 'Finger_snapping', 'Fire', 'Fire_alarm', 'Firecracker', - 'Fireworks', 'Frog', 'Gasp', 'Gears', 'Giggle', 'Glass', 'Glass_shatter', 'Gobble', 'Groan', - 'Growling', 'Hammer', 'Hands', 'Hiccup', 'Honk', 'Hoot', 'Howl', 'Human_sounds', 'Human_voice', - 'Insect', 'Laughter', 'Liquid', 'Machine_gun', 'Male_singing', 'Mechanisms', 'Meow', 'Moo', - 'Motorcycle', 'Mouse', 'Music', 'Oink', 'Owl', 'Pant', 'Pant_(dog)', 'Patter', 'Pig', 'Plop', - 'Pour', 'Power_tool', 'Purr', 'Quack', 'Radio', 'Rain_on_surface', 'Rapping', 'Rattle', - 'Reversing_beeps', 'Ringtone', 'Roar', 'Run', 'Rustle', 'Scissors', 'Scrape', 'Scratch', - 'Screaming', 
'Sewing_machine', 'Shout', 'Shuffle', 'Shuffling_cards', 'Singing', - 'Single-lens_reflex_camera', 'Siren', 'Skateboard', 'Sniff', 'Snoring', 'Speech', - 'Speech_synthesizer', 'Spray', 'Squeak', 'Squeal', 'Steam', 'Stir', 'Surface_contact', - 'Tap', 'Tap_dance', 'Telephone_bell_ringing', 'Television', 'Tick', 'Tick-tock', 'Tools', - 'Train', 'Train_horn', 'Train_wheels_squealing', 'Truck', 'Turkey', 'Typewriter', 'Typing', - 'Vehicle', 'Video_game_sound', 'Water', 'Whimper_(dog)', 'Whip', 'Whispering', 'Whistle', - 'Whistling', 'Whoop', 'Wind', 'Writing', 'Yip', 'and_pans', 'bird_song', 'bleep', 'clink', - 'cock-a-doodle-doo', 'crinkling', 'dove', 'dribble', 'eructation', 'faucet', 'flapping_wings', - 'footsteps', 'gunfire', 'heartbeat', 'infant_cry', 'kid_speaking', 'man_speaking', 'mastication', - 'mice', 'river', 'rooster', 'silverware', 'skidding', 'smack', 'sobbing', 'speedboat', 'splatter', - 'surf', 'thud', 'thwack', 'toot', 'truck_horn', 'tweet', 'vroom', 'waterfowl', 'woman_speaking'] -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - - checkpoint = _load_checkpoint(filename, map_location, logger) - ''' - new_proj = torch.nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) - new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1)) - checkpoint['patch_embed1.proj.weight'] = new_proj.weight - new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=2).unsqueeze(2).repeat(1,1,3,1)) - checkpoint['patch_embed1.proj.weight'] = new_proj.weight - new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=3).unsqueeze(3).repeat(1,1,1,3)) - checkpoint['patch_embed1.proj.weight'] = new_proj.weight - ''' - new_proj = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2)) - new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1)) - checkpoint['patch_embed1.proj.weight'] = new_proj.weight - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - state_dict = OrderedDict({k.replace('backbone.',''):v for k,v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - -def init_weights(m): - if isinstance(m, (nn.Conv2d, nn.Conv1d)): - nn.init.kaiming_normal_(m.weight) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - if isinstance(m, nn.Linear): - nn.init.kaiming_uniform_(m.weight) - if m.bias is not None: - nn.init.constant_(m.bias, 0) -def init_layer(layer): - """Initialize a Linear or Convolutional layer. """ - nn.init.xavier_uniform_(layer.weight) - if hasattr(layer, 'bias'): - if layer.bias is not None: - layer.bias.data.fill_(0.) - - -def init_bn(bn): - """Initialize a Batchnorm layer. """ - bn.bias.data.fill_(0.) - bn.weight.data.fill_(1.) - -class MaxPool(nn.Module): - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, decision): - return torch.max(decision, dim=self.pooldim)[0] - - -class LinearSoftPool(nn.Module): - """LinearSoftPool - Linear softmax, takes logits and returns a probability, near to the actual maximum value. 
- Taken from the paper: - A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling - https://arxiv.org/abs/1810.09050 - """ - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, time_decision): - return (time_decision**2).sum(self.pooldim) / (time_decision.sum( - self.pooldim)+1e-7) - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.conv2 = nn.Conv2d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - self.bn2 = nn.BatchNorm2d(out_channels) - - self.init_weight() - - def init_weight(self): - init_layer(self.conv1) - init_layer(self.conv2) - init_bn(self.bn1) - init_bn(self.bn2) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - x = F.relu_(self.bn2(self.conv2(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - -class ConvBlock_GLU(nn.Module): - def __init__(self, in_channels, out_channels,kernel_size=(3,3)): - super(ConvBlock_GLU, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, stride=(1, 1), - padding=(1, 1), bias=False) - self.bn1 = nn.BatchNorm2d(out_channels) - self.sigmoid = nn.Sigmoid() - self.init_weight() - - def init_weight(self): - init_layer(self.conv1) - init_bn(self.bn1) - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - x = input - x = self.bn1(self.conv1(x)) - cnn1 = self.sigmoid(x[:, :x.shape[1]//2, :, :]) - cnn2 = x[:,x.shape[1]//2:,:,:] - x = cnn1*cnn2 - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - elif pool_type == 'None': - pass - elif pool_type == 'LP': - pass - #nn.LPPool2d(4, pool_size) - else: - raise Exception('Incorrect argument!') - return x - -class Mul_scale_GLU(nn.Module): - def __init__(self): - super(Mul_scale_GLU,self).__init__() - self.conv_block1_1 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(1,1)) # 1*1 - self.conv_block1_2 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(3,3)) # 3*3 - self.conv_block1_3 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(5,5)) # 5*5 - self.conv_block2 = ConvBlock_GLU(in_channels=96, out_channels=128*2) - # self.conv_block3 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock_GLU(in_channels=128, out_channels=128*2) - self.conv_block4 = ConvBlock_GLU(in_channels=128, out_channels=256*2) - self.conv_block5 = ConvBlock_GLU(in_channels=256, out_channels=256*2) - self.conv_block6 = ConvBlock_GLU(in_channels=256, out_channels=512*2) - self.conv_block7 = ConvBlock_GLU(in_channels=512, out_channels=512*2) - self.padding = nn.ReplicationPad2d((0,1,0,1)) 
- - def forward(self, input, fi=None): - """ - Input: (batch_size, data_length)""" - x1 = self.conv_block1_1(input, pool_size=(2, 2), pool_type='avg') - x1 = x1[:,:,:500,:32] - #print('x1 ',x1.shape) - x2 = self.conv_block1_2(input,pool_size=(2,2),pool_type='avg') - #print('x2 ',x2.shape) - x3 = self.conv_block1_3(input,pool_size=(2,2),pool_type='avg') - x3 = self.padding(x3) - #print('x3 ',x3.shape) - # assert 1==2 - x = torch.cat([x1,x2],dim=1) - x = torch.cat([x,x3],dim=1) - #print('x ',x.shape) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='None') - x = self.conv_block3(x,pool_size=(2,2),pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) # - #print('x2,3 ',x.shape) - x = self.conv_block4(x, pool_size=(2, 4), pool_type='None') - x = self.conv_block5(x,pool_size=(2,4),pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - #print('x4,5 ',x.shape) - - x = self.conv_block6(x, pool_size=(1, 4), pool_type='None') - x = self.conv_block7(x, pool_size=(1, 4), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - # print('x6,7 ',x.shape) - # assert 1==2 - return x - -class Cnn14(nn.Module): - def __init__(self, sample_rate=32000, window_size=1024, hop_size=320, mel_bins=64, fmin=50, - fmax=14000, classes_num=527): - - super(Cnn14, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024) - self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048) - - self.fc1 = nn.Linear(2048, 128, bias=True) - self.fc_audioset = nn.Linear(128, classes_num, bias=True) - - self.init_weight() - - def init_weight(self): - init_layer(self.fc1) - init_layer(self.fc_audioset) - - def forward(self, input_, mixup_lambda=None): - """ - Input: (batch_size, data_length)""" - input_ = input_.unsqueeze(1) - x = self.conv_block1(input_, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(1, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block5(x, pool_size=(1, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block6(x, pool_size=(1, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - # print(x.shape) - # x = torch.mean(x, dim=3) - x = x.transpose(1, 2).contiguous().flatten(-2) - x = self.fc1(x) - # print(x.shape) 
- # assert 1==2 - # (x1,_) = torch.max(x, dim=2) - # x2 = torch.mean(x, dim=2) - # x = x1 + x2 - # x = F.dropout(x, p=0.5, training=self.training) - # x = F.relu_(self.fc1(x)) - # embedding = F.dropout(x, p=0.5, training=self.training) - return x - -class Cnn10_fi(nn.Module): - def __init__(self): - super(Cnn10_fi, self).__init__() - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - - # self.fc1 = nn.Linear(512, 512, bias=True) - # self.fc_audioset = nn.Linear(512, classes_num, bias=True) - - # self.init_weight() - - def forward(self, input, fi=None): - """ - Input: (batch_size, data_length)""" - - x = self.conv_block1(input, pool_size=(2, 2), pool_type='avg') - if fi != None: - gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - x = (gamma)*x + beta - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - if fi != None: - gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - x = (gamma)*x + beta - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 4), pool_type='avg') - if fi != None: - gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - x = (gamma)*x + beta - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(1, 4), pool_type='avg') - if fi != None: - gamma = fi[:,0].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - beta = fi[:,1].unsqueeze(1).unsqueeze(2).unsqueeze(3).expand_as(x) - x = (gamma)*x + beta - x = F.dropout(x, p=0.2, training=self.training) - return x - -class Cnn10_mul_scale(nn.Module): - def __init__(self,scale=8): - super(Cnn10_mul_scale, self).__init__() - self.conv_block1_1 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(1,1)) - self.conv_block1_2 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(3,3)) - self.conv_block1_3 = ConvBlock_GLU(in_channels=1, out_channels=64,kernel_size=(5,5)) - self.conv_block2 = ConvBlock(in_channels=96, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.scale = scale - self.padding = nn.ReplicationPad2d((0,1,0,1)) - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - """ - Input: (batch_size, data_length)""" - if self.scale == 8: - pool_size1 = (2,2) - pool_size2 = (2,2) - pool_size3 = (2,4) - pool_size4 = (1,4) - elif self.scale == 4: - pool_size1 = (2,2) - pool_size2 = (2,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - elif self.scale == 2: - pool_size1 = (2,2) - pool_size2 = (1,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - else: - pool_size1 = (1,2) - pool_size2 = (1,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - # print('input ',input.shape) - x1 = self.conv_block1_1(input, pool_size=pool_size1, pool_type='avg') - x1 = x1[:,:,:500,:32] - #print('x1 ',x1.shape) - x2 = self.conv_block1_2(input, pool_size=pool_size1, pool_type='avg') - #print('x2 ',x2.shape) - x3 = self.conv_block1_3(input, pool_size=pool_size1, pool_type='avg') - x3 = self.padding(x3) - #print('x3 ',x3.shape) - # assert 1==2 - m_i = 
min(x3.shape[2],min(x1.shape[2],x2.shape[2])) - #print('m_i ', m_i) - x = torch.cat([x1[:,:,:m_i,:],x2[:,:, :m_i,:],x3[:,:, :m_i,:]],dim=1) - # x = torch.cat([x,x3],dim=1) - - # x = self.conv_block1(input, pool_size=pool_size1, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=pool_size2, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=pool_size3, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=pool_size4, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - return x - - -class Cnn10(nn.Module): - def __init__(self,scale=8): - super(Cnn10, self).__init__() - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.scale = scale - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - """ - Input: (batch_size, data_length)""" - if self.scale == 8: - pool_size1 = (2,2) - pool_size2 = (2,2) - pool_size3 = (2,4) - pool_size4 = (1,4) - elif self.scale == 4: - pool_size1 = (2,2) - pool_size2 = (2,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - elif self.scale == 2: - pool_size1 = (2,2) - pool_size2 = (1,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - else: - pool_size1 = (1,2) - pool_size2 = (1,2) - pool_size3 = (1,4) - pool_size4 = (1,4) - x = self.conv_block1(input, pool_size=pool_size1, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=pool_size2, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=pool_size3, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=pool_size4, pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - return x - -class MeanPool(nn.Module): - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, decision): - return torch.mean(decision, dim=self.pooldim) - -class ResPool(nn.Module): - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - self.linPool = LinearSoftPool(pooldim=1) - -class AutoExpPool(nn.Module): - def __init__(self, outputdim=10, pooldim=1): - super().__init__() - self.outputdim = outputdim - self.alpha = nn.Parameter(torch.full((outputdim, ), 1)) - self.pooldim = pooldim - - def forward(self, logits, decision): - scaled = self.alpha * decision # \alpha * P(Y|x) in the paper - return (logits * torch.exp(scaled)).sum( - self.pooldim) / torch.exp(scaled).sum(self.pooldim) - - -class SoftPool(nn.Module): - def __init__(self, T=1, pooldim=1): - super().__init__() - self.pooldim = pooldim - self.T = T - - def forward(self, logits, decision): - w = torch.softmax(decision / self.T, dim=self.pooldim) - return torch.sum(decision * w, dim=self.pooldim) - - -class AutoPool(nn.Module): - """docstring for AutoPool""" - def __init__(self, outputdim=10, pooldim=1): - super().__init__() - self.outputdim = outputdim - self.alpha = nn.Parameter(torch.ones(outputdim)) - self.dim = pooldim - - def forward(self, logits, decision): - scaled = self.alpha * decision # \alpha * P(Y|x) in the paper - weight = torch.softmax(scaled, dim=self.dim) - return torch.sum(decision * weight, dim=self.dim) # B x C - - -class ExtAttentionPool(nn.Module): - def 
__init__(self, inputdim, outputdim=10, pooldim=1, **kwargs): - super().__init__() - self.inputdim = inputdim - self.outputdim = outputdim - self.pooldim = pooldim - self.attention = nn.Linear(inputdim, outputdim) - nn.init.zeros_(self.attention.weight) - nn.init.zeros_(self.attention.bias) - self.activ = nn.Softmax(dim=self.pooldim) - - def forward(self, logits, decision): - # Logits of shape (B, T, D), decision of shape (B, T, C) - w_x = self.activ(self.attention(logits) / self.outputdim) - h = (logits.permute(0, 2, 1).contiguous().unsqueeze(-2) * - w_x.unsqueeze(-1)).flatten(-2).contiguous() - return torch.sum(h, self.pooldim) - - -class AttentionPool(nn.Module): - """docstring for AttentionPool""" - def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs): - super().__init__() - self.inputdim = inputdim - self.outputdim = outputdim - self.pooldim = pooldim - self.transform = nn.Linear(inputdim, outputdim) - self.activ = nn.Softmax(dim=self.pooldim) - self.eps = 1e-7 - - def forward(self, logits, decision): - # Input is (B, T, D) - # B, T , D - w = self.activ(torch.clamp(self.transform(logits), -15, 15)) - detect = (decision * w).sum( - self.pooldim) / (w.sum(self.pooldim) + self.eps) - # B, T, D - return detect - -class Block2D(nn.Module): - def __init__(self, cin, cout, kernel_size=3, padding=1): - super().__init__() - self.block = nn.Sequential( - nn.BatchNorm2d(cin), - nn.Conv2d(cin, - cout, - kernel_size=kernel_size, - padding=padding, - bias=False), - nn.LeakyReLU(inplace=True, negative_slope=0.1)) - - def forward(self, x): - return self.block(x) - -class AudioCNN(nn.Module): - def __init__(self, classes_num): - super(AudioCNN, self).__init__() - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.fc1 = nn.Linear(512,128,bias=True) - self.fc = nn.Linear(128, classes_num, bias=True) - self.init_weights() - - def init_weights(self): - init_layer(self.fc) - - def forward(self, input): - ''' - Input: (batch_size, times_steps, freq_bins)''' - # [128, 801, 168] --> [128,1,801,168] - x = input[:, None, :, :] - '''(batch_size, 1, times_steps, freq_bins)''' - x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') # 128,64,400,84 - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') # 128,128,200,42 - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') # 128,256,100,21 - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') # 128,512,50,10 - '''(batch_size, feature_maps, time_steps, freq_bins)''' - x = torch.mean(x, dim=3) # (batch_size, feature_maps, time_stpes) # 128,512,50 - (x, _) = torch.max(x, dim=2) # (batch_size, feature_maps) 128,512 - x = self.fc1(x) # 128,128 - output = self.fc(x) # 128,10 - return x,output - - def extract(self,input): - '''Input: (batch_size, times_steps, freq_bins)''' - x = input[:, None, :, :] - x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') - '''(batch_size, feature_maps, time_steps, freq_bins)''' - x = torch.mean(x, dim=3) # (batch_size, feature_maps, time_stpes) - (x, _) = torch.max(x, dim=2) # (batch_size, feature_maps) - x = self.fc1(x) # 128,128 - return x - -def parse_poolingfunction(poolingfunction_name='mean', **kwargs): - 
"""parse_poolingfunction - A heler function to parse any temporal pooling - Pooling is done on dimension 1 - :param poolingfunction_name: - :param **kwargs: - """ - poolingfunction_name = poolingfunction_name.lower() - if poolingfunction_name == 'mean': - return MeanPool(pooldim=1) - elif poolingfunction_name == 'max': - return MaxPool(pooldim=1) - elif poolingfunction_name == 'linear': - return LinearSoftPool(pooldim=1) - elif poolingfunction_name == 'expalpha': - return AutoExpPool(outputdim=kwargs['outputdim'], pooldim=1) - - elif poolingfunction_name == 'soft': - return SoftPool(pooldim=1) - elif poolingfunction_name == 'auto': - return AutoPool(outputdim=kwargs['outputdim']) - elif poolingfunction_name == 'attention': - return AttentionPool(inputdim=kwargs['inputdim'], - outputdim=kwargs['outputdim']) -class conv1d(nn.Module): - def __init__(self, nin, nout, kernel_size=3, stride=1, padding='VALID', dilation=1): - super(conv1d, self).__init__() - if padding == 'VALID': - dconv_pad = 0 - elif padding == 'SAME': - dconv_pad = dilation * ((kernel_size - 1) // 2) - else: - raise ValueError("Padding Mode Error!") - self.conv = nn.Conv1d(nin, nout, kernel_size=kernel_size, stride=stride, padding=dconv_pad) - self.act = nn.ReLU() - self.init_layer(self.conv) - - def init_layer(self, layer, nonlinearity='relu'): - """Initialize a Linear or Convolutional layer. """ - nn.init.kaiming_normal_(layer.weight, nonlinearity=nonlinearity) - nn.init.constant_(layer.bias, 0.1) - - def forward(self, x): - out = self.act(self.conv(x)) - return out - -class Atten_1(nn.Module): - def __init__(self, input_dim, context=2, dropout_rate=0.2): - super(Atten_1, self).__init__() - self._matrix_k = nn.Linear(input_dim, input_dim // 4) - self._matrix_q = nn.Linear(input_dim, input_dim // 4) - self.relu = nn.ReLU() - self.context = context - self._dropout_layer = nn.Dropout(dropout_rate) - self.init_layer(self._matrix_k) - self.init_layer(self._matrix_q) - - def init_layer(self, layer, nonlinearity='leaky_relu'): - """Initialize a Linear or Convolutional layer. """ - nn.init.kaiming_uniform_(layer.weight, nonlinearity=nonlinearity) - if hasattr(layer, 'bias'): - if layer.bias is not None: - layer.bias.data.fill_(0.) 
- - def forward(self, input_x): - k_x = input_x - k_x = self.relu(self._matrix_k(k_x)) - k_x = self._dropout_layer(k_x) - # print('k_x ',k_x.shape) - q_x = input_x[:, self.context, :] - # print('q_x ',q_x.shape) - q_x = q_x[:, None, :] - # print('q_x1 ',q_x.shape) - q_x = self.relu(self._matrix_q(q_x)) - q_x = self._dropout_layer(q_x) - # print('q_x2 ',q_x.shape) - x_ = torch.matmul(k_x, q_x.transpose(-2, -1) / math.sqrt(k_x.size(-1))) - # print('x_ ',x_.shape) - x_ = x_.squeeze(2) - alpha = F.softmax(x_, dim=-1) - att_ = alpha - # print('alpha ',alpha) - alpha = alpha.unsqueeze(2).repeat(1,1,input_x.shape[2]) - # print('alpha ',alpha) - # alpha = alpha.view(alpha.size(0), alpha.size(1), alpha.size(2), 1) - out = alpha * input_x - # print('out ', out.shape) - # out = out.mean(2) - out = out.mean(1) - # print('out ',out.shape) - # assert 1==2 - #y = alpha * input_x - #return y, att_ - out = input_x[:, self.context, :] + out - return out - -class Fusion(nn.Module): - def __init__(self, inputdim, inputdim2, n_fac): - super().__init__() - self.fuse_layer1 = conv1d(inputdim, inputdim2*n_fac,1) - self.fuse_layer2 = conv1d(inputdim2, inputdim2*n_fac,1) - self.avg_pool = nn.AvgPool1d(n_fac, stride=n_fac) # 沿着最后一个维度进行pooling - - def forward(self,embedding,mix_embed): - embedding = embedding.permute(0,2,1) - fuse1_out = self.fuse_layer1(embedding) # [2, 501, 2560] ,512*5, 1D卷积融合,spk_embeding ,扩大其维度 - fuse1_out = fuse1_out.permute(0,2,1) - - mix_embed = mix_embed.permute(0,2,1) - fuse2_out = self.fuse_layer2(mix_embed) # [2, 501, 2560] ,512*5, 1D卷积融合,spk_embeding ,扩大其维度 - fuse2_out = fuse2_out.permute(0,2,1) - as_embs = torch.mul(fuse1_out, fuse2_out) # 相乘 [2, 501, 2560] - # (10, 501, 512) - as_embs = self.avg_pool(as_embs) # [2, 501, 512] 相当于 2560//5 - return as_embs - -class CDur_fusion(nn.Module): - def __init__(self, inputdim, outputdim, **kwargs): - super().__init__() - self.features = nn.Sequential( - Block2D(1, 32), - nn.LPPool2d(4, (2, 4)), - Block2D(32, 128), - Block2D(128, 128), - nn.LPPool2d(4, (2, 4)), - Block2D(128, 128), - Block2D(128, 128), - nn.LPPool2d(4, (1, 4)), - nn.Dropout(0.3), - ) - with torch.no_grad(): - rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - - self.gru = nn.GRU(128, 128, bidirectional=True, batch_first=True) - self.fusion = Fusion(128,2) - self.fc = nn.Linear(256,256) - self.outputlayer = nn.Linear(256, outputdim) - self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding): # - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128) - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = self.fusion(embedding,x) - #x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur(nn.Module): - def __init__(self, inputdim, outputdim,time_resolution, **kwargs): - super().__init__() - self.features = nn.Sequential( - 
Block2D(1, 32), - nn.LPPool2d(4, (2, 4)), - Block2D(32, 128), - Block2D(128, 128), - nn.LPPool2d(4, (2, 4)), - Block2D(128, 128), - Block2D(128, 128), - nn.LPPool2d(4, (2, 4)), - nn.Dropout(0.3), - ) - with torch.no_grad(): - rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - - self.gru = nn.GRU(256, 256, bidirectional=True, batch_first=True) - self.fc = nn.Linear(512,256) - self.outputlayer = nn.Linear(256, outputdim) - self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding,one_hot=None): # - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128) - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur_big(nn.Module): - def __init__(self, inputdim, outputdim, **kwargs): - super().__init__() - self.features = nn.Sequential( - Block2D(1, 64), - Block2D(64, 64), - nn.LPPool2d(4, (2, 2)), - Block2D(64, 128), - Block2D(128, 128), - nn.LPPool2d(4, (2, 2)), - Block2D(128, 256), - Block2D(256, 256), - nn.LPPool2d(4, (2, 4)), - Block2D(256, 512), - Block2D(512, 512), - nn.LPPool2d(4, (1, 4)), - nn.Dropout(0.3),) - with torch.no_grad(): - rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True) - self.fc = nn.Linear(1024,256) - self.outputlayer = nn.Linear(256, outputdim) - self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding): # - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512) - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur_GLU(nn.Module): - def __init__(self, inputdim, outputdim, **kwargs): - super().__init__() - self.features = Mul_scale_GLU() - # with torch.no_grad(): - # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - self.gru = nn.GRU(640, 512,1, bidirectional=True, batch_first=True) # previous is 640 - # self.gru = LSTMModel(640, 512,1) - self.fc = nn.Linear(1024,256) - self.outputlayer = nn.Linear(256, outputdim) - # 
self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding,one_hot=None): # - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512) - # print('x ',x.shape) - # assert 1==2 - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - - x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - # x = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur_CNN14(nn.Module): - def __init__(self, inputdim, outputdim,time_resolution,**kwargs): - super().__init__() - if time_resolution==125: - self.features = Cnn10(8) - elif time_resolution == 250: - #print('time_resolution ',time_resolution) - self.features = Cnn10(4) - elif time_resolution == 500: - self.features = Cnn10(2) - else: - self.features = Cnn10(0) - with torch.no_grad(): - rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - # self.features = Cnn10() - self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True) - # self.gru = LSTMModel(640, 512,1) - self.fc = nn.Linear(1024,256) - self.outputlayer = nn.Linear(256, outputdim) - # self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding,one_hot=None): - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512) - # print('x ',x.shape) - # assert 1==2 - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - # x = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur_CNN_mul_scale(nn.Module): - def __init__(self, inputdim, outputdim,time_resolution,**kwargs): - super().__init__() - if time_resolution==125: - self.features = Cnn10_mul_scale(8) - elif time_resolution == 250: - #print('time_resolution ',time_resolution) - self.features = Cnn10_mul_scale(4) - elif time_resolution == 500: - self.features = Cnn10_mul_scale(2) - else: - self.features = Cnn10_mul_scale(0) - # with torch.no_grad(): - # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - # self.features = Cnn10() - self.gru = nn.GRU(640, 512, bidirectional=True, batch_first=True) - # self.gru = LSTMModel(640, 512,1) - self.fc = nn.Linear(1024,256) - self.outputlayer = nn.Linear(256, outputdim) - # 
self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding,one_hot=None): - # print('x ',x.shape) - # assert 1==2 - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512) - # print('x ',x.shape) - # assert 1==2 - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - # x = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - -class CDur_CNN_mul_scale_fusion(nn.Module): - def __init__(self, inputdim, outputdim, time_resolution,**kwargs): - super().__init__() - if time_resolution==125: - self.features = Cnn10_mul_scale(8) - elif time_resolution == 250: - #print('time_resolution ',time_resolution) - self.features = Cnn10_mul_scale(4) - elif time_resolution == 500: - self.features = Cnn10_mul_scale(2) - else: - self.features = Cnn10_mul_scale(0) - # with torch.no_grad(): - # rnn_input_dim = self.features(torch.randn(1, 1, 500,inputdim)).shape - # rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - # self.features = Cnn10() - self.gru = nn.GRU(512, 512, bidirectional=True, batch_first=True) - # self.gru = LSTMModel(640, 512,1) - self.fc = nn.Linear(1024,256) - self.fusion = Fusion(128,512,2) - self.outputlayer = nn.Linear(256, outputdim) - # self.features.apply(init_weights) - self.outputlayer.apply(init_weights) - - def forward(self, x, embedding,one_hot=None): - # print('x ',x.shape) - # assert 1==2 - batch, time, dim = x.shape - x = x.unsqueeze(1) # (b,1,t,d) - x = self.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,512) - # print('x ',x.shape) - # assert 1==2 - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - x = self.fusion(embedding, x) - #x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.gru.flatten_parameters() - x, _ = self.gru(x) # x torch.Size([16, 125, 256]) - # x = self.gru(x) # x torch.Size([16, 125, 256]) - x = self.fc(x) - decision_time = torch.softmax(self.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0],decision_up - - -class RaDur_fusion(nn.Module): - def __init__(self, model_config, inputdim, outputdim, time_resolution, **kwargs): - super().__init__() - self.encoder = Cnn14() - self.detection = CDur_CNN_mul_scale_fusion(inputdim, outputdim, time_resolution) - self.softmax = nn.Softmax(dim=2) - #self.temperature = 5 - # if model_config['pre_train']: - # self.encoder.load_state_dict(torch.load(model_config['encoder_path'])['model']) - # self.detection.load_state_dict(torch.load(model_config['CDur_path'])) - - self.q = nn.Linear(128,128) - self.k = nn.Linear(128,128) - self.q_ee = nn.Linear(128, 128) - 
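        # q/k above project the reference embeddings for attention pooling over the
        # reference clip, while q_ee (above) together with k_ee (next line) score the
        # top-k frames selected for the enhancement (EE) stage; both attentions divide
        # by a temperature of roughly sqrt(128) before the softmax.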
self.k_ee = nn.Linear(128, 128) - self.temperature = 11.3 # sqrt(128) - self.att_pool = model_config['att_pool'] - self.enhancement = model_config['enhancement'] - self.tao = model_config['tao'] - self.top = model_config['top'] - self.bn = nn.BatchNorm1d(128) - self.EE_fusion = Fusion(128, 128, 4) - - def get_w(self,q,k): - q = self.q(q) - k = self.k(k) - q = q.unsqueeze(1) - attn = torch.bmm(q, k.transpose(1, 2)) - attn = attn/self.temperature - attn = self.softmax(attn) - return attn - - def get_w_ee(self,q,k): - q = self.q_ee(q) - k = self.k_ee(k) - q = q.unsqueeze(1) - attn = torch.bmm(q, k.transpose(1, 2)) - attn = attn/self.temperature - attn = self.softmax(attn) - return attn - - def attention_pooling(self, embeddings, mean_embedding): - att_pool_w = self.get_w(mean_embedding,embeddings) - embedding = torch.bmm(att_pool_w, embeddings).squeeze(1) - # print(embedding.shape) - # print(att_pool_w.shape) - # print(att_pool_w[0]) - # assert 1==2 - return embedding - - def select_topk_embeddings(self, scores, embeddings, k): - _, idx_DESC = scores.sort(descending=True, dim=1) # 根据分数进行排序 - top_k = _[:,:k] - # print('top_k ', top_k) - # top_k = top_k.mean(1) - idx_topk = idx_DESC[:, :k] # 取top_k个 - # print('index ', idx_topk) - idx_topk = idx_topk.unsqueeze(2).expand([-1, -1, embeddings.shape[2]]) - selected_embeddings = torch.gather(embeddings, 1, idx_topk) - return selected_embeddings,top_k - - def sum_with_attention(self, embedding, top_k, selected_embeddings): - # print('embedding ',embedding) - # print('selected_embeddings ',selected_embeddings.shape) - att_1 = self.get_w_ee(embedding, selected_embeddings) - att_1 = att_1.squeeze(1) - #print('att_1 ',att_1.shape) - larger = top_k > self.tao - # print('larger ',larger) - top_k = top_k*larger - # print('top_k ',top_k.shape) - # print('top_k ',top_k) - att_1 = att_1*top_k - #print('att_1 ',att_1.shape) - # assert 1==2 - att_2 = att_1.unsqueeze(2).repeat(1,1,128) - Es = selected_embeddings*att_2 - return Es - - def orcal_EE(self, x, embedding, label): - batch, time, dim = x.shape - - mixture_embedding = self.encoder(x) # 8, 125, 128 - mixture_embedding = mixture_embedding.transpose(1,2) - mixture_embedding = self.bn(mixture_embedding) - mixture_embedding = mixture_embedding.transpose(1,2) - - x = x.unsqueeze(1) # (b,1,t,d) - x = self.detection.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128) - embedding_pre = embedding.unsqueeze(1) - embedding_pre = embedding_pre.repeat(1, x.shape[1], 1) - f = self.detection.fusion(embedding_pre, x) # the first stage results - #f = torch.cat((x, embedding_pre), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.detection.gru.flatten_parameters() - f, _ = self.detection.gru(f) # x torch.Size([16, 125, 256]) - f = self.detection.fc(f) - decision_time = torch.softmax(self.detection.outputlayer(f),dim=2) # x torch.Size([16, 125, 2]) - - selected_embeddings, top_k = self.select_topk_embeddings(decision_time[:,:,0], mixture_embedding, self.top) - - selected_embeddings = self.sum_with_attention(embedding, top_k, selected_embeddings) # add the weight - - mix_embedding = selected_embeddings.mean(1).unsqueeze(1) # - mix_embedding = mix_embedding.repeat(1, x.shape[1], 1) - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - mix_embedding = self.EE_fusion(mix_embedding, embedding) # 使用神经网络进行融合 - # mix_embedding2 = selected_embeddings2.mean(1) - #mix_embedding = embedding + mix_embedding # 直接相加 - # new 
detection results - # embedding_now = mix_embedding.unsqueeze(1) - # embedding_now = embedding_now.repeat(1, x.shape[1], 1) - f_now = self.detection.fusion(mix_embedding, x) - #f_now = torch.cat((x, embedding_now), dim=2) # - f_now, _ = self.detection.gru(f_now) # x torch.Size([16, 125, 256]) - f_now = self.detection.fc(f_now) - decision_time_now = torch.softmax(self.detection.outputlayer(f_now), dim=2) # x torch.Size([16, 125, 2]) - - top_k = top_k.mean(1) # get avg score,higher score will have more weight - larger = top_k > self.tao - top_k = top_k * larger - top_k = top_k/2.0 - # print('top_k ',top_k) - # assert 1==2 - # print('tok_k[ ',top_k.shape) - # print('decision_time ',decision_time.shape) - # print('decision_time_now ',decision_time_now.shape) - neg_w = top_k.unsqueeze(1).unsqueeze(2) - neg_w = neg_w.repeat(1, decision_time_now.shape[1], decision_time_now.shape[2]) - # print('neg_w ',neg_w.shape) - #print('neg_w ',neg_w[:,0:10,0]) - pos_w = 1-neg_w - #print('pos_w ',pos_w[:,0:10,0]) - decision_time_final = decision_time*pos_w + neg_w*decision_time_now - #print('decision_time_final ',decision_time_final[0,0:10,0]) - # print(decision_time_final[0,:,:]) - #assert 1==2 - return decision_time_final - - def forward(self, x, ref, label=None): - batch, time, dim = x.shape - logit = torch.zeros(1).cuda() - embeddings = self.encoder(ref) - mean_embedding = embeddings.mean(1) - if self.att_pool == True: - mean_embedding = self.bn(mean_embedding) - embeddings = embeddings.transpose(1,2) - embeddings = self.bn(embeddings) - embeddings = embeddings.transpose(1,2) - embedding = self.attention_pooling(embeddings, mean_embedding) - else: - embedding = mean_embedding - if self.enhancement == True: - decision_time = self.orcal_EE(x, embedding, label) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), # [16, 2, 125] - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0], decision_up, logit - - x = x.unsqueeze(1) # (b,1,t,d) - x = self.detection.features(x) # - x = x.transpose(1, 2).contiguous().flatten(-2) # 重新拷贝一份x,之后推平-2:-1之间的维度 # (b,125,128) - embedding = embedding.unsqueeze(1) - embedding = embedding.repeat(1, x.shape[1], 1) - # x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - x = self.detection.fusion(embedding, x) - # embedding = embedding.unsqueeze(1) - # embedding = embedding.repeat(1, x.shape[1], 1) - # x = torch.cat((x, embedding), dim=2) # [B, T, 128 + emb_dim] - if not hasattr(self, '_flattened'): - self.detection.gru.flatten_parameters() - x, _ = self.detection.gru(x) # x torch.Size([16, 125, 256]) - x = self.detection.fc(x) - decision_time = torch.softmax(self.detection.outputlayer(x),dim=2) # x torch.Size([16, 125, 2]) - decision_up = torch.nn.functional.interpolate( - decision_time.transpose(1, 2), - time, # 501 - mode='linear', - align_corners=False).transpose(1, 2) # 从125插值回 501 ?--> (16,501,2) - return decision_time[:,:,0], decision_up, logit \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/audio_foundation_models.py b/spaces/AIGC-Audio/AudioGPT/audio_foundation_models.py deleted file mode 100644 index 38172dc802f121fab4d166ab5d46c37b2d48cc10..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_foundation_models.py +++ /dev/null @@ -1,1033 +0,0 @@ -import sys -import os - -sys.path.append(os.path.dirname(os.path.realpath(__file__))) -sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))) 
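# The bundled sub-projects (NeuralSeq, text_to_audio/Make_An_Audio, audio_detection,
# mono2binaural, ...) are not installed as packages, so their directories are appended
# to sys.path here so that the imports further down resolve.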
-sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'NeuralSeq')) -sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'text_to_audio/Make_An_Audio')) -sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'audio_detection')) -sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'mono2binaural')) -import matplotlib -import librosa -from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation -import torch -from diffusers import StableDiffusionPipeline -from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler -import re -import uuid -import soundfile -from diffusers import StableDiffusionInpaintPipeline -from PIL import Image -import numpy as np -from omegaconf import OmegaConf -from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering -import cv2 -import einops -from einops import repeat -from pytorch_lightning import seed_everything -import random -from ldm.util import instantiate_from_config -from ldm.data.extract_mel_spectrogram import TRANSFORMS_16000 -from pathlib import Path -from vocoder.hifigan.modules import VocoderHifigan -from vocoder.bigvgan.models import VocoderBigVGAN -from ldm.models.diffusion.ddim import DDIMSampler -from wav_evaluation.models.CLAPWrapper import CLAPWrapper -from inference.svs.ds_e2e import DiffSingerE2EInfer -from audio_to_text.inference_waveform import AudioCapModel -import whisper -from text_to_speech.TTS_binding import TTSInference -from inference.svs.ds_e2e import DiffSingerE2EInfer -from inference.tts.GenerSpeech import GenerSpeechInfer -from utils.hparams import set_hparams -from utils.hparams import hparams as hp -from utils.os_utils import move_file -import scipy.io.wavfile as wavfile -from audio_infer.utils import config as detection_config -from audio_infer.pytorch.models import PVT -from src.models import BinauralNetwork -from sound_extraction.model.LASSNet import LASSNet -from sound_extraction.utils.stft import STFT -from sound_extraction.utils.wav_io import load_wav, save_wav -from target_sound_detection.src import models as tsd_models -from target_sound_detection.src.models import event_labels -from target_sound_detection.src.utils import median_filter, decode_with_timestamps -import clip - - -def prompts(name, description): - def decorator(func): - func.name = name - func.description = description - return func - - return decorator - - -def initialize_model(config, ckpt, device): - config = OmegaConf.load(config) - model = instantiate_from_config(config.model) - model.load_state_dict(torch.load(ckpt, map_location='cpu')["state_dict"], strict=False) - - model = model.to(device) - model.cond_stage_model.to(model.device) - model.cond_stage_model.device = model.device - sampler = DDIMSampler(model) - return sampler - - -def initialize_model_inpaint(config, ckpt): - config = OmegaConf.load(config) - model = instantiate_from_config(config.model) - model.load_state_dict(torch.load(ckpt, map_location='cpu')["state_dict"], strict=False) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model = model.to(device) - print(model.device, device, model.cond_stage_model.device) - sampler = DDIMSampler(model) - return sampler - - -def select_best_audio(prompt, wav_list): - clap_model = CLAPWrapper('text_to_audio/Make_An_Audio/useful_ckpts/CLAP/CLAP_weights_2022.pth', - 
'text_to_audio/Make_An_Audio/useful_ckpts/CLAP/config.yml', - use_cuda=torch.cuda.is_available()) - text_embeddings = clap_model.get_text_embeddings([prompt]) - score_list = [] - for data in wav_list: - sr, wav = data - audio_embeddings = clap_model.get_audio_embeddings([(torch.FloatTensor(wav), sr)], resample=True) - score = clap_model.compute_similarity(audio_embeddings, text_embeddings, - use_logit_scale=False).squeeze().cpu().numpy() - score_list.append(score) - max_index = np.array(score_list).argmax() - print(score_list, max_index) - return wav_list[max_index] - - -def merge_audio(audio_path_1, audio_path_2): - merged_signal = [] - sr_1, signal_1 = wavfile.read(audio_path_1) - sr_2, signal_2 = wavfile.read(audio_path_2) - merged_signal.append(signal_1) - merged_signal.append(signal_2) - merged_signal = np.hstack(merged_signal) - merged_signal = np.asarray(merged_signal, dtype=np.int16) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - wavfile.write(audio_filename, sr_1, merged_signal) - return audio_filename - - -class T2I: - def __init__(self, device): - print("Initializing T2I to %s" % device) - self.device = device - self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) - self.text_refine_tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion") - self.text_refine_model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion") - self.text_refine_gpt2_pipe = pipeline("text-generation", model=self.text_refine_model, - tokenizer=self.text_refine_tokenizer, device=self.device) - self.pipe.to(device) - - @prompts(name="Generate Image From User Input Text", - description="useful when you want to generate an image from a user input text and save it to a file. " - "like: generate an image of an object or something, or generate an image that includes some objects. " - "The input to this tool should be a string, representing the text used to generate image. ") - def inference(self, text): - image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png") - refined_text = self.text_refine_gpt2_pipe(text)[0]["generated_text"] - print(f'{text} refined to {refined_text}') - image = self.pipe(refined_text).images[0] - image.save(image_filename) - print(f"Processed T2I.run, text: {text}, image_filename: {image_filename}") - return image_filename - - -class ImageCaptioning: - def __init__(self, device): - print("Initializing ImageCaptioning to %s" % device) - self.device = device - self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - self.model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to( - self.device) - - @prompts(name="Remove Something From The Photo", - description="useful when you want to remove and object or something from the photo " - "from its description or location. " - "The input to this tool should be a comma separated string of two, " - "representing the image_path and the object need to be removed. 
") - def inference(self, image_path): - inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device) - out = self.model.generate(**inputs) - captions = self.processor.decode(out[0], skip_special_tokens=True) - return captions - - -class T2A: - def __init__(self, device): - print("Initializing Make-An-Audio to %s" % device) - self.device = device - self.sampler = initialize_model('text_to_audio/Make_An_Audio/configs/text-to-audio/txt2audio_args.yaml', - 'text_to_audio/Make_An_Audio/useful_ckpts/ta40multi_epoch=000085.ckpt', - device=device) - self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device) - - def txt2audio(self, text, seed=55, scale=1.5, ddim_steps=100, n_samples=3, W=624, H=80): - SAMPLE_RATE = 16000 - prng = np.random.RandomState(seed) - start_code = prng.randn(n_samples, self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8) - start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32) - uc = self.sampler.model.get_learned_conditioning(n_samples * [""]) - c = self.sampler.model.get_learned_conditioning(n_samples * [text]) - shape = [self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8] # (z_dim, 80//2^x, 848//2^x) - samples_ddim, _ = self.sampler.sample(S=ddim_steps, - conditioning=c, - batch_size=n_samples, - shape=shape, - verbose=False, - unconditional_guidance_scale=scale, - unconditional_conditioning=uc, - x_T=start_code) - - x_samples_ddim = self.sampler.model.decode_first_stage(samples_ddim) - x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) # [0, 1] - - wav_list = [] - for idx, spec in enumerate(x_samples_ddim): - wav = self.vocoder.vocode(spec) - wav_list.append((SAMPLE_RATE, wav)) - best_wav = select_best_audio(text, wav_list) - return best_wav - - @prompts(name="Generate Audio From User Input Text", - description="useful for when you want to generate an audio " - "from a user input text and it saved it to a file." 
- "The input to this tool should be a string, " - "representing the text used to generate audio.") - def inference(self, text, seed=55, scale=1.5, ddim_steps=100, n_samples=3, W=624, H=80): - melbins, mel_len = 80, 624 - with torch.no_grad(): - result = self.txt2audio( - text=text, - H=melbins, - W=mel_len - ) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, result[1], samplerate=16000) - print(f"Processed T2I.run, text: {text}, audio_filename: {audio_filename}") - return audio_filename - - -class I2A: - def __init__(self, device): - print("Initializing Make-An-Audio-Image to %s" % device) - self.device = device - self.sampler = initialize_model('text_to_audio/Make_An_Audio/configs/img_to_audio/img2audio_args.yaml', - 'text_to_audio/Make_An_Audio/useful_ckpts/ta54_epoch=000216.ckpt', - device=device) - self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device) - - def img2audio(self, image, seed=55, scale=3, ddim_steps=100, W=624, H=80): - SAMPLE_RATE = 16000 - n_samples = 1 # only support 1 sample - prng = np.random.RandomState(seed) - start_code = prng.randn(n_samples, self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8) - start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32) - uc = self.sampler.model.get_learned_conditioning(n_samples * [""]) - # image = Image.fromarray(image) - image = Image.open(image) - image = self.sampler.model.cond_stage_model.preprocess(image).unsqueeze(0) - image_embedding = self.sampler.model.cond_stage_model.forward_img(image) - c = image_embedding.repeat(n_samples, 1, 1) - shape = [self.sampler.model.first_stage_model.embed_dim, H // 8, W // 8] # (z_dim, 80//2^x, 848//2^x) - samples_ddim, _ = self.sampler.sample(S=ddim_steps, - conditioning=c, - batch_size=n_samples, - shape=shape, - verbose=False, - unconditional_guidance_scale=scale, - unconditional_conditioning=uc, - x_T=start_code) - - x_samples_ddim = self.sampler.model.decode_first_stage(samples_ddim) - x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) # [0, 1] - wav_list = [] - for idx, spec in enumerate(x_samples_ddim): - wav = self.vocoder.vocode(spec) - wav_list.append((SAMPLE_RATE, wav)) - best_wav = wav_list[0] - return best_wav - - @prompts(name="Generate Audio From The Image", - description="useful for when you want to generate an audio " - "based on an image. " - "The input to this tool should be a string, " - "representing the image_path. ") - def inference(self, image, seed=55, scale=3, ddim_steps=100, W=624, H=80): - melbins, mel_len = 80, 624 - with torch.no_grad(): - result = self.img2audio( - image=image, - H=melbins, - W=mel_len - ) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, result[1], samplerate=16000) - print(f"Processed I2a.run, image_filename: {image}, audio_filename: {audio_filename}") - return audio_filename - - -class TTS: - def __init__(self, device=None): - self.model = TTSInference(device) - - @prompts(name="Synthesize Speech Given the User Input Text", - description="useful for when you want to convert a user input text into speech audio it saved it to a file." 
- "The input to this tool should be a string, " - "representing the text used to be converted to speech.") - def inference(self, text): - inp = {"text": text} - out = self.model.infer_once(inp) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, out, samplerate=22050) - return audio_filename - - -class T2S: - def __init__(self, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - print("Initializing DiffSinger to %s" % device) - self.device = device - self.exp_name = 'checkpoints/0831_opencpop_ds1000' - self.config = 'NeuralSeq/egs/egs_bases/svs/midi/e2e/opencpop/ds1000.yaml' - self.set_model_hparams() - self.pipe = DiffSingerE2EInfer(self.hp, device) - self.default_inp = { - 'text': '你 说 你 不 SP 懂 为 何 在 这 时 牵 手 AP', - 'notes': 'D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | rest | D#4/Eb4 | D4 | D4 | D4 | D#4/Eb4 | F4 | D#4/Eb4 | D4 | rest', - 'notes_duration': '0.113740 | 0.329060 | 0.287950 | 0.133480 | 0.150900 | 0.484730 | 0.242010 | 0.180820 | 0.343570 | 0.152050 | 0.266720 | 0.280310 | 0.633300 | 0.444590' - } - - def set_model_hparams(self): - set_hparams(config=self.config, exp_name=self.exp_name, print_hparams=False) - self.hp = hp - - @prompts(name="Generate Singing Voice From User Input Text, Note and Duration Sequence", - description="useful for when you want to generate a piece of singing voice (Optional: from User Input Text, Note and Duration Sequence) " - "and save it to a file." - "If Like: Generate a piece of singing voice, the input to this tool should be \"\" since there is no User Input Text, Note and Duration Sequence. " - "If Like: Generate a piece of singing voice. Text: xxx, Note: xxx, Duration: xxx. " - "Or Like: Generate a piece of singing voice. Text is xxx, note is xxx, duration is xxx." - "The input to this tool should be a comma seperated string of three, " - "representing text, note and duration sequence since User Input Text, Note and Duration Sequence are all provided. ") - def inference(self, inputs): - self.set_model_hparams() - val = inputs.split(",") - key = ['text', 'notes', 'notes_duration'] - try: - inp = {k: v for k, v in zip(key, val)} - wav = self.pipe.infer_once(inp) - except: - print('Error occurs. 
Generate default audio sample.\n') - inp = self.default_inp - wav = self.pipe.infer_once(inp) - wav *= 32767 - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - wavfile.write(audio_filename, self.hp['audio_sample_rate'], wav.astype(np.int16)) - print(f"Processed T2S.run, audio_filename: {audio_filename}") - return audio_filename - - -class TTS_OOD: - def __init__(self, device): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - print("Initializing GenerSpeech to %s" % device) - self.device = device - self.exp_name = 'checkpoints/GenerSpeech' - self.config = 'NeuralSeq/modules/GenerSpeech/config/generspeech.yaml' - self.set_model_hparams() - self.pipe = GenerSpeechInfer(self.hp, device) - - def set_model_hparams(self): - set_hparams(config=self.config, exp_name=self.exp_name, print_hparams=False) - f0_stats_fn = f'{hp["binary_data_dir"]}/train_f0s_mean_std.npy' - if os.path.exists(f0_stats_fn): - hp['f0_mean'], hp['f0_std'] = np.load(f0_stats_fn) - hp['f0_mean'] = float(hp['f0_mean']) - hp['f0_std'] = float(hp['f0_std']) - hp['emotion_encoder_path'] = 'checkpoints/Emotion_encoder.pt' - self.hp = hp - - @prompts(name="Style Transfer", - description="useful for when you want to generate speech samples with styles " - "(e.g., timbre, emotion, and prosody) derived from a reference custom voice. " - "Like: Generate a speech with style transferred from this voice. The text is xxx., or speak using the voice of this audio. The text is xxx." - "The input to this tool should be a comma seperated string of two, " - "representing reference audio path and input text. ") - def inference(self, inputs): - self.set_model_hparams() - key = ['ref_audio', 'text'] - val = inputs.split(",") - inp = {k: v for k, v in zip(key, val)} - wav = self.pipe.infer_once(inp) - wav *= 32767 - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - wavfile.write(audio_filename, self.hp['audio_sample_rate'], wav.astype(np.int16)) - print( - f"Processed GenerSpeech.run. Input text:{val[1]}. Input reference audio: {val[0]}. Output Audio_filename: {audio_filename}") - return audio_filename - - -class Inpaint: - def __init__(self, device): - print("Initializing Make-An-Audio-inpaint to %s" % device) - self.device = device - self.sampler = initialize_model_inpaint('text_to_audio/Make_An_Audio/configs/inpaint/txt2audio_args.yaml', - 'text_to_audio/Make_An_Audio/useful_ckpts/inpaint7_epoch00047.ckpt') - self.vocoder = VocoderBigVGAN('text_to_audio/Make_An_Audio/vocoder/logs/bigv16k53w', device=device) - self.cmap_transform = matplotlib.cm.viridis - - def make_batch_sd(self, mel, mask, num_samples=1): - - mel = torch.from_numpy(mel)[None, None, ...].to(dtype=torch.float32) - mask = torch.from_numpy(mask)[None, None, ...].to(dtype=torch.float32) - masked_mel = (1 - mask) * mel - - mel = mel * 2 - 1 - mask = mask * 2 - 1 - masked_mel = masked_mel * 2 - 1 - - batch = { - "mel": repeat(mel.to(device=self.device), "1 ... -> n ...", n=num_samples), - "mask": repeat(mask.to(device=self.device), "1 ... -> n ...", n=num_samples), - "masked_mel": repeat(masked_mel.to(device=self.device), "1 ... 
-> n ...", n=num_samples), - } - return batch - - def gen_mel(self, input_audio_path): - SAMPLE_RATE = 16000 - sr, ori_wav = wavfile.read(input_audio_path) - print("gen_mel") - print(sr, ori_wav.shape, ori_wav) - ori_wav = ori_wav.astype(np.float32, order='C') / 32768.0 - if len(ori_wav.shape) == 2: # stereo - ori_wav = librosa.to_mono( - ori_wav.T) # gradio load wav shape could be (wav_len,2) but librosa expects (2,wav_len) - print(sr, ori_wav.shape, ori_wav) - ori_wav = librosa.resample(ori_wav, orig_sr=sr, target_sr=SAMPLE_RATE) - - mel_len, hop_size = 848, 256 - input_len = mel_len * hop_size - if len(ori_wav) < input_len: - input_wav = np.pad(ori_wav, (0, mel_len * hop_size), constant_values=0) - else: - input_wav = ori_wav[:input_len] - - mel = TRANSFORMS_16000(input_wav) - return mel - - def gen_mel_audio(self, input_audio): - SAMPLE_RATE = 16000 - sr, ori_wav = input_audio - print("gen_mel_audio") - print(sr, ori_wav.shape, ori_wav) - - ori_wav = ori_wav.astype(np.float32, order='C') / 32768.0 - if len(ori_wav.shape) == 2: # stereo - ori_wav = librosa.to_mono( - ori_wav.T) # gradio load wav shape could be (wav_len,2) but librosa expects (2,wav_len) - print(sr, ori_wav.shape, ori_wav) - ori_wav = librosa.resample(ori_wav, orig_sr=sr, target_sr=SAMPLE_RATE) - - mel_len, hop_size = 848, 256 - input_len = mel_len * hop_size - if len(ori_wav) < input_len: - input_wav = np.pad(ori_wav, (0, mel_len * hop_size), constant_values=0) - else: - input_wav = ori_wav[:input_len] - mel = TRANSFORMS_16000(input_wav) - return mel - - def inpaint(self, batch, seed, ddim_steps, num_samples=1, W=512, H=512): - model = self.sampler.model - - prng = np.random.RandomState(seed) - start_code = prng.randn(num_samples, model.first_stage_model.embed_dim, H // 8, W // 8) - start_code = torch.from_numpy(start_code).to(device=self.device, dtype=torch.float32) - - c = model.get_first_stage_encoding(model.encode_first_stage(batch["masked_mel"])) - cc = torch.nn.functional.interpolate(batch["mask"], - size=c.shape[-2:]) - c = torch.cat((c, cc), dim=1) # (b,c+1,h,w) 1 is mask - - shape = (c.shape[1] - 1,) + c.shape[2:] - samples_ddim, _ = self.sampler.sample(S=ddim_steps, - conditioning=c, - batch_size=c.shape[0], - shape=shape, - verbose=False) - x_samples_ddim = model.decode_first_stage(samples_ddim) - - mask = batch["mask"] # [-1,1] - mel = torch.clamp((batch["mel"] + 1.0) / 2.0, min=0.0, max=1.0) - mask = torch.clamp((batch["mask"] + 1.0) / 2.0, min=0.0, max=1.0) - predicted_mel = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) - inpainted = (1 - mask) * mel + mask * predicted_mel - inpainted = inpainted.cpu().numpy().squeeze() - inapint_wav = self.vocoder.vocode(inpainted) - - return inpainted, inapint_wav - - def predict(self, input_audio, mel_and_mask, seed=55, ddim_steps=100): - SAMPLE_RATE = 16000 - torch.set_grad_enabled(False) - mel_img = Image.open(mel_and_mask['image']) - mask_img = Image.open(mel_and_mask["mask"]) - show_mel = np.array(mel_img.convert("L")) / 255 - mask = np.array(mask_img.convert("L")) / 255 - mel_bins, mel_len = 80, 848 - input_mel = self.gen_mel_audio(input_audio)[:, :mel_len] - mask = np.pad(mask, ((0, 0), (0, mel_len - mask.shape[1])), mode='constant', constant_values=0) - print(mask.shape, input_mel.shape) - with torch.no_grad(): - batch = self.make_batch_sd(input_mel, mask, num_samples=1) - inpainted, gen_wav = self.inpaint( - batch=batch, - seed=seed, - ddim_steps=ddim_steps, - num_samples=1, - H=mel_bins, W=mel_len - ) - inpainted = inpainted[:, :show_mel.shape[1]] 
- color_mel = self.cmap_transform(inpainted) - input_len = int(input_audio[1].shape[0] * SAMPLE_RATE / input_audio[0]) - gen_wav = (gen_wav * 32768).astype(np.int16)[:input_len] - image = Image.fromarray((color_mel * 255).astype(np.uint8)) - image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png") - image.save(image_filename) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, gen_wav, samplerate=16000) - return image_filename, audio_filename - - @prompts(name="Audio Inpainting", - description="useful for when you want to inpaint a mel spectrum of an audio and predict this audio, " - "this tool will generate a mel spectrum and you can inpaint it, receives audio_path as input. " - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, input_audio_path): - crop_len = 500 - crop_mel = self.gen_mel(input_audio_path)[:, :crop_len] - color_mel = self.cmap_transform(crop_mel) - image = Image.fromarray((color_mel * 255).astype(np.uint8)) - image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png") - image.save(image_filename) - return image_filename - - -class ASR: - def __init__(self, device): - print("Initializing Whisper to %s" % device) - self.device = device - self.model = whisper.load_model("base", device=device) - - @prompts(name="Transcribe speech", - description="useful for when you want to know the text corresponding to a human speech, " - "receives audio_path as input. " - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, audio_path): - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - mel = whisper.log_mel_spectrogram(audio).to(self.device) - _, probs = self.model.detect_language(mel) - options = whisper.DecodingOptions() - result = whisper.decode(self.model, mel, options) - return result.text - - def translate_english(self, audio_path): - audio = self.model.transcribe(audio_path, language='English') - return audio['text'] - - -class A2T: - def __init__(self, device): - print("Initializing Audio-To-Text Model to %s" % device) - self.device = device - self.model = AudioCapModel("audio_to_text/audiocaps_cntrstv_cnn14rnn_trm") - - @prompts(name="Generate Text From The Audio", - description="useful for when you want to describe an audio in text, " - "receives audio_path as input. " - "The input to this tool should be a string, " - "representing the audio_path. 
") - def inference(self, audio_path): - audio = whisper.load_audio(audio_path) - caption_text = self.model(audio) - return caption_text[0] - - -class SoundDetection: - def __init__(self, device): - self.device = device - self.sample_rate = 32000 - self.window_size = 1024 - self.hop_size = 320 - self.mel_bins = 64 - self.fmin = 50 - self.fmax = 14000 - self.model_type = 'PVT' - self.checkpoint_path = 'audio_detection/audio_infer/useful_ckpts/audio_detection.pth' - self.classes_num = detection_config.classes_num - self.labels = detection_config.labels - self.frames_per_second = self.sample_rate // self.hop_size - # Model = eval(self.model_type) - self.model = PVT(sample_rate=self.sample_rate, window_size=self.window_size, - hop_size=self.hop_size, mel_bins=self.mel_bins, fmin=self.fmin, fmax=self.fmax, - classes_num=self.classes_num) - checkpoint = torch.load(self.checkpoint_path, map_location=self.device) - self.model.load_state_dict(checkpoint['model']) - self.model.to(device) - - @prompts(name="Detect The Sound Event From The Audio", - description="useful for when you want to know what event in the audio and the sound event start or end time, it will return an image " - "receives audio_path as input. " - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, audio_path): - # Forward - (waveform, _) = librosa.core.load(audio_path, sr=self.sample_rate, mono=True) - waveform = waveform[None, :] # (1, audio_length) - waveform = torch.from_numpy(waveform) - waveform = waveform.to(self.device) - # Forward - with torch.no_grad(): - self.model.eval() - batch_output_dict = self.model(waveform, None) - framewise_output = batch_output_dict['framewise_output'].data.cpu().numpy()[0] - """(time_steps, classes_num)""" - # print('Sound event detection result (time_steps x classes_num): {}'.format( - # framewise_output.shape)) - import numpy as np - import matplotlib.pyplot as plt - sorted_indexes = np.argsort(np.max(framewise_output, axis=0))[::-1] - top_k = 10 # Show top results - top_result_mat = framewise_output[:, sorted_indexes[0: top_k]] - """(time_steps, top_k)""" - # Plot result - stft = librosa.core.stft(y=waveform[0].data.cpu().numpy(), n_fft=self.window_size, - hop_length=self.hop_size, window='hann', center=True) - frames_num = stft.shape[-1] - fig, axs = plt.subplots(2, 1, sharex=True, figsize=(10, 4)) - axs[0].matshow(np.log(np.abs(stft)), origin='lower', aspect='auto', cmap='jet') - axs[0].set_ylabel('Frequency bins') - axs[0].set_title('Log spectrogram') - axs[1].matshow(top_result_mat.T, origin='upper', aspect='auto', cmap='jet', vmin=0, vmax=1) - axs[1].xaxis.set_ticks(np.arange(0, frames_num, self.frames_per_second)) - axs[1].xaxis.set_ticklabels(np.arange(0, frames_num / self.frames_per_second)) - axs[1].yaxis.set_ticks(np.arange(0, top_k)) - axs[1].yaxis.set_ticklabels(np.array(self.labels)[sorted_indexes[0: top_k]]) - axs[1].yaxis.grid(color='k', linestyle='solid', linewidth=0.3, alpha=0.3) - axs[1].set_xlabel('Seconds') - axs[1].xaxis.set_ticks_position('bottom') - plt.tight_layout() - image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png") - plt.savefig(image_filename) - return image_filename - - -class SoundExtraction: - def __init__(self, device): - self.device = device - self.model_file = 'sound_extraction/useful_ckpts/LASSNet.pt' - self.stft = STFT() - import torch.nn as nn - self.model = nn.DataParallel(LASSNet(device)).to(device) - checkpoint = torch.load(self.model_file) - 
self.model.load_state_dict(checkpoint['model']) - self.model.eval() - - @prompts(name="Extract Sound Event From Mixture Audio Based On Language Description", - description="useful for when you extract target sound from a mixture audio, you can describe the target sound by text, " - "receives audio_path and text as input. " - "The input to this tool should be a comma seperated string of two, " - "representing mixture audio path and input text.") - def inference(self, inputs): - # key = ['ref_audio', 'text'] - val = inputs.split(",") - audio_path = val[0] # audio_path, text - text = val[1] - waveform = load_wav(audio_path) - waveform = torch.tensor(waveform).transpose(1, 0) - mixed_mag, mixed_phase = self.stft.transform(waveform) - text_query = ['[CLS] ' + text] - mixed_mag = mixed_mag.transpose(2, 1).unsqueeze(0).to(self.device) - est_mask = self.model(mixed_mag, text_query) - est_mag = est_mask * mixed_mag - est_mag = est_mag.squeeze(1) - est_mag = est_mag.permute(0, 2, 1) - est_wav = self.stft.inverse(est_mag.cpu().detach(), mixed_phase) - est_wav = est_wav.squeeze(0).squeeze(0).numpy() - # est_path = f'output/est{i}.wav' - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - print('audio_filename ', audio_filename) - save_wav(est_wav, audio_filename) - return audio_filename - - -class Binaural: - def __init__(self, device): - self.device = device - self.model_file = 'mono2binaural/useful_ckpts/m2b/binaural_network.net' - self.position_file = ['mono2binaural/useful_ckpts/m2b/tx_positions.txt', - 'mono2binaural/useful_ckpts/m2b/tx_positions2.txt', - 'mono2binaural/useful_ckpts/m2b/tx_positions3.txt', - 'mono2binaural/useful_ckpts/m2b/tx_positions4.txt', - 'mono2binaural/useful_ckpts/m2b/tx_positions5.txt'] - self.net = BinauralNetwork(view_dim=7, - warpnet_layers=4, - warpnet_channels=64, - ) - self.net.load_from_file(self.model_file) - self.sr = 48000 - - @prompts(name="Sythesize Binaural Audio From A Mono Audio Input", - description="useful for when you want to transfer your mono audio into binaural audio, " - "receives audio_path as input. " - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, audio_path): - mono, sr = librosa.load(path=audio_path, sr=self.sr, mono=True) - mono = torch.from_numpy(mono) - mono = mono.unsqueeze(0) - import numpy as np - import random - rand_int = random.randint(0, 4) - view = np.loadtxt(self.position_file[rand_int]).transpose().astype(np.float32) - view = torch.from_numpy(view) - if not view.shape[-1] * 400 == mono.shape[-1]: - mono = mono[:, :(mono.shape[-1] // 400) * 400] # - if view.shape[1] * 400 > mono.shape[1]: - m_a = view.shape[1] - mono.shape[-1] // 400 - rand_st = random.randint(0, m_a) - view = view[:, m_a:m_a + (mono.shape[-1] // 400)] # - # binauralize and save output - self.net.eval().to(self.device) - mono, view = mono.to(self.device), view.to(self.device) - chunk_size = 48000 # forward in chunks of 1s - rec_field = 1000 # add 1000 samples as "safe bet" since warping has undefined rec. 
field - rec_field -= rec_field % 400 # make sure rec_field is a multiple of 400 to match audio and view frequencies - chunks = [ - { - "mono": mono[:, max(0, i - rec_field):i + chunk_size], - "view": view[:, max(0, i - rec_field) // 400:(i + chunk_size) // 400] - } - for i in range(0, mono.shape[-1], chunk_size) - ] - for i, chunk in enumerate(chunks): - with torch.no_grad(): - mono = chunk["mono"].unsqueeze(0) - view = chunk["view"].unsqueeze(0) - binaural = self.net(mono, view).squeeze(0) - if i > 0: - binaural = binaural[:, -(mono.shape[-1] - rec_field):] - chunk["binaural"] = binaural - binaural = torch.cat([chunk["binaural"] for chunk in chunks], dim=-1) - binaural = torch.clamp(binaural, min=-1, max=1).cpu() - # binaural = chunked_forwarding(net, mono, view) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - import torchaudio - torchaudio.save(audio_filename, binaural, sr) - # soundfile.write(audio_filename, binaural, samplerate = 48000) - print(f"Processed Binaural.run, audio_filename: {audio_filename}") - return audio_filename - - -class TargetSoundDetection: - def __init__(self, device): - self.device = device - self.MEL_ARGS = { - 'n_mels': 64, - 'n_fft': 2048, - 'hop_length': int(22050 * 20 / 1000), - 'win_length': int(22050 * 40 / 1000) - } - self.EPS = np.spacing(1) - self.clip_model, _ = clip.load("ViT-B/32", device=self.device) - self.event_labels = event_labels - self.id_to_event = {i: label for i, label in enumerate(self.event_labels)} - config = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/run_config.pth', - map_location='cpu') - config_parameters = dict(config) - config_parameters['tao'] = 0.6 - if 'thres' not in config_parameters.keys(): - config_parameters['thres'] = 0.5 - if 'time_resolution' not in config_parameters.keys(): - config_parameters['time_resolution'] = 125 - model_parameters = torch.load( - 'audio_detection/target_sound_detection/useful_ckpts/tsd/run_model_7_loss=-0.0724.pt' - , map_location=lambda storage, loc: storage) # load parameter - self.model = getattr(tsd_models, config_parameters['model'])(config_parameters, - inputdim=64, outputdim=2, - time_resolution=config_parameters[ - 'time_resolution'], - **config_parameters['model_args']) - self.model.load_state_dict(model_parameters) - self.model = self.model.to(self.device).eval() - self.re_embeds = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/text_emb.pth') - self.ref_mel = torch.load('audio_detection/target_sound_detection/useful_ckpts/tsd/ref_mel.pth') - - def extract_feature(self, fname): - import soundfile as sf - y, sr = sf.read(fname, dtype='float32') - print('y ', y.shape) - ti = y.shape[0] / sr - if y.ndim > 1: - y = y.mean(1) - y = librosa.resample(y, sr, 22050) - lms_feature = np.log(librosa.feature.melspectrogram(y, **self.MEL_ARGS) + self.EPS).T - return lms_feature, ti - - def build_clip(self, text): - text = clip.tokenize(text).to(self.device) # ["a diagram with dog", "a dog", "a cat"] - text_features = self.clip_model.encode_text(text) - return text_features - - def cal_similarity(self, target, retrievals): - ans = [] - for name in retrievals.keys(): - tmp = retrievals[name] - s = torch.cosine_similarity(target.squeeze(), tmp.squeeze(), dim=0) - ans.append(s.item()) - return ans.index(max(ans)) - - @prompts(name="Target Sound Detection", - description="useful for when you want to know when the target sound event in the audio happens. 
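`TargetSoundDetection` above maps the free-text query onto one of its known event classes by embedding the text with CLIP and taking the reference embedding with the highest cosine similarity. A minimal sketch of that matching step (the reference dictionary stands in for the precomputed `text_emb.pth`; the 512-dimensional vectors are illustrative):

```python
# Standalone sketch: pick the nearest reference event by cosine similarity of embeddings.
import torch
import torch.nn.functional as F

def closest_event(query_emb: torch.Tensor, ref_embs: dict) -> str:
    names = list(ref_embs)
    sims = torch.stack([F.cosine_similarity(query_emb.flatten(), ref_embs[n].flatten(), dim=0)
                        for n in names])
    return names[int(sims.argmax())]

refs = {"dog_bark": torch.randn(512), "siren": torch.randn(512), "speech": torch.randn(512)}
print(closest_event(torch.randn(512), refs))
```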
You can use language descriptions to instruct the model, " - "receives text description and audio_path as input. " - "The input to this tool should be a comma seperated string of two, " - "representing audio path and the text description. ") - def inference(self, inputs): - audio_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) - target_emb = self.build_clip(text) # torch type - idx = self.cal_similarity(target_emb, self.re_embeds) - target_event = self.id_to_event[idx] - embedding = self.ref_mel[target_event] - embedding = torch.from_numpy(embedding) - embedding = embedding.unsqueeze(0).to(self.device).float() - inputs, ti = self.extract_feature(audio_path) - inputs = torch.from_numpy(inputs) - inputs = inputs.unsqueeze(0).to(self.device).float() - decision, decision_up, logit = self.model(inputs, embedding) - pred = decision_up.detach().cpu().numpy() - pred = pred[:, :, 0] - frame_num = decision_up.shape[1] - time_ratio = ti / frame_num - filtered_pred = median_filter(pred, window_size=1, threshold=0.5) - time_predictions = [] - for index_k in range(filtered_pred.shape[0]): - decoded_pred = [] - decoded_pred_ = decode_with_timestamps(target_event, filtered_pred[index_k, :]) - if len(decoded_pred_) == 0: # neg deal - decoded_pred_.append((target_event, 0, 0)) - decoded_pred.append(decoded_pred_) - for num_batch in range(len(decoded_pred)): # when we test our model,the batch_size is 1 - cur_pred = pred[num_batch] - # Save each frame output, for later visualization - label_prediction = decoded_pred[num_batch] # frame predict - for event_label, onset, offset in label_prediction: - time_predictions.append({ - 'onset': onset * time_ratio, - 'offset': offset * time_ratio, }) - ans = '' - for i, item in enumerate(time_predictions): - ans = ans + 'segment' + str(i + 1) + ' start_time: ' + str(item['onset']) + ' end_time: ' + str( - item['offset']) + '\t' - return ans - - -class Speech_Enh_SC: - """Speech Enhancement or Separation in single-channel - Example usage: - enh_model = Speech_Enh_SS("cuda") - enh_wav = enh_model.inference("./test_chime4_audio_M05_440C0213_PED_REAL.wav") - """ - - def __init__(self, device="cuda", model_name="espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw"): - self.model_name = model_name - self.device = device - print("Initializing ESPnet Enh to %s" % device) - self._initialize_model() - - def _initialize_model(self): - from espnet_model_zoo.downloader import ModelDownloader - from espnet2.bin.enh_inference import SeparateSpeech - - d = ModelDownloader() - - cfg = d.download_and_unpack(self.model_name) - self.separate_speech = SeparateSpeech( - train_config=cfg["train_config"], - model_file=cfg["model_file"], - # for segment-wise process on long speech - segment_size=2.4, - hop_size=0.8, - normalize_segment_scale=False, - show_progressbar=True, - ref_channel=None, - normalize_output_wav=True, - device=self.device, - ) - - @prompts(name="Speech Enhancement In Single-Channel", - description="useful for when you want to enhance the quality of the speech signal by reducing background noise (single-channel), " - "receives audio_path as input." - "The input to this tool should be a string, " - "representing the audio_path. 
") - def inference(self, speech_path, ref_channel=0): - speech, sr = soundfile.read(speech_path) - speech = speech[:, ref_channel] - enh_speech = self.separate_speech(speech[None, ...], fs=sr) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr) - return audio_filename - - -class Speech_SS: - def __init__(self, device="cuda", model_name="lichenda/wsj0_2mix_skim_noncausal"): - self.model_name = model_name - self.device = device - print("Initializing ESPnet SS to %s" % device) - self._initialize_model() - - def _initialize_model(self): - from espnet_model_zoo.downloader import ModelDownloader - from espnet2.bin.enh_inference import SeparateSpeech - - d = ModelDownloader() - - cfg = d.download_and_unpack(self.model_name) - self.separate_speech = SeparateSpeech( - train_config=cfg["train_config"], - model_file=cfg["model_file"], - # for segment-wise process on long speech - segment_size=2.4, - hop_size=0.8, - normalize_segment_scale=False, - show_progressbar=True, - ref_channel=None, - normalize_output_wav=True, - device=self.device, - ) - - @prompts(name="Speech Separation", - description="useful for when you want to separate each speech from the speech mixture, " - "receives audio_path as input." - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, speech_path): - speech, sr = soundfile.read(speech_path) - enh_speech = self.separate_speech(speech[None, ...], fs=sr) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - if len(enh_speech) == 1: - soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr) - else: - audio_filename_1 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename_1, enh_speech[0].squeeze(), samplerate=sr) - audio_filename_2 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename_2, enh_speech[1].squeeze(), samplerate=sr) - audio_filename = merge_audio(audio_filename_1, audio_filename_2) - return audio_filename - -class Speech_Enh_SC: - """Speech Enhancement or Separation in single-channel - Example usage: - enh_model = Speech_Enh_SS("cuda") - enh_wav = enh_model.inference("./test_chime4_audio_M05_440C0213_PED_REAL.wav") - """ - - def __init__(self, device="cuda", model_name="espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw"): - self.model_name = model_name - self.device = device - print("Initializing ESPnet Enh to %s" % device) - self._initialize_model() - - def _initialize_model(self): - from espnet_model_zoo.downloader import ModelDownloader - from espnet2.bin.enh_inference import SeparateSpeech - - d = ModelDownloader() - - cfg = d.download_and_unpack(self.model_name) - self.separate_speech = SeparateSpeech( - train_config=cfg["train_config"], - model_file=cfg["model_file"], - # for segment-wise process on long speech - segment_size=2.4, - hop_size=0.8, - normalize_segment_scale=False, - show_progressbar=True, - ref_channel=None, - normalize_output_wav=True, - device=self.device, - ) - - @prompts(name="Speech Enhancement In Single-Channel", - description="useful for when you want to enhance the quality of the speech signal by reducing background noise (single-channel), " - "receives audio_path as input." - "The input to this tool should be a string, " - "representing the audio_path. 
") - def inference(self, speech_path, ref_channel=0): - speech, sr = soundfile.read(speech_path) - if speech.ndim != 1: - speech = speech[:, ref_channel] - enh_speech = self.separate_speech(speech[None, ...], fs=sr) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr) - return audio_filename - - -class Speech_SS: - def __init__(self, device="cuda", model_name="lichenda/wsj0_2mix_skim_noncausal"): - self.model_name = model_name - self.device = device - print("Initializing ESPnet SS to %s" % device) - self._initialize_model() - - def _initialize_model(self): - from espnet_model_zoo.downloader import ModelDownloader - from espnet2.bin.enh_inference import SeparateSpeech - - d = ModelDownloader() - - cfg = d.download_and_unpack(self.model_name) - self.separate_speech = SeparateSpeech( - train_config=cfg["train_config"], - model_file=cfg["model_file"], - # for segment-wise process on long speech - segment_size=2.4, - hop_size=0.8, - normalize_segment_scale=False, - show_progressbar=True, - ref_channel=None, - normalize_output_wav=True, - device=self.device, - ) - - @prompts(name="Speech Separation", - description="useful for when you want to separate each speech from the speech mixture, " - "receives audio_path as input." - "The input to this tool should be a string, " - "representing the audio_path. ") - def inference(self, speech_path): - speech, sr = soundfile.read(speech_path) - enh_speech = self.separate_speech(speech[None, ...], fs=sr) - audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - if len(enh_speech) == 1: - soundfile.write(audio_filename, enh_speech[0].squeeze(), samplerate=sr) - else: - audio_filename_1 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename_1, enh_speech[0].squeeze(), samplerate=sr) - audio_filename_2 = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav") - soundfile.write(audio_filename_2, enh_speech[1].squeeze(), samplerate=sr) - audio_filename = merge_audio(audio_filename_1, audio_filename_2) - return audio_filename \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/utils.py deleted file mode 100644 index f95931fb1c422cbd8349b88e1effb9323f170b2b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/utils.py +++ /dev/null @@ -1,26 +0,0 @@ -import argparse -import yaml -import sys - -def read_config_as_args(config_path,args=None,is_config_str=False): - return_dict = {} - - if config_path is not None: - if is_config_str: - yml_config = yaml.load(config_path, Loader=yaml.FullLoader) - else: - with open(config_path, "r") as f: - yml_config = yaml.load(f, Loader=yaml.FullLoader) - - if args != None: - for k, v in yml_config.items(): - if k in args.__dict__: - args.__dict__[k] = v - else: - sys.stderr.write("Ignored unknown parameter {} in yaml.\n".format(k)) - else: - for k, v in yml_config.items(): - return_dict[k] = v - - args = args if args != None else return_dict - return argparse.Namespace(**args) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py deleted file mode 100644 index 
028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py +++ /dev/null @@ -1,28 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch.nn as nn -from .resample import UpSample1d, DownSample1d - - -class Activation1d(nn.Module): - def __init__(self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/modules.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/modules.py deleted file mode 100644 index f6769e9d154301a88e6dc94e463dafd14835d8ba..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/modules.py +++ /dev/null @@ -1,314 +0,0 @@ -import torch -import torch.nn as nn -from functools import partial - -from ldm.modules.x_transformer import Encoder, TransformerWrapper # TODO: can we directly rely on lucidrains code and simply add this as a reuirement? --> test -from torch.utils.checkpoint import checkpoint -from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel, AutoTokenizer -from importlib_resources import files -from ldm.modules.encoders.CLAP.utils import read_config_as_args -from ldm.modules.encoders.CLAP.clap import TextEncoder -from ldm.util import default, count_params - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class ClassEmbedder(nn.Module): - def __init__(self, embed_dim, n_classes=1000, key='class'): - super().__init__() - self.key = key - self.embedding = nn.Embedding(n_classes, embed_dim) - - def forward(self, batch, key=None): - if key is None: - key = self.key - # this is for use in crossattn - c = batch[key][:, None]# (bsz,1) - c = self.embedding(c) - return c - - -class TransformerEmbedder(AbstractEncoder): - """Some transformer encoder layers""" - def __init__(self, n_embed, n_layer, vocab_size, max_seq_len=77, device="cuda"): - super().__init__() - self.device = device - self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len, - attn_layers=Encoder(dim=n_embed, depth=n_layer)) - - def forward(self, tokens): - tokens = tokens.to(self.device) # meh - z = self.transformer(tokens, return_embeddings=True) - return z - - def encode(self, x): - return self(x) - - -class BERTTokenizer(AbstractEncoder): - """ Uses a pretrained BERT tokenizer by huggingface. 
Vocab size: 30522 (?)""" - def __init__(self, device="cuda", vq_interface=True, max_length=77): - super().__init__() - from transformers import BertTokenizerFast # TODO: add to reuquirements - self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") - self.device = device - self.vq_interface = vq_interface - self.max_length = max_length - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - return tokens - - @torch.no_grad() - def encode(self, text): - tokens = self(text) - if not self.vq_interface: - return tokens - return None, None, [None, None, tokens] - - def decode(self, text): - return text - - -class BERTEmbedder(AbstractEncoder):# 这里不是用的pretrained bert,是用的transformers的BertTokenizer加自定义的TransformerWrapper - """Uses the BERT tokenizr model and add some transformer encoder layers""" - def __init__(self, n_embed, n_layer, vocab_size=30522, max_seq_len=77, - device="cuda",use_tokenizer=True, embedding_dropout=0.0): - super().__init__() - self.use_tknz_fn = use_tokenizer - if self.use_tknz_fn: - self.tknz_fn = BERTTokenizer(vq_interface=False, max_length=max_seq_len) - self.device = device - self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len, - attn_layers=Encoder(dim=n_embed, depth=n_layer), - emb_dropout=embedding_dropout) - - def forward(self, text): - if self.use_tknz_fn: - tokens = self.tknz_fn(text)#.to(self.device) - else: - tokens = text - z = self.transformer(tokens, return_embeddings=True) - return z - - def encode(self, text): - # output of length 77 - return self(text) - - -class SpatialRescaler(nn.Module): - def __init__(self, - n_stages=1, - method='bilinear', - multiplier=0.5, - in_channels=3, - out_channels=None, - bias=False): - super().__init__() - self.n_stages = n_stages - assert self.n_stages >= 0 - assert method in ['nearest','linear','bilinear','trilinear','bicubic','area'] - self.multiplier = multiplier - self.interpolator = partial(torch.nn.functional.interpolate, mode=method) - self.remap_output = out_channels is not None - if self.remap_output: - print(f'Spatial Rescaler mapping from {in_channels} to {out_channels} channels after resizing.') - self.channel_mapper = nn.Conv2d(in_channels,out_channels,1,bias=bias) - - def forward(self,x): - for stage in range(self.n_stages): - x = self.interpolator(x, scale_factor=self.multiplier) - - - if self.remap_output: - x = self.channel_mapper(x) - return x - - def encode(self, x): - return self(x) - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - -class FrozenT5Embedder(AbstractEncoder): - """Uses the T5 transformer encoder for text""" - def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = device - self.max_length = max_length # TODO: typical value? 
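The various `Frozen*Embedder` classes in this file follow the same pattern: load a pretrained text encoder, switch it to eval mode, and turn off gradients so it is never updated during diffusion training. A minimal standalone sketch of that pattern (the checkpoint name and max length are only examples):

```python
# Sketch of the frozen-text-encoder pattern: eval mode + requires_grad = False,
# then encode under torch.no_grad(). Checkpoint and max_length are illustrative.
import torch
from transformers import T5EncoderModel, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
encoder = T5EncoderModel.from_pretrained("google/flan-t5-small").eval()
for param in encoder.parameters():
    param.requires_grad = False

with torch.no_grad():
    batch = tokenizer(["a dog barking in the rain"], padding="max_length", max_length=77,
                      truncation=True, return_tensors="pt")
    z = encoder(input_ids=batch["input_ids"]).last_hidden_state  # (1, 77, hidden_size)
```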
- if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - - -class FrozenCLAPEmbedder(AbstractEncoder): - """Uses the CLAP transformer encoder for text (from huggingface)""" - def __init__(self, weights_path, freeze=True, device="cuda", max_length=77): # clip-vit-base-patch32 - super().__init__() - - model_state_dict = torch.load(weights_path, map_location=torch.device('cpu'))['model'] - match_params = dict() - for key in list(model_state_dict.keys()): - if 'caption_encoder' in key: - match_params[key.replace('caption_encoder.', '')] = model_state_dict[key] - - config_as_str = files('ldm').joinpath('modules/encoders/CLAP/config.yml').read_text() - args = read_config_as_args(config_as_str, is_config_str=True) - - # To device - self.tokenizer = AutoTokenizer.from_pretrained(args.text_model) # args.text_model - self.caption_encoder = TextEncoder( - args.d_proj, args.text_model, args.transformer_embed_dim - ) - - self.max_length = max_length - self.device = device - if freeze: self.freeze() - - print(f"{self.caption_encoder.__class__.__name__} comes with {count_params(self.caption_encoder) * 1.e-6:.2f} M params.") - - def freeze(self): - self.caption_encoder.base = self.caption_encoder.base.eval() - for param in self.caption_encoder.base.parameters(): - param.requires_grad = False - - - def encode(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - - outputs = self.caption_encoder.base(input_ids=tokens) - z = self.caption_encoder.projection(outputs.last_hidden_state) - return z - -class FrozenCLAPEmbedderNoLoad(AbstractEncoder): - def __init__(self, config, freeze=True, device="cpu", max_length=77): - super().__init__() - args = config - - # To device - self.tokenizer = AutoTokenizer.from_pretrained(args.text_model) # args.text_model - self.caption_encoder = TextEncoder( - args.d_proj, args.text_model, args.transformer_embed_dim - ) - - self.max_length = max_length - self.device = device - if freeze: self.freeze() - - print(f"{self.caption_encoder.__class__.__name__} comes with {count_params(self.caption_encoder) * 1.e-6:.2f} M params.") - - def freeze(self): - self.caption_encoder.base = self.caption_encoder.base.eval() - for param in self.caption_encoder.base.parameters(): - param.requires_grad = False - - - def encode(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - - outputs = self.caption_encoder.base(input_ids=tokens) - z = self.caption_encoder.projection(outputs.last_hidden_state) - return z - - -class NewFrozenCLAPEmbedder(AbstractEncoder): - """Uses the CLAP transformer encoder for text (from huggingface)""" - def __init__(self, weights_path, freeze=True, 
device="cuda", max_length=77): # clip-vit-base-patch32 - super().__init__() - # To device - from transformers import RobertaTokenizer - from ldm.modules.encoders.open_clap import create_model - - - model, model_cfg = create_model( - 'HTSAT-tiny', - 'roberta', - weights_path, - enable_fusion=True, - fusion_type='aff_2d' - ) - - del model.audio_branch, model.audio_transform, model.audio_projection - self.tokenizer = RobertaTokenizer.from_pretrained('roberta-base') - self.model = model - - self.max_length = max_length - self.device = device - if freeze: self.freeze() - - param_num = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(f'{self.model.__class__.__name__} comes with: {param_num / 1e+6:.3f} M params.') - - def freeze(self): - self.model = self.model.eval() - for param in self.model.parameters(): - param.requires_grad = False - - def encode(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - outputs = self.model.text_branch(input_ids=batch_encoding["input_ids"].to(self.device), attention_mask=batch_encoding["attention_mask"].to(self.device)) - z = self.model.text_projection(outputs.last_hidden_state) - return z - -class FrozenFLANEmbedder(AbstractEncoder): - """Uses the T5 transformer encoder for text""" - def __init__(self, version="google/flan-t5-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = device - self.max_length = max_length # TODO: typical value? - if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) \ No newline at end of file diff --git a/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/README.md b/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/README.md deleted file mode 100644 index 763b0b144d5b45dcf7609bfa0373368ef7c056fd..0000000000000000000000000000000000000000 --- a/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 03 NLP MLM SOTA MedEntity -emoji: 🐠 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AUBMC-AIM/OCTaGAN/README.md b/spaces/AUBMC-AIM/OCTaGAN/README.md deleted file mode 100644 index 44271ae50a057f8418739a69f57b3e321ef3deac..0000000000000000000000000000000000000000 --- a/spaces/AUBMC-AIM/OCTaGAN/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: OCTaGAN -emoji: 🐠 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for 
Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Abuzariii/Text-Generation-with-GPT-2/app.py b/spaces/Abuzariii/Text-Generation-with-GPT-2/app.py deleted file mode 100644 index 777c0fe0615ad4d9096505385f6eef4fab891d50..0000000000000000000000000000000000000000 --- a/spaces/Abuzariii/Text-Generation-with-GPT-2/app.py +++ /dev/null @@ -1,23 +0,0 @@ -from transformers import GPT2LMHeadModel, GPT2Tokenizer - -tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large") -model = GPT2LMHeadModel.from_pretrained("gpt2-large", pad_token_id=tokenizer.eos_token_id) - -from transformers import pipeline, set_seed -generator = pipeline('text-generation', model='gpt2') - -def generate_text(prompt): - text1 = generator(prompt, max_length=3000, num_return_sequences=1) - return text1[0].get('generated_text') - - -import gradio as gr - -gr.Interface( - title = 'Text Generation using GPT 2', - fn=generate_text, - inputs=gr.Textbox(placeholder="Type Here..."), - outputs=[ - "text" - ], - theme = 'darkhuggingface').launch() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptForLove.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptForLove.py deleted file mode 100644 index 53c403e16beecfe2e6f7255ef90c9a1bb8444230..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptForLove.py +++ /dev/null @@ -1,82 +0,0 @@ -from __future__ import annotations - -from aiohttp import ClientSession -import execjs, os, json - -from ..typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider -from .helper import format_prompt - -class GptForLove(AsyncGeneratorProvider): - url = "https://ai18.gptforlove.com" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> AsyncGenerator: - if not model: - model = "gpt-3.5-turbo" - headers = { - "authority": "api.gptplus.one", - "accept": "application/json, text/plain, */*", - "accept-language": "de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6,nl;q=0.5,zh-CN;q=0.4,zh-TW;q=0.3,zh;q=0.2", - "content-type": "application/json", - "origin": cls.url, - "referer": f"{cls.url}/", - "sec-ch-ua": "\"Google Chrome\";v=\"117\", \"Not;A=Brand\";v=\"8\", \"Chromium\";v=\"117\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "Linux", - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "cross-site", - "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 
Safari/537.36" - } - async with ClientSession(headers=headers) as session: - prompt = format_prompt(messages) - data = { - "prompt": prompt, - "options": {}, - "systemMessage": "You are ChatGPT, the version is GPT3.5, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.", - "temperature": 0.8, - "top_p": 1, - "secret": get_secret(), - **kwargs - } - async with session.post("https://api.gptplus.one/chat-process", json=data) as response: - response.raise_for_status() - async for line in response.content: - try: - line = json.loads(line) - except: - raise RuntimeError(f"Broken line: {line}") - if "detail" in line: - content = line["detail"]["choices"][0]["delta"].get("content") - if content: - yield content - elif "10分钟内提问超过了5次" in line: - raise RuntimeError("Rate limit reached") - else: - raise RuntimeError(f"Response: {line}") - - -def get_secret() -> str: - dir = os.path.dirname(__file__) - dir += '/npm/node_modules/crypto-js' - source = """ -CryptoJS = require('{dir}/crypto-js') -var k = '14487141bvirvvG' - , e = Math.floor(new Date().getTime() / 1e3); -var t = CryptoJS.enc.Utf8.parse(e) - , o = CryptoJS.AES.encrypt(t, k, { - mode: CryptoJS.mode.ECB, - padding: CryptoJS.pad.Pkcs7 -}); -return o.toString() -""" - source = source.replace('{dir}', dir) - return execjs.compile(source).call('') diff --git a/spaces/AiMimicry/sovits-models/app-slice.py b/spaces/AiMimicry/sovits-models/app-slice.py deleted file mode 100644 index 909fc3d594aa3f89074d687d21af90ea41034f5e..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/app-slice.py +++ /dev/null @@ -1,135 +0,0 @@ -import os -import gradio as gr -import edge_tts -from pathlib import Path -import inference.infer_tool as infer_tool -import utils -from inference.infer_tool import Svc -import logging -import webbrowser -import argparse -import asyncio -import librosa -import soundfile -import gradio.processing_utils as gr_processing_utils -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess -def create_vc_fn(model, sid): - def vc_fn(input_audio, vc_transform, auto_f0, slice_db, noise_scale, pad_seconds, tts_text, tts_voice, tts_mode): - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3") - soundfile.write("tts.wav", audio, 24000, format="wav") - wav_path = "tts.wav" - else: - if input_audio is None: - return "You need to select an audio", None - raw_audio_path = f"raw/{input_audio}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - _audio = model.slice_inference( - wav_path, sid, vc_transform, slice_db, - cluster_infer_ratio=0, - auto_predict_f0=auto_f0, - noice_scale=noise_scale, - pad_seconds=pad_seconds) - model.clear_empty() - return "Success", (44100, _audio) - return vc_fn - -def refresh_raw_wav(): - return gr.Dropdown.update(choices=os.listdir("raw")) - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Button.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Button.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--colab", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - hubert_model = utils.get_hubert_model().to(args.device) - models = [] - voices = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - for r in tts_voice_list: - voices.append(f"{r['ShortName']}-{r['Gender']}") - raw = os.listdir("raw") - for f in os.listdir("models"): - name = f - model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config.json", device=args.device) - cover = f"models/{f}/cover.png" if os.path.exists(f"models/{f}/cover.png") else None - models.append((name, cover, create_vc_fn(model, name))) - with gr.Blocks() as app: - gr.Markdown( - "#
Sovits Models\n" - "##
The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=sayashi.Sovits-Umamusume)\n\n" - "[Open In Colab](https://colab.research.google.com/drive/1wfsBbMzmtLflOJeqc5ZnJiLY7L239hJW?usp=share_link)" - " without queue and length limitation.\n\n" - "[Original Repo](https://github.com/svc-develop-team/so-vits-svc)\n\n" - "Other models:\n" - "[rudolf](https://huggingface.co/spaces/sayashi/sovits-rudolf)\n" - "[teio](https://huggingface.co/spaces/sayashi/sovits-teio)\n" - "[goldship](https://huggingface.co/spaces/sayashi/sovits-goldship)\n" - "[tannhauser](https://huggingface.co/spaces/sayashi/sovits-tannhauser)\n" - - ) - with gr.Tabs(): - for (name, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'' if cover else "" - '
' - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - vc_input = gr.Dropdown(label="Input audio", choices=raw) - vc_refresh = gr.Button("🔁", variant="primary") - vc_transform = gr.Number(label="vc_transform", value=0) - slice_db = gr.Number(label="slice_db", value=-40) - noise_scale = gr.Number(label="noise_scale", value=0.4) - pad_seconds = gr.Number(label="pad_seconds", value=0.5) - auto_f0 = gr.Checkbox(label="auto_f0", value=False) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(choices=voices, visible=False) - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transform, auto_f0, slice_db, noise_scale, pad_seconds, tts_text, tts_voice, tts_mode], [vc_output1, vc_output2]) - vc_refresh.click(refresh_raw_wav, [], [vc_input]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, vc_refresh, tts_text, tts_voice]) - if args.colab: - webbrowser.open("http://127.0.0.1:7860") - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/losses/style_loss.py b/spaces/AlexWang/lama/saicinpainting/training/losses/style_loss.py deleted file mode 100644 index 0bb42d7fbc5d17a47bec7365889868505f5fdfb5..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/losses/style_loss.py +++ /dev/null @@ -1,155 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.models as models - - -class PerceptualLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]): - super(PerceptualLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - self.weights = weights - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - content_loss = 0.0 - content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1']) - content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1']) - content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1']) - content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1']) - content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1']) - - - return content_loss - - -class VGG19(torch.nn.Module): - def __init__(self): - super(VGG19, self).__init__() - features = models.vgg19(pretrained=True).features - self.relu1_1 = torch.nn.Sequential() - self.relu1_2 = torch.nn.Sequential() - - self.relu2_1 = torch.nn.Sequential() - self.relu2_2 = torch.nn.Sequential() - - self.relu3_1 = torch.nn.Sequential() - self.relu3_2 = torch.nn.Sequential() - self.relu3_3 = torch.nn.Sequential() - self.relu3_4 = torch.nn.Sequential() - - self.relu4_1 = torch.nn.Sequential() - self.relu4_2 = torch.nn.Sequential() - self.relu4_3 = torch.nn.Sequential() - self.relu4_4 = torch.nn.Sequential() - - self.relu5_1 = torch.nn.Sequential() - self.relu5_2 = torch.nn.Sequential() - self.relu5_3 = torch.nn.Sequential() - self.relu5_4 = torch.nn.Sequential() - - for x in range(2): - self.relu1_1.add_module(str(x), 
features[x]) - - for x in range(2, 4): - self.relu1_2.add_module(str(x), features[x]) - - for x in range(4, 7): - self.relu2_1.add_module(str(x), features[x]) - - for x in range(7, 9): - self.relu2_2.add_module(str(x), features[x]) - - for x in range(9, 12): - self.relu3_1.add_module(str(x), features[x]) - - for x in range(12, 14): - self.relu3_2.add_module(str(x), features[x]) - - for x in range(14, 16): - self.relu3_2.add_module(str(x), features[x]) - - for x in range(16, 18): - self.relu3_4.add_module(str(x), features[x]) - - for x in range(18, 21): - self.relu4_1.add_module(str(x), features[x]) - - for x in range(21, 23): - self.relu4_2.add_module(str(x), features[x]) - - for x in range(23, 25): - self.relu4_3.add_module(str(x), features[x]) - - for x in range(25, 27): - self.relu4_4.add_module(str(x), features[x]) - - for x in range(27, 30): - self.relu5_1.add_module(str(x), features[x]) - - for x in range(30, 32): - self.relu5_2.add_module(str(x), features[x]) - - for x in range(32, 34): - self.relu5_3.add_module(str(x), features[x]) - - for x in range(34, 36): - self.relu5_4.add_module(str(x), features[x]) - - # don't need the gradients, just want the features - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x): - relu1_1 = self.relu1_1(x) - relu1_2 = self.relu1_2(relu1_1) - - relu2_1 = self.relu2_1(relu1_2) - relu2_2 = self.relu2_2(relu2_1) - - relu3_1 = self.relu3_1(relu2_2) - relu3_2 = self.relu3_2(relu3_1) - relu3_3 = self.relu3_3(relu3_2) - relu3_4 = self.relu3_4(relu3_3) - - relu4_1 = self.relu4_1(relu3_4) - relu4_2 = self.relu4_2(relu4_1) - relu4_3 = self.relu4_3(relu4_2) - relu4_4 = self.relu4_4(relu4_3) - - relu5_1 = self.relu5_1(relu4_4) - relu5_2 = self.relu5_2(relu5_1) - relu5_3 = self.relu5_3(relu5_2) - relu5_4 = self.relu5_4(relu5_3) - - out = { - 'relu1_1': relu1_1, - 'relu1_2': relu1_2, - - 'relu2_1': relu2_1, - 'relu2_2': relu2_2, - - 'relu3_1': relu3_1, - 'relu3_2': relu3_2, - 'relu3_3': relu3_3, - 'relu3_4': relu3_4, - - 'relu4_1': relu4_1, - 'relu4_2': relu4_2, - 'relu4_3': relu4_3, - 'relu4_4': relu4_4, - - 'relu5_1': relu5_1, - 'relu5_2': relu5_2, - 'relu5_3': relu5_3, - 'relu5_4': relu5_4, - } - return out diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/score_sde_ve.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/score_sde_ve.md deleted file mode 100644 index 66a00c69e3b42d42093ca0434e0b56f9cb9aae52..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/score_sde_ve.md +++ /dev/null @@ -1,20 +0,0 @@ - - -# Variance Exploding Stochastic Differential Equation (VE-SDE) scheduler - -## Overview - -Original paper can be found [here](https://arxiv.org/abs/2011.13456). 
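A hypothetical usage sketch, not part of the original doc: constructing the scheduler from `diffusers` and preparing its timestep and sigma schedules (the argument values mirror the library defaults and are illustrative):

```python
# Assumes a diffusers installation that provides ScoreSdeVeScheduler.
from diffusers import ScoreSdeVeScheduler

scheduler = ScoreSdeVeScheduler(num_train_timesteps=2000, sigma_min=0.01, sigma_max=1348.0)
scheduler.set_timesteps(num_inference_steps=10)
scheduler.set_sigmas(num_inference_steps=10)
print(scheduler.timesteps.shape, scheduler.sigmas.shape)
```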
- -## ScoreSdeVeScheduler -[[autodoc]] ScoreSdeVeScheduler diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/speech_to_image_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/speech_to_image_diffusion.py deleted file mode 100644 index 55d805bc8c3230254d1b0c141bf2d65514eba01f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/speech_to_image_diffusion.py +++ /dev/null @@ -1,261 +0,0 @@ -import inspect -from typing import Callable, List, Optional, Union - -import torch -from transformers import ( - CLIPImageProcessor, - CLIPTextModel, - CLIPTokenizer, - WhisperForConditionalGeneration, - WhisperProcessor, -) - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DiffusionPipeline, - LMSDiscreteScheduler, - PNDMScheduler, - UNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class SpeechToImagePipeline(DiffusionPipeline): - def __init__( - self, - speech_model: WhisperForConditionalGeneration, - speech_processor: WhisperProcessor, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - ): - super().__init__() - - if safety_checker is None: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." 
- ) - - self.register_modules( - speech_model=speech_model, - speech_processor=speech_processor, - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - if slice_size == "auto": - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - self.enable_attention_slicing(None) - - @torch.no_grad() - def __call__( - self, - audio, - sampling_rate=16_000, - height: int = 512, - width: int = 512, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - **kwargs, - ): - inputs = self.speech_processor.feature_extractor( - audio, return_tensors="pt", sampling_rate=sampling_rate - ).input_features.to(self.device) - predicted_ids = self.speech_model.generate(inputs, max_length=480_000) - - prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[ - 0 - ] - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - - if text_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
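The guidance logic that follows boils down to extrapolating from the unconditional noise prediction toward the text-conditioned one. A minimal sketch of just that combination (the tensors are dummies; only the formula matters):

```python
# Classifier-free guidance: amplify the direction from the unconditional to the
# conditional noise prediction by guidance_scale.
import torch

def apply_cfg(noise_uncond: torch.Tensor, noise_text: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

noise_uncond = torch.randn(1, 4, 64, 64)
noise_text = torch.randn(1, 4, 64, 64)
guided = apply_cfg(noise_uncond, noise_text, guidance_scale=7.5)
```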
- do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - - # Unlike in other pipelines, latents need to be generated in the target device - # for 1-to-1 results reproducibility with the CompVis implementation. - # However this currently doesn't work in `mps`. - latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8) - latents_dtype = text_embeddings.dtype - if latents is None: - if self.device.type == "mps": - # randn does not exist on mps - latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to( - self.device - ) - else: - latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - latents = latents.to(self.device) - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - # Some schedulers like PNDM have timesteps as arrays - # It's more optimized to move all timesteps to correct device beforehand - timesteps_tensor = self.scheduler.timesteps.to(self.device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return image - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py deleted file mode 100644 index 97184bd95168e7678afaa5a1341c0bc759d689ee..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py +++ /dev/null @@ -1,1008 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Script to fine-tune Stable Diffusion for InstructPix2Pix.""" - -import argparse -import logging -import math -import os -import shutil -from pathlib import Path - -import accelerate -import datasets -import numpy as np -import PIL -import requests -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from packaging import version -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionInstructPix2PixPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, deprecate, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.19.0") - -logger = get_logger(__name__, log_level="INFO") - -DATASET_NAME_MAPPING = { - "fusing/instructpix2pix-1000-samples": ("input_image", "edit_prompt", "edited_image"), -} -WANDB_TABLE_COL_NAMES = ["original_image", "edited_image", "edit_prompt"] - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script for InstructPix2Pix.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." 
- ), - ) - parser.add_argument( - "--original_image_column", - type=str, - default="input_image", - help="The column of the dataset containing the original image on which edits where made.", - ) - parser.add_argument( - "--edited_image_column", - type=str, - default="edited_image", - help="The column of the dataset containing the edited image.", - ) - parser.add_argument( - "--edit_prompt_column", - type=str, - default="edit_prompt", - help="The column of the dataset containing the edit instruction.", - ) - parser.add_argument( - "--val_image_url", - type=str, - default=None, - help="URL to the original image that you would like to edit (used during inference for debugging purposes).", - ) - parser.add_argument( - "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=1, - help=( - "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="instruct-pix2pix-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=256, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--conditioning_dropout_prob", - type=float, - default=None, - help="Conditioning dropout probability. Drops out the conditionings (image and edit prompt) used in training InstructPix2Pix. See section 3.2.1 in the paper: https://arxiv.org/abs/2211.09800.", - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.") - parser.add_argument( - "--non_ema_revision", - type=str, - default=None, - required=False, - help=( - "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or" - " remote repository specified with --pretrained_model_name_or_path." - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." 
- ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - # default to using the same revision for the non-ema model if not specified - if args.non_ema_revision is None: - args.non_ema_revision = args.revision - - return args - - -def convert_to_np(image, resolution): - image = image.convert("RGB").resize((resolution, resolution)) - return np.array(image).transpose(2, 0, 1) - - -def download_image(url): - image = PIL.Image.open(requests.get(url, stream=True).raw) - image = PIL.ImageOps.exif_transpose(image) - image = image.convert("RGB") - return image - - -def main(): - args = parse_args() - - if args.non_ema_revision is not None: - deprecate( - "non_ema_revision!=None", - "0.15.0", - message=( - "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to" - " use `--variant=non_ema` instead." - ), - ) - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. 
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load scheduler, tokenizer and models. - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision - ) - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision - ) - - # InstructPix2Pix uses an additional image for conditioning. To accommodate that, - # it uses 8 channels (instead of 4) in the first (conv) layer of the UNet. This UNet is - # then fine-tuned on the custom InstructPix2Pix dataset. This modified UNet is initialized - # from the pre-trained checkpoints. For the extra channels added to the first layer, they are - # initialized to zero. - logger.info("Initializing the InstructPix2Pix UNet from the pretrained UNet.") - in_channels = 8 - out_channels = unet.conv_in.out_channels - unet.register_to_config(in_channels=in_channels) - - with torch.no_grad(): - new_conv_in = nn.Conv2d( - in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding - ) - new_conv_in.weight.zero_() - new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) - unet.conv_in = new_conv_in - - # Freeze vae and text_encoder - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - # Create EMA for the unet. - if args.use_ema: - ema_unet = EMAModel(unet.parameters(), model_cls=UNet2DConditionModel, model_config=unet.config) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - # `accelerate` 0.16.0 will have better support for customized saving - if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - if args.use_ema: - ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema")) - - for i, model in enumerate(models): - model.save_pretrained(os.path.join(output_dir, "unet")) - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - def load_model_hook(models, input_dir): - if args.use_ema: - load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel) - ema_unet.load_state_dict(load_model.state_dict()) - ema_unet.to(accelerator.device) - del load_model - - for i in range(len(models)): - # pop models so that they are not loaded again - model = models.pop() - - # load diffusers style into model - load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - unet.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/main/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. 
- dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) - if args.original_image_column is None: - original_image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - original_image_column = args.original_image_column - if original_image_column not in column_names: - raise ValueError( - f"--original_image_column' value '{args.original_image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.edit_prompt_column is None: - edit_prompt_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - edit_prompt_column = args.edit_prompt_column - if edit_prompt_column not in column_names: - raise ValueError( - f"--edit_prompt_column' value '{args.edit_prompt_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.edited_image_column is None: - edited_image_column = dataset_columns[2] if dataset_columns is not None else column_names[2] - else: - edited_image_column = args.edited_image_column - if edited_image_column not in column_names: - raise ValueError( - f"--edited_image_column' value '{args.edited_image_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(captions): - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ) - return inputs.input_ids - - # Preprocessing the datasets. - train_transforms = transforms.Compose( - [ - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - ] - ) - - def preprocess_images(examples): - original_images = np.concatenate( - [convert_to_np(image, args.resolution) for image in examples[original_image_column]] - ) - edited_images = np.concatenate( - [convert_to_np(image, args.resolution) for image in examples[edited_image_column]] - ) - # We need to ensure that the original and the edited images undergo the same - # augmentation transforms. - images = np.concatenate([original_images, edited_images]) - images = torch.tensor(images) - images = 2 * (images / 255) - 1 - return train_transforms(images) - - def preprocess_train(examples): - # Preprocess images. - preprocessed_images = preprocess_images(examples) - # Since the original and edited images were concatenated before - # applying the transformations, we need to separate them and reshape - # them accordingly. - original_images, edited_images = preprocessed_images.chunk(2) - original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) - edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) - - # Collate the preprocessed images into the `examples`. - examples["original_pixel_values"] = original_images - examples["edited_pixel_values"] = edited_images - - # Preprocess the captions. 
- captions = list(examples[edit_prompt_column]) - examples["input_ids"] = tokenize_captions(captions) - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - original_pixel_values = torch.stack([example["original_pixel_values"] for example in examples]) - original_pixel_values = original_pixel_values.to(memory_format=torch.contiguous_format).float() - edited_pixel_values = torch.stack([example["edited_pixel_values"] for example in examples]) - edited_pixel_values = edited_pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = torch.stack([example["input_ids"] for example in examples]) - return { - "original_pixel_values": original_pixel_values, - "edited_pixel_values": edited_pixel_values, - "input_ids": input_ids, - } - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - # Prepare everything with our `accelerator`. - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - if args.use_ema: - ema_unet.to(accelerator.device) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu and cast to weight_dtype - text_encoder.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("instruct-pix2pix", config=vars(args)) - - # Train! 
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # We want to learn the denoising process w.r.t the edited images which - # are conditioned on the original image (which was edited) and the edit instruction. - # So, first, convert images to latent space. - latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning. - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Get the additional image embedding for conditioning. - # Instead of getting a diagonal Gaussian here, we simply take the mode. - original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() - - # Conditioning dropout to support classifier-free guidance during inference. 
For more details - # check out the section 3.2.1 of the original paper https://arxiv.org/abs/2211.09800. - if args.conditioning_dropout_prob is not None: - random_p = torch.rand(bsz, device=latents.device, generator=generator) - # Sample masks for the edit prompts. - prompt_mask = random_p < 2 * args.conditioning_dropout_prob - prompt_mask = prompt_mask.reshape(bsz, 1, 1) - # Final text conditioning. - null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] - encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) - - # Sample masks for the original images. - image_mask_dtype = original_image_embeds.dtype - image_mask = 1 - ( - (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) - * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) - ) - image_mask = image_mask.reshape(bsz, 1, 1, 1) - # Final image conditioning. - original_image_embeds = image_mask * original_image_embeds - - # Concatenate the `original_image_embeds` with the `noisy_latents`. - concatenated_noisy_latents = torch.cat([noisy_latents, original_image_embeds], dim=1) - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - # Predict the noise residual and compute loss - model_pred = unet(concatenated_noisy_latents, timesteps, encoder_hidden_states).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). 
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_unet.step(unet.parameters()) - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if ( - (args.val_image_url is not None) - and (args.validation_prompt is not None) - and (epoch % args.validation_epochs == 0) - ): - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - if args.use_ema: - # Store the UNet parameters temporarily and load the EMA parameters to perform inference. - ema_unet.store(unet.parameters()) - ema_unet.copy_to(unet.parameters()) - # The models need unwrapping because for compatibility in distributed training mode. 
- pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - vae=accelerator.unwrap_model(vae), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - original_image = download_image(args.val_image_url) - edited_images = [] - with torch.autocast( - str(accelerator.device).replace(":0", ""), enabled=accelerator.mixed_precision == "fp16" - ): - for _ in range(args.num_validation_images): - edited_images.append( - pipeline( - args.validation_prompt, - image=original_image, - num_inference_steps=20, - image_guidance_scale=1.5, - guidance_scale=7, - generator=generator, - ).images[0] - ) - - for tracker in accelerator.trackers: - if tracker.name == "wandb": - wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES) - for edited_image in edited_images: - wandb_table.add_data( - wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt - ) - tracker.log({"validation": wandb_table}) - if args.use_ema: - # Switch back to the original UNet parameters. - ema_unet.restore(unet.parameters()) - - del pipeline - torch.cuda.empty_cache() - - # Create the pipeline using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - if args.use_ema: - ema_unet.copy_to(unet.parameters()) - - pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=accelerator.unwrap_model(vae), - unet=unet, - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - if args.validation_prompt is not None: - edited_images = [] - pipeline = pipeline.to(accelerator.device) - with torch.autocast(str(accelerator.device).replace(":0", "")): - for _ in range(args.num_validation_images): - edited_images.append( - pipeline( - args.validation_prompt, - image=original_image, - num_inference_steps=20, - image_guidance_scale=1.5, - guidance_scale=7, - generator=generator, - ).images[0] - ) - - for tracker in accelerator.trackers: - if tracker.name == "wandb": - wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES) - for edited_image in edited_images: - wandb_table.add_data( - wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt - ) - tracker.log({"test": wandb_table}) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py deleted file mode 100644 index e8d59a582cd7fb08483b6aae9f54af7e4f5fd5a5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py +++ /dev/null @@ -1,717 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from packaging import version -from transformers import CLIPImageProcessor, XLMRobertaTokenizer - -from diffusers.utils import is_accelerate_available, is_accelerate_version - -from ...configuration_utils import FrozenDict -from ...image_processor import VaeImageProcessor -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import deprecate, logging, randn_tensor, replace_example_docstring -from ..pipeline_utils import DiffusionPipeline -from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from . import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import AltDiffusionPipeline - - >>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) - >>> pipe = pipe.to("cuda") - - >>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" - >>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" - >>> image = pipe(prompt).images[0] - ``` -""" - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg -def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): - """ - Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and - Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4 - """ - std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) - std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) - # rescale the results from guidance (fixes overexposure) - noise_pred_rescaled = noise_cfg * (std_text / std_cfg) - # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images - noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg - return noise_cfg - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline with Stable->Alt, CLIPTextModel->RobertaSeriesModelWithTransformation, CLIPTokenizer->XLMRobertaTokenizer, AltDiffusionSafetyChecker->StableDiffusionSafetyChecker -class AltDiffusionPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin): - r""" - Pipeline for text-to-image generation using Alt Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
- - The pipeline also inherits the following loading methods: - - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights - - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights - - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - text_encoder ([`~transformers.RobertaSeriesModelWithTransformation`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer ([`~transformers.XLMRobertaTokenizer`]): - A `XLMRobertaTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPImageProcessor`]): - A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: RobertaSeriesModelWithTransformation, - tokenizer: XLMRobertaTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Alt Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. 
If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_vae_tiling(self): - r""" - Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow - processing larger images. - """ - self.vae.enable_tiling() - - def disable_vae_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_tiling() - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
- """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - def decode_latents(self, latents): - warnings.warn( - ( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead" - ), - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - guidance_rescale: float = 0.0, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. 
- negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - guidance_rescale (`float`, *optional*, defaults to 0.7): - Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are - Flawed](https://arxiv.org/pdf/2305.08891.pdf). Guidance rescale factor should fix overexposure when - using zero terminal SNR. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. 
Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - if do_classifier_free_guidance and guidance_rescale > 0.0: - # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf - noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return AltDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Anonymous-sub/Rerender/sd_model_cfg.py b/spaces/Anonymous-sub/Rerender/sd_model_cfg.py deleted file mode 100644 index ef99471893440401a9069f4c6b2ed8507ce29997..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/sd_model_cfg.py +++ /dev/null @@ -1,10 +0,0 @@ -# The model dict is used for webUI only - -model_dict = { - 'Stable Diffusion 1.5': '', - 'revAnimated_v11': 'models/revAnimated_v11.safetensors', - 'realisticVisionV20_v20': 'models/realisticVisionV20_v20.safetensors', - 'DGSpitzer/Cyberpunk-Anime-Diffusion': 'Cyberpunk-Anime-Diffusion.safetensors', - 'wavymulder/Analog-Diffusion': 'analog-diffusion-1.0.safetensors', - 'Fictiverse/Stable_Diffusion_PaperCut_Model': 'PaperCut_v1.safetensors', -} diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/text/chinese.py 
b/spaces/Artrajz/vits-simple-api/bert_vits2/text/chinese.py deleted file mode 100644 index f826106fd3ec28e1d9a19987c2d290d03564f94a..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/bert_vits2/text/chinese.py +++ /dev/null @@ -1,196 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from bert_vits2.text.symbols import punctuation -from bert_vits2.text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg -from jieba import lcut -lcut("预加载") - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣", "母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5' + "".join(punctuation) + r']+', '', replaced_text) - - return replaced_text - - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip() != ''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) # Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c + v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c + v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c + v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]] + pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - - -def get_bert_feature(text, word2ph): - from bert_vits2.text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - - -if __name__ == '__main__': - from bert_vits2.text import get_bert_feature - - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/Arun1217/mygenaiapp/README.md b/spaces/Arun1217/mygenaiapp/README.md deleted file mode 100644 index cedc20e6ef367f1b8dbcfb4d137578974ce916e2..0000000000000000000000000000000000000000 --- a/spaces/Arun1217/mygenaiapp/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mygenaiapp -emoji: 🌖 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AtomdffAI/wechatgpt4atom/scripts/shutdown.sh b/spaces/AtomdffAI/wechatgpt4atom/scripts/shutdown.sh deleted file mode 100644 index c2bf6b14adcafd46e7278ab3730ab7f78b82c593..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/scripts/shutdown.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -#关闭服务 -cd `dirname $0`/.. -export BASE_DIR=`pwd` -pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'` -if [ -z "$pid" ] ; then - echo "No chatgpt-on-wechat running." - exit -1; -fi - -echo "The chatgpt-on-wechat(${pid}) is running..." - -kill ${pid} - -echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK" diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py deleted file mode 100644 index 24fe59bda44225324928542df3f2ef1745375dfd..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import torch - -from detectron2.utils.file_io import PathManager - -from .torchscript_patch import freeze_training_mode, patch_instances - -__all__ = ["scripting_with_instances", "dump_torchscript_IR"] - - -def scripting_with_instances(model, fields): - """ - Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since - attributes of :class:`Instances` are "dynamically" added in eager mode,it is difficult - for scripting to support it out of the box. This function is made to support scripting - a model that uses :class:`Instances`. It does the following: - - 1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``, - but with all attributes been "static". - The attributes need to be statically declared in the ``fields`` argument. - 2. Register ``new_Instances``, and force scripting compiler to - use it when trying to compile ``Instances``. - - After this function, the process will be reverted. User should be able to script another model - using different fields. - - Example: - Assume that ``Instances`` in the model consist of two attributes named - ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and - :class:`Tensor` respectively during inference. You can call this function like: - :: - fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor} - torchscipt_model = scripting_with_instances(model, fields) - - Note: - It only support models in evaluation mode. - - Args: - model (nn.Module): The input model to be exported by scripting. - fields (Dict[str, type]): Attribute names and corresponding type that - ``Instances`` will use in the model. 
Note that all attributes used in ``Instances`` - need to be added, regardless of whether they are inputs/outputs of the model. - Data type not defined in detectron2 is not supported for now. - - Returns: - torch.jit.ScriptModule: the model in torchscript format - """ - assert ( - not model.training - ), "Currently we only support exporting models in evaluation mode to torchscript" - - with freeze_training_mode(model), patch_instances(fields): - scripted_model = torch.jit.script(model) - return scripted_model - - -# alias for old name -export_torchscript_with_instances = scripting_with_instances - - -def dump_torchscript_IR(model, dir): - """ - Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph, - inlined graph). Useful for debugging. - - Args: - model (TracedModule/ScriptModule/ScriptFUnction): traced or scripted module - dir (str): output directory to dump files. - """ - dir = os.path.expanduser(dir) - PathManager.mkdirs(dir) - - def _get_script_mod(mod): - if isinstance(mod, torch.jit.TracedModule): - return mod._actual_script_module - return mod - - # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code - with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f: - - def get_code(mod): - # Try a few ways to get code using private attributes. - try: - # This contains more information than just `mod.code` - return _get_script_mod(mod)._c.code - except AttributeError: - pass - try: - return mod.code - except AttributeError: - return None - - def dump_code(prefix, mod): - code = get_code(mod) - name = prefix or "root model" - if code is None: - f.write(f"Could not found code for {name} (type={mod.original_name})\n") - f.write("\n") - else: - f.write(f"\nCode for {name}, type={mod.original_name}:\n") - f.write(code) - f.write("\n") - f.write("-" * 80) - - for name, m in mod.named_children(): - dump_code(prefix + "." + name, m) - - if isinstance(model, torch.jit.ScriptFunction): - f.write(get_code(model)) - else: - dump_code("", model) - - def _get_graph(model): - try: - # Recursively dump IR of all modules - return _get_script_mod(model)._c.dump_to_str(True, False, False) - except AttributeError: - return model.graph.str() - - with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f: - f.write(_get_graph(model)) - - # Dump IR of the entire graph (all submodules inlined) - with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f: - f.write(str(model.inlined_graph)) - - if not isinstance(model, torch.jit.ScriptFunction): - # Dump the model structure in pytorch style - with PathManager.open(os.path.join(dir, "model.txt"), "w") as f: - f.write(str(model)) diff --git a/spaces/Benson/text-generation/Examples/2 Y Lnea Apk.md b/spaces/Benson/text-generation/Examples/2 Y Lnea Apk.md deleted file mode 100644 index 7262e238b0133ffb4d6eb74fc2c75f52614ac9cf..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/2 Y Lnea Apk.md +++ /dev/null @@ -1,73 +0,0 @@ -
-

2ndLine apk: What is it and how to use it?

-

If you are looking for a way to get a second phone number on your smartphone, tablet, or computer, you may have heard of the 2ndLine apk. But what exactly is it, and how can you use it? In this article, we will explain everything you need to know about the 2ndLine apk, including its features, benefits, and installation process.

-

2ndLine apk


Download >> https://bltlly.com/2v6KuJ



-

Introduction

-

What is the 2ndLine apk?

-

The 2ndLine apk is an Android application that gives you a second United States or Canada phone number that works on your smartphone, tablet, or computer. It is a full-featured business phone system designed for mobile professionals, freelancers, and entrepreneurs who need a separate number for work or personal use. You can use the 2ndLine apk to make and receive calls, send and receive text messages, access voicemail, customize your caller ID, and more.

-

Why do you need the 2ndLine apk?

-

There are many reasons why you might need the 2ndLine apk. Here are some of them:

-
    -
  • You want to keep your personal number private from clients, customers, or strangers.
  • -
  • You want a different number for different purposes, such as work, dating, online shopping, etc.
  • -
  • You want to save money on your phone bill by using a free or low-cost second number.
  • -
  • You want a backup number in case your main number is unavailable or lost.
  • -
  • You want to expand your business presence by having a local number in another area code or country.
  • -
-

How does the 2ndLine apk work?

- -

Features of the 2ndLine apk

-

Free local number

-

With the 2ndLine apk, you can get a free local number in any United States or Canada area code. You can choose from the millions of numbers available in the app. You can also change your number at any time if you wish.

-

-

Unlimited calls and text messages

-

With the 2ndLine apk, you can enjoy unlimited calls and text messages to any United States or Canada number. You don't have to worry about minutes, messages, or charges. You can also send pictures, videos, emojis, stickers, and voice notes with your texts.

-

Customizable voicemail and caller ID

-

With the 2ndLine apk, you can customize the voicemail greeting and caller ID for your second number. You can record your own message or use one of the pre-recorded ones. You can also set the caller ID name to whatever you want, such as your business name or a nickname.

-

Call forwarding and conference calls

-

With the 2ndLine apk, you can forward your calls to another number or device when you are busy or unavailable. You can also make conference calls with up to five people at the same time. You can mute, hold, or transfer calls as you wish.

-

Low-cost international calls

-

With the 2ndLine apk, you can make cheap international calls to more than 200 countries and regions. You can pay as you go at affordable rates, or buy a subscription plan for unlimited calls to selected destinations. You can also earn free credits by watching ads or completing offers.

-

How to download and install the 2ndLine apk

-

Requirements and compatibility

-

To use the 2ndLine apk, you need an Android device running Android 4.4 or higher. You also need a stable Internet connection, either Wi-Fi or mobile data. You don't need to root your device or have any special permissions to use the app.

-

Steps to download and install the 2ndLine apk

- -
    -
  1. Go to the official 2ndLine website and click the download button. Alternatively, you can search for 2ndLine on the Google Play Store and download it from there.
  2. -
  3. Once the download is complete, open the file and tap Install. You may need to enable unknown sources in your settings if you downloaded the file from the website.
  4. -
  5. Wait for the installation to finish, then open the app. You will see a welcome screen with some instructions on how to use the app.
  6. -
  7. Tap Continue and accept the terms and conditions. You will be asked to allow the permissions the app needs to work properly.
  8. -
  9. Tap Allow and proceed to the next step.
  10. -
-

How to activate your second phone number

-

Here are the steps to activate your second phone number with the 2ndLine apk:

-
    -
  1. After granting the permissions, you will see a screen where you can choose your second phone number. You can enter an area code or a city name, or browse the list of available numbers.
  2. -
  3. Select a number you like and tap Continue. You will then see a confirmation screen with your chosen number and some information about it.
  4. -
  5. Tap Confirm and activate your number. You will then receive a verification code via text message on your main number.
  6. -
  7. Enter the code in the app and verify your number. You will see a congratulations screen with some tips on how to use your second number.
  8. -
  9. Tap Start using 2ndLine and enjoy your new phone number.
  10. -
-

Conclusion

-

Summary of the main points

- -

Call to action

-

If you are interested in trying the 2ndLine apk, you can download it from the official website or the Google Play Store. You can also visit its FAQ page for more information about the app. Don't miss this opportunity to get a second phone number for free or at a low cost. Download the 2ndLine apk today and enjoy its benefits.

-

Frequently asked questions

-
    -
  • What is 2ndLine?
    -2ndLine is an Android app that gives you a second United States or Canada phone number that works on your smartphone, tablet, or computer.
  • -
  • How much does 2ndLine cost?
    -You can get a free local number with 2ndLine, or pay for a premium number with more features. You can also make cheap international calls with 2ndLine, or buy a subscription plan for unlimited calls to selected destinations.
  • -
  • How can I get a second phone number with 2ndLine?
    -Just download the app, choose a phone number from the available list, and activate it. You can then use your second number as if it were your regular phone number.
  • -
  • Can I use 2ndLine on multiple devices?
    -Yes, you can use 2ndLine on multiple devices with the same account. You just need to sign in with your email address and password on each device.
  • -
  • Can I port my existing number to 2ndLine?
    -Yes, you can port your existing number to 2ndLine if it is eligible. You just need to contact the support team and provide some information about your number and carrier. There may be a fee involved in the porting process.
  • -
  • Is 2ndLine safe and reliable?
    -Yes, 2ndLine is safe and reliable. It uses encryption and authentication to protect your data and privacy. It also uses high-quality voice and text services to ensure clear and smooth communication.
  • -

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/9 Cu Sinif Umumi Tarix Testleri.md b/spaces/Benson/text-generation/Examples/9 Cu Sinif Umumi Tarix Testleri.md deleted file mode 100644 index 0aaff6193fdab1ea75c3dae7f840da9c756c82d9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/9 Cu Sinif Umumi Tarix Testleri.md +++ /dev/null @@ -1,107 +0,0 @@ -
-

9 cu sinif umumi tarix testleri: How to prepare for and pass your exams

-

If you are a student in Azerbaijan preparing for 9 cu sinif umumi tarix testleri (general history exams for the ninth grade), you may feel anxious and overwhelmed. These exams are not easy; they require a lot of knowledge, skill, and practice. According to the Ministry of Education, more than 100,000 students take these exams every year, but only about 60% of them pass. The exams cover a wide range of history topics from ancient times to modern times, such as civilizations, empires, religions, wars, revolutions, and movements. The questions are mostly multiple-choice or short-answer type.

-

9 cu sinif umumi tarix testleri


Download Zip ✵✵✵ https://bltlly.com/2v6LJv



-

But don't worry; this article is here to help you. In it, you will find useful tips and resources that will help you study and practice for these exams effectively. You will also learn how to pass them with confidence and ease. By following this guide, you will be able to improve your history knowledge, skills, and performance. So let's get started!

-

How to study for 9 cu sinif umumi tarix testleri

-

The first step in preparing for these exams is to study well. Studying well means not only memorizing facts and dates, but also understanding concepts and connections. Here are some general study tips you should follow:

- -

Studying well also means using online resources that can help you learn and review the history topics covered in these exams. Here are some online resources you should use:

-
    -
  • e-derslik. This is the Ministry of Education's official online platform, which provides free access to textbooks, practice tests, and video lessons for all subjects, including history. You can find the e-derslik for 9 cu sinif umumi tarix testleri here.
  • -
  • oxuyan. This is an online platform that offers interactive courses, quizzes, and videos for various subjects, including history. You can find the oxuyan course for 9 cu sinif umumi tarix testleri here.
  • -
  • star test book. This is an online platform that offers comprehensive and up-to-date information on the history topics covered in these exams. You can find the star test book for 9 cu sinif umumi tarix testleri. • History.com. This is a website that offers articles, videos, podcasts, and games on various history topics, such as ancient civilizations, world wars, historical figures, and cultural movements. It is written by experts and journalists, with sources and references. You can use this website to supplement your knowledge of and interest in history.
  • -
-

Studying well also means reviewing past exams and analyzing the feedback and answers. This will help you become familiar with the format, the difficulty level, and the types of questions that appear on these exams. It will also help you identify your strengths and weaknesses, and improve your accuracy and speed. Here are some ways to review past exams:

-

- -

How to practice for 9 cu sinif umumi tarix testleri

- -
    -
  • Use online platforms to take mock exams. There are many online platforms that offer mock exams for 9 cu sinif umumi tarix testleri. These platforms simulate real exam conditions and provide instant feedback and scores. Here are some online platforms you should use:
  • -
    -
  • Manage your time, avoid distractions, and cope with stress during practice exams. Taking mock exams can help you improve your time management, concentration, and stress management skills. Here are some tips to help you with these skills:
  • -
      -
    • Manage your time. Allocate enough time to each question and section, and keep track of your progress. Don't spend too much time on any single question, and don't skip any questions. If you are stuck, move on to the next question and come back later.
    • -
    • Avoid distractions. Choose a quiet, comfortable place to take your practice exams. Turn off your phone and any other devices that might distract you. Focus on the questions and your answers, and don't let your mind wander.
    • -
    • Cope with stress. Taking practice exams can be stressful, especially if you don't feel confident or prepared enough. To cope with stress, try to relax your body and mind before and during practice exams. Breathe deeply, stretch your muscles, drink water, and think positively.
    • -
    - -
      -
    • Memrise. This is an app that helps you memorize history-related facts, dates, names, and definitions using fun, interactive methods such as images, videos, mnemonics, and quizzes.
    • -
    • BrainPOP. This is an app that helps you learn and review history topics using animated videos, interactive quizzes, games, and activities.
    • -
    • Civilization. This is a game that helps you explore and understand history by building and leading your own civilization from ancient times to the modern era.
    • -
    -
  • Join online communities, forums, and groups where you can discuss history topics, ask questions, and share experiences with other students. There are many online platforms where you can connect with other students who are preparing for these exams or who are interested in history. Here are some online platforms you should join:
  • -
      -
    • Reddit. This is a website where you can find various history-related subreddits (communities), such as r/history, r/AskHistorians, r/HistoryMemes, etc.
    • -
    • Facebook. This is a website where you can find various history-related groups, such as History Lovers Club, History Buffs, History Matters, etc.
    • -
    • Discord. This is a website where you can find various history-related servers (chat rooms), such as History Hub, History Hangout, History Nerds, etc.
    • -
    -
-

How to ace 9 cu sinif umumi tarix testleri

-

The third step in preparing for these exams is to ace them. Acing them means not only passing, but also scoring high and showing your excellence. Here are some tips to help you ace these exams:

-