diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Pagemaker 7.0 Free Download Full Version for Windows XP - Step by Step Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Pagemaker 7.0 Free Download Full Version for Windows XP - Step by Step Guide.md
deleted file mode 100644
index 3a8f3e1fa14522c28cfb470700d1bc02703a6ae9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Pagemaker 7.0 Free Download Full Version for Windows XP - Step by Step Guide.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
How to Download Adobe Pagemaker 7.0 for Free on Windows XP
-
Adobe Pagemaker 7.0 is a popular desktop publishing software that allows you to create professional-looking documents, such as newsletters, brochures, flyers, and reports. It is compatible with Windows XP and other older versions of Windows.
-
adobe pagemaker 7.0 free download full version for windows xp
Click on the "Download Options" button and choose the "ZIP" file format.
-
Save the ZIP file to your computer and extract it using a program like WinRAR or 7-Zip (a small script alternative is shown after these steps).
-
Open the extracted folder and double-click on the "Setup.exe" file to start the installation process.
-
Follow the instructions on the screen and enter the serial number provided in the "Serial.txt" file when prompted.
-
After the installation is complete, you can launch Adobe Pagemaker 7.0 from your Start menu or desktop shortcut.
-
-
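If you prefer to script the extraction step instead of using the WinRAR or 7-Zip interface, Python's standard zipfile module can unpack the archive. This is only a convenience sketch, and the file and folder names below are placeholders for whatever your downloaded ZIP is actually called.

```python
import zipfile

archive = "pagemaker70.zip"  # placeholder: use the actual name of the downloaded ZIP
with zipfile.ZipFile(archive) as zf:
    zf.extractall("pagemaker70")  # unpack into a folder next to the script
    print(f"Extracted {len(zf.namelist())} files")
```

After extraction, run Setup.exe from the output folder exactly as described in the steps above.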
Congratulations! You have successfully downloaded and installed Adobe Pagemaker 7.0 for free on Windows XP. Enjoy creating stunning documents with this powerful software.
-
-
Why Use Adobe Pagemaker 7.0?
-
Adobe Pagemaker 7.0 is a versatile and easy-to-use software that offers many features and benefits for desktop publishing. Some of the reasons why you might want to use Adobe Pagemaker 7.0 are:
-
-
It supports a wide range of file formats, such as PDF, EPS, TIFF, JPEG, and more.
-
It allows you to import and edit text and graphics from other applications, such as Microsoft Word, Excel, and Photoshop.
-
It provides templates and wizards to help you create various types of documents quickly and easily.
-
It enables you to customize your documents with fonts, colors, styles, borders, backgrounds, and more.
-
It lets you print your documents with high quality and accuracy, or export them to the web or email.
-
-
With Adobe Pagemaker 7.0, you can unleash your creativity and produce professional-looking documents that suit your needs and preferences.
-
-
-
How to Learn Adobe Pagemaker 7.0?
-
If you are new to Adobe Pagemaker 7.0 or want to improve your skills, there are many resources available online that can help you learn how to use this software effectively. Some of the resources are:
The online courses offered by Udemy (https://www.udemy.com/topic/adobe-pagemaker/), which teach you the basics and advanced features of Adobe Pagemaker 7.0 through video lectures and exercises.
By using these resources, you can learn Adobe Pagemaker 7.0 at your own pace and convenience, and become a proficient desktop publisher in no time.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Callofdutyfinesthourpcgamefullhighlycompressedtorrent The Ultimate Guide to Playing this Legendary Call of Duty Game on PC.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Callofdutyfinesthourpcgamefullhighlycompressedtorrent The Ultimate Guide to Playing this Legendary Call of Duty Game on PC.md
deleted file mode 100644
index 023af6bf68b92a3f4f5f9a8280e836995187def9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Callofdutyfinesthourpcgamefullhighlycompressedtorrent The Ultimate Guide to Playing this Legendary Call of Duty Game on PC.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
Call of Duty: Finest Hour - A Classic Shooter Game for PC
-
Are you a fan of shooter games? Do you love to relive the epic battles of World War II? If yes, then you should definitely check out Call of Duty: Finest Hour, one of the best games in the Call of Duty franchise. In this article, I will tell you everything you need to know about this amazing game, how to download it for PC using a torrent file, and why you should go for a highly compressed version of it.
Call of Duty: Finest Hour is a first-person shooter game that was released in 2004 for PlayStation 2, Xbox, and GameCube. It is a spin-off of the original Call of Duty game that was released in 2003 for PC. The game lets you experience three different campaigns from the perspectives of American, British, and Soviet soldiers during World War II. You will fight in various locations such as North Africa, Russia, and Germany, using authentic weapons and vehicles from that era.
-
But how can you play this game on your PC? Well, there is a simple way to do that. You just need to download a torrent file that contains the game data and use a torrent client such as uTorrent or BitTorrent to download it. A torrent file is a small file that contains information about the files and folders that are shared by other users over a peer-to-peer network. By using a torrent file, you can download large files faster and more efficiently.
-
However, there is one problem. The original size of Call of Duty: Finest Hour is about 4 GB, which means it will take a lot of time and space to download and install on your PC. That's why I recommend downloading a highly compressed version, which reduces the size of the game without compromising its quality. A highly compressed version of Call of Duty: Finest Hour is only about 1 GB, which means you can download it in minutes and save a lot of disk space.
-
Gameplay and Features
-
Now that you have downloaded the game, let's see what it has to offer. Call of Duty: Finest Hour has three main modes: Campaign, Multiplayer, and Bonus. In Campaign mode, you can play through 19 missions that span across three campaigns: Eastern Front (Soviet), Western Front (American), and North African Campaign (British). You can choose from four difficulty levels: Greenhorn, Regular, Hardened, and Extreme. Each mission has different objectives such as destroying enemy tanks, rescuing prisoners, or defending a position.
-
In Multiplayer mode, you can play with up to 16 players online or offline using split-screen or system link. You can choose from six modes: Deathmatch, Team Deathmatch, Capture the Flag, Search and Destroy, Headquarters, and Behind Enemy Lines. You can also customize your character's appearance, weapons, perks, and skills.
-
In Bonus mode, you can unlock extra content such as concept art, interviews, cheats, and historical footage by collecting medals in Campaign mode. You can also play two bonus missions: Operation Saturn (Soviet) and The Flag Must Fall (British).
-
As for the features, Call of Duty: Finest Hour boasts impressive graphics and sound effects that immerse you in the war atmosphere. You can see realistic explosions, smoke effects, shadows, lighting effects, and weather effects. You can also hear authentic sounds such as gunshots, explosions, voices, and music. The game also features a dynamic soundtrack that changes according to your actions and situations.
-
Another feature that makes Call of Duty: Finest Hour stand out is its variety of weapons and vehicles. You can use over 30 different weapons such as rifles, machine guns, shotguns, snipers, pistols, grenades, and rocket launchers. You can also drive or ride over 10 different vehicles such as tanks, jeeps, trucks, motorcycles, and planes.
-
download call of duty finest hour pc game highly compressed
-call of duty finest hour pc game full version torrent link
-how to install call of duty finest hour pc game in low size
-call of duty finest hour pc game free download full cracked
-call of duty finest hour pc game system requirements and features
-call of duty finest hour pc game gameplay and review
-call of duty finest hour pc game cheats and trainer
-call of duty finest hour pc game mods and patches
-call of duty finest hour pc game online multiplayer mode
-call of duty finest hour pc game best settings and tips
-call of duty finest hour pc game iso file download
-call of duty finest hour pc game rar password unlocker
-call of duty finest hour pc game direct download link
-call of duty finest hour pc game highly compressed 100mb
-call of duty finest hour pc game no survey no password
-call of duty finest hour pc game single link download
-call of duty finest hour pc game repack by fitgirl
-call of duty finest hour pc game skidrow crack only
-call of duty finest hour pc game error fix and solution
-call of duty finest hour pc game comparison with ps2 version
-call of duty finest hour pc game controller support and configuration
-call of duty finest hour pc game save file location and backup
-call of duty finest hour pc game soundtracks and wallpapers
-call of duty finest hour pc game bonus missions and secrets
-call of duty finest hour pc game all weapons and vehicles
-call of duty finest hour pc game walkthrough and guide
-call of duty finest hour pc game speedrun and record
-call of duty finest hour pc game graphics mod and enhancement
-call of duty finest hour pc game windows 10 compatibility fix
-call of duty finest hour pc game keyboard and mouse controls
-call of duty finest hour pc game screenshots and videos
-call of duty finest hour pc game download for mac and linux
-call of duty finest hour pc game alternative download sites
-call of duty finest hour pc game history and development
-call of duty finest hour pc game awards and ratings
-call of duty finest hour pc game trivia and facts
-call of duty finest hour pc game fan art and cosplay
-call of duty finest hour pc game merchandise and collectibles
-call of duty finest hour pc game forum and community
-call of duty finest hour pc game news and updates
-buy call of duty finest hour pc game original cd key cheap
-sell call of duty finest hour pc game used copy online
-trade call of duty finest hour pc game with other games
-rent call of duty finest hour pc game for a limited time
-stream call of duty finest hour pc game on twitch or youtube
-watch call of duty finest hour pc game movie adaptation or documentary
-read call of duty finest hour pc game novelization or comic book
-play call of duty finest hour pc game with friends or strangers
-enjoy call of duty finest hour pc game as a classic shooter
-
Tips and Tricks
-
Now that you know what Call of Duty: Finest Hour is all about, let me give you some tips and tricks on how to install and run it on your PC, how to optimize it for better performance, and how to use cheats and hacks in it.
-
How to install and run Call of Duty: Finest Hour on PC?
-
To install and run Call of Duty: Finest Hour on your PC, you need to follow these steps:
-
-
Download a torrent file that contains Call of Duty: Finest Hour from a reliable source such as CompressedLab.
-
Download and install a torrent client such as uTorrent or BitTorrent on your PC.
-
Open the torrent file with your torrent client and select where you want to save the game data.
-
Wait for the download to finish.
-
Extract the game data using WinRAR or 7-Zip.
-
Open the extracted folder and run Setup.exe.
-
Follow the instructions on screen to install the game on your PC.
-
Run CODFH.exe from your desktop or start menu to launch the game.
-
-
How to optimize Call of Duty: Finest Hour for better performance?
-
To optimize Call of Duty: Finest Hour for better performance on your PC, you need to adjust some settings in the game options menu. Here are some suggestions:
-
-
Set your screen resolution according to your monitor size.
-
Set your graphics quality according to your PC specifications.
-
Turn off anti-aliasing, anisotropic filtering, and dynamic shadows if they cause lag or stuttering.
-
Turn on subtitles if you have trouble hearing or understanding dialogues.
-
Adjust your mouse sensitivity according to your preference.
-
Adjust your audio volume according to your environment.
-
-
How to use cheats and hacks in Call of Duty: Finest Hour?
-
To use cheats and hacks in Call of Duty: Finest Hour, you need to enter some codes in the cheat menu or use some third-party tools. Here are some examples:
-
-
| Cheat Code | Effect |
| --- | --- |
| BULLETZAP | Bullets ricochet off walls |
| DAYNIGHT | Cycle through day/night settings |
| GODMODE | Invincibility |
| MENUSCREEN | Show menu screen during gameplay |
| NOWEAPONS | No weapons except knife |
| SUPERHEAR | Hear enemies from far away |
| TIMELIMIT | No time limit in missions |
| ZOOM | Better zoom with sniper rifle |
-
-
To enter these codes, you need to go to Options > Game Options > Cheat Codes > Enter Cheat Code. You can also unlock some cheats by collecting medals in Campaign mode.
-
If you want to use some hacks such as aimbot, wallhack, or speedhack, you need to download and install some third-party tools such as IWantCheats or HackProvider. These tools can give you an unfair advantage over other players or enemies, but they can also get you banned from online servers or damage your PC. Use them at your own risk.
-
Conclusion
-
Call of Duty: Finest Hour is a classic shooter game that lets you experience the thrill and horror of World War II from different perspectives. You can download it for PC using a torrent file and enjoy its amazing gameplay and features. You can also optimize it for better performance and use cheats and hacks to spice up your experience.
-
So what are you waiting for? Download Call of Duty: Finest Hour today and join the fight for freedom and glory! You won't regret it!
-
Thank you for reading this article. I hope you found it helpful and informative. If you have any questions or feedback, please leave a comment below. I would love to hear from you.
-
FAQs
-
Here are some frequently asked questions about Call of Duty: Finest Hour:
-
-
What are some of the best alternatives to Call of Duty: Finest Hour for PC?
-
Some of the best alternatives to Call of Duty: Finest Hour for PC are:
-
-
Call of Duty 2 - The sequel to the original Call of Duty that features improved graphics, physics, and AI.
-
Medal of Honor: Allied Assault - A game that inspired Call of Duty that focuses on the Allied invasion of Europe.
-
Battlefield 1942 - A game that allows you to fight in large-scale battles with vehicles and aircraft.
-
Brothers in Arms: Road to Hill 30 - A game that emphasizes squad tactics and realism.
-
Wolfenstein: Enemy Territory - A game that offers a free online multiplayer mode with classes and objectives.
-
-
Is Call of Duty: Finest Hour compatible with Windows 10?
-
Yes, Call of Duty: Finest Hour is compatible with Windows 10. However, you may need to run it in compatibility mode or use some patches or fixes to make it work properly. You can find some solutions online or on forums such as Steam Community.
-
How long is the gameplay of Call of Duty: Finest Hour?
-
The gameplay of Call of Duty: Finest Hour depends on your skill level, difficulty level, and mode. On average, it takes about 8 hours to complete the Campaign mode, and about 10 hours to unlock all the Bonus content. The Multiplayer mode can offer unlimited hours of gameplay depending on your preference.
-
How to play Call of Duty: Finest Hour online with other players?
-
To play Call of Duty: Finest Hour online with other players, you need to have a valid CD key and an internet connection. You can either join an existing server or host your own server using the game options menu. You can also use some third-party tools such as GameRanger or Xfire to find and join online games.
-
Is Call of Duty: Finest Hour safe to download from torrent sites?
-
Downloading Call of Duty: Finest Hour from torrent sites is not recommended, as it may contain viruses, malware, or spyware that can harm your PC or steal your personal information. It may also violate the copyright laws and get you in trouble with the authorities. It is better to buy the game from a legitimate source such as Steam.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe After Effects Cc 2015 Crack Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe After Effects Cc 2015 Crack Torrent.md
deleted file mode 100644
index f3a000344886dce9f545e079397402e857259897..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe After Effects Cc 2015 Crack Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ajab Gazabb Love Full !!HOT!! Song Hd 720p.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ajab Gazabb Love Full !!HOT!! Song Hd 720p.md
deleted file mode 100644
index 8365fe101c6698074bc6f82823f82ac033326515..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Ajab Gazabb Love Full !!HOT!! Song Hd 720p.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Pop, Hindi, Hindi, Bollywood, Bollywood Hot Top Popular Bollywood Movie Songs Of All Time. Check The Best Songs Of All Time. So, I was also going to the same place! Rajini delivered a gangster-like performance that made her one of the most popular villains in Indian cinema. Why are you going to the same place everyday?. Start Your Free Trial. You could also find it at the networked computing store. Here are the 25 Best Bollywood Songs (with English Lyrics) of All Time!. The song is composed by the duo Afzal Brothers with lyrics written by Neeraj Sridhar. The song is sung by Arijit Singh, singer for Airtel. The story is about a love triangle between an innocent, handsome boy and a married woman. List of popular romantic Hindi movie songs Bollywood songs of all time. That's when I saw him standing there with two people. As it is not yet on any platform, I am doing a hard copy compilation of the Hindi film songs with English lyrics in the best possible way. The song will be sung by Arijit Singh and it will be a duet with Neha Kakkar. Music by Arijit Singh/ Akshay Kumar/ D. Atif Aslam, Lyrics by Kausar Munir, Sameer. 11. Today I'm going to compose about the biggest aaj sutrabhoomi thay me, jo bhi aaj sutrabhoomi thay me. The songs and the music video. rages/House of Lords - Jashanmer Jaisaa or Puriyaan Makaan (Saath - Yeh Jawaani) Bollywood Song. 45 Naseeb Mein Phirse. When I turned 18 years old, I was busy singing love songs. My songs have not only always been about love and romance but my music is also timeless. Watch now or download MP3. Welcome to the dream world where you are dating a prince with magical ways and getting married to a fairy who flies in the sky and is a princess. Quick Look: Bollywood Music Box Bollywood Radio Hindi Songs Ter Bijl Shamaakriyo Bollywood Hori Ki Music Koyi Kangal Bollywood C. So it is not only about buying a property in your name but also you have to invest in it with your heart and soul. You could find it at the market place. Songs from the same genre, see below. Many 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Din5482splinestandardfiletypepdf19.md b/spaces/1gistliPinn/ChatGPT4/Examples/Din5482splinestandardfiletypepdf19.md
deleted file mode 100644
index ef78f50454a0462903bb1b240d71b4a3e4711af7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Din5482splinestandardfiletypepdf19.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-. Level code B22[/url]Just Cause 2 Download patch [url= flissinneple . Ru_Mods_Just_Cause_2_Redux_v_1.2.0.00_M_Pack_v1_5.
-RuTracker.org » Other simulators » Download torrent Just Cause 2 v.1.4.0.0 [RUS].
-Just Cause 2 is the sequel to the popular game about how .
-RuTracker.org » Other simulators » Download torrent Just Cause 2 v 1.4.0.0 [RUS] + DLC [by .
-RuTracker.org » Simulations » Download torrent Just Cause 2 v.1.4.0.0 [RUS] + DLC [by .
-Just Cause 2 is the sequel to the popular game about how . 8a78ff9644
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Take A Walk Passion Pit.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Take A Walk Passion Pit.md
deleted file mode 100644
index a8a49822ddd3ffacd7ad05d15c1a721a0b502219..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Take A Walk Passion Pit.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Carte Satellite Twinhan.md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Carte Satellite Twinhan.md
deleted file mode 100644
index 08495821bc080b3cd99c04cc3efb4f21d94c79d6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Carte Satellite Twinhan.md
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
Driver carte satellite twinhan: A Guide for Satellite TV Lovers
-
If you love watching satellite TV on your computer, you may need a driver carte satellite twinhan to make it possible. A driver carte satellite twinhan is software that allows your computer to communicate with your satellite TV tuner card and receive satellite signals. Without a driver carte satellite twinhan, your satellite TV tuner card may not work properly or at all.
-
In this article, we will tell you what a driver carte satellite twinhan is, where to find it, how to install it, and how to use it. We will also give you some tips and tricks on how to troubleshoot some common issues that may occur with your driver carte satellite twinhan. Read on to learn more.
What Is Driver carte satellite twinhan?
-
A driver carte satellite twinhan is software that enables your computer to recognize and use your satellite TV tuner card. A satellite TV tuner card is a device that you can insert into your computer's PCI or USB slot and connect to an antenna, cable, or dish to receive TV signals from satellites orbiting the Earth. A satellite TV tuner card can provide you with hundreds of channels from different countries and regions.
-
A driver carte satellite twinhan is essential for your satellite TV tuner card to work properly on your computer. It acts as a bridge between your hardware and your operating system, allowing them to exchange data and commands. It also provides you with some features and functions that you can use to control and customize your TV viewing experience.
-
There are different types of driver carte satellite twinhan for different models and brands of satellite TV tuner cards. You need to find the one that matches your specific device and operating system. Otherwise, you may encounter some compatibility or performance issues.
-
Where to Find Driver carte satellite twinhan
-
The best place to find a driver carte satellite twinhan is from the official website of your satellite TV tuner card manufacturer or vendor. There, you can find the latest and most suitable driver carte satellite twinhan for your specific model and operating system. You can also get some support and guidance from the official website if you have any questions or problems.
-
Alternatively, you can also find a driver carte satellite twinhan from some other sources online, such as software download sites or torrent sites. However, you need to be careful and choose wisely. Some of these sources may not be trustworthy or reliable. You may end up with a corrupted or infected file that does not work or even harm your computer.
-
How to Install Driver carte satellite twinhan
-
Once you have found the driver carte satellite twinhan file, you need to install it on your computer. The installation process may vary depending on the source and format of the file, but generally, you can follow these steps:
-
-
Locate the driver carte satellite twinhan file on your computer and double-click on it.
-
Follow the instructions on the screen to complete the installation process.
-
Restart your computer if prompted.
-
Connect your satellite TV tuner card to your computer and check if it works properly.
-
-
Note: You may need to uninstall any previous or incompatible drivers before installing the new driver carte satellite twinhan.
-
How to Use Driver carte satellite twinhan
-
After installing the driver carte satellite twinhan, you can use it to watch and record satellite TV on your computer. You will need software that can access and control your satellite TV tuner card, such as Media Portal, which is a free and open-source media center application that supports various TV tuners and formats.
-
-
To use Media Portal with your driver carte satellite twinhan, you need to follow these steps:
-
-
Download and install Media Portal from its official website.
-
Launch Media Portal and go to Settings > Television > TV Servers.
-
Select your satellite TV tuner card from the list and click on Scan for Channels.
-
Wait for Media Portal to scan and find all the available channels from your satellite signal.
-
Go back to the main menu and select Television > My TV > Watch TV.
-
Choose a channel from the list and enjoy watching satellite TV on your computer.
-
-
Note: You may need to adjust some settings and preferences according to your needs and preferences, such as language, subtitles, aspect ratio, etc.
-
How to Troubleshoot Driver carte satellite twinhan Problems
-
Sometimes, you may encounter some problems or errors with your driver carte satellite twinhan that prevent you from using it properly or at all. Here are some common problems and solutions that may help you fix them:
-
-
If you get an error message that says "Driver not found" or "Device not detected", you may need to check your connection, installation, or compatibility of your driver carte satellite twinhan. Make sure your satellite TV tuner card is properly connected to your computer, your driver carte satellite twinhan is correctly installed and updated, and your operating system is compatible with your driver carte satellite twinhan.
-
If you get an error message that says "No signal" or "Weak signal", you may need to check your antenna, cable, or dish settings. Make sure they are properly aligned, connected, and configured to receive the best possible signal from your satellite provider.
-
If you get an error message that says "No channels found" or "Channel not available", you may need to check
-
What Are the Benefits of Driver carte satellite twinhan
-
Using a driver carte satellite twinhan can bring you many benefits, such as:
-
-
You can enjoy watching satellite TV on your computer with high-quality video and audio.
-
You can access hundreds of channels from different countries and regions, including news, sports, movies, music, documentaries, etc.
-
You can record your favorite programs and watch them later or share them with others.
-
You can customize your TV viewing experience with various settings and options, such as subtitles, aspect ratio, parental control, etc.
-
You can save money and space by using your computer as a TV instead of buying a separate TV set and receiver.
-
-
What Are the Drawbacks of Driver carte satellite twinhan
-
Using a driver carte satellite twinhan can also have some drawbacks, such as:
-
-
You may need to pay for a subscription or a license to use some satellite TV services or software.
-
You may need to buy a compatible satellite TV tuner card and an antenna, cable, or dish to receive satellite signals.
-
You may need to update your driver carte satellite twinhan regularly to keep up with the changes and improvements of your satellite TV tuner card and software.
-
You may experience some technical issues or errors with your driver carte satellite twinhan that may affect your TV viewing experience.
-
-
How to Choose the Best Driver carte satellite twinhan
-
There are many factors that you need to consider when choosing the best driver carte satellite twinhan for your needs and preferences, such as:
-
-
The compatibility of your driver carte satellite twinhan with your satellite TV tuner card and operating system.
-
The features and functions of your driver carte satellite twinhan that suit your TV viewing needs and preferences.
-
The reliability and security of your driver carte satellite twinhan that protect your computer and data from viruses and malware.
-
The availability and accessibility of your driver carte satellite twinhan that provide you with easy download, installation, update, and support.
-
The reputation and reviews of your driver carte satellite twinhan that reflect its quality and performance.
-
-
Conclusion
-
A driver carte satellite twinhan is software that enables your computer to use your satellite TV tuner card and watch satellite TV on your computer. It is important to find, install, use, and update the right driver carte satellite twinhan for your specific device and operating system. A driver carte satellite twinhan can provide you with many benefits, such as access to hundreds of channels, high-quality video and audio, recording and customization features, and more. However, it can also have some drawbacks, such as compatibility or performance issues, technical errors, or security risks. Therefore, you need to be careful and choose wisely when using a driver carte satellite twinhan.

You can also consider some alternatives, such as standalone receivers, online streaming services, or mobile apps, although these have their own advantages and disadvantages as well. In conclusion, a driver carte satellite twinhan is a useful tool for satellite TV lovers who want to enjoy watching satellite TV on their computer, but it requires some knowledge and care to use properly and safely.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/DRAGON BALL FighterZ NSP XCI DLC How to Download and Play on Egg NS Emulator.md b/spaces/1phancelerku/anime-remove-background/DRAGON BALL FighterZ NSP XCI DLC How to Download and Play on Egg NS Emulator.md
deleted file mode 100644
index 887d967c50182a0c0e8e080d39ff4bf5a2ce1cc7..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/DRAGON BALL FighterZ NSP XCI DLC How to Download and Play on Egg NS Emulator.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
Dragon Ball FighterZ NSP Download: How to Play the Best Dragon Ball Game on Nintendo Switch
-
If you are a fan of Dragon Ball and fighting games, you probably have heard of Dragon Ball FighterZ, the latest and greatest game based on the iconic anime series. But did you know that you can play it on your Nintendo Switch, even if it is not available on the official eShop or in your region? In this article, we will show you how to download and install Dragon Ball FighterZ NSP file on your Switch using a custom firmware, so you can enjoy this amazing game on the go.
Dragon Ball FighterZ is a 3v3 fighting game developed by Arc System Works, the makers of Blazblue and Guilty Gear. It features a roster of 24 characters from the Dragon Ball universe, each with their own unique moves and abilities. You can choose your favorite fighters and form your own team, or let the game pick them for you randomly. You can also switch between them during battle, or call them for assist attacks.
-
One of the most impressive aspects of Dragon Ball FighterZ is its visual style, which replicates the anime perfectly. The game uses 3D models that look like 2D sprites, with cel-shaded graphics and dynamic camera angles. The animations are fluid and faithful to the source material, and the special effects are spectacular. The game also features original voice acting from both Japanese and English cast members, as well as an epic soundtrack.
-
Another highlight of Dragon Ball FighterZ is its combat system, which is simple but deep. The game uses four buttons for light, medium, heavy, and special attacks, as well as universal commands for super moves, homing dashes, vanishes, and more. The game is easy to learn, but hard to master, as it requires timing, strategy, and teamwork. The game also has various modes for different types of players, such as story mode, arcade mode, online mode, training mode, and more.
-
What is NSP and why do you need it?
-
NSP stands for Nintendo Switch Package and it is a file format for digital games that can be installed on your Switch using a custom firmware. A custom firmware is a modified version of the official Switch software that
allows you to run homebrew apps, emulators, backups, and more. NSP files can be downloaded from various sources on the internet and installed on your Switch using a NSP installer app.
-
dragon ball fighterz nsp xci rom download
-dragon ball fighterz nsp update download
-dragon ball fighterz nsp dlc download
-dragon ball fighterz nsp switch download
-dragon ball fighterz nsp free download
-dragon ball fighterz nsp torrent download
-dragon ball fighterz nsp mega download
-dragon ball fighterz nsp google drive download
-dragon ball fighterz nsp 1fichier download
-dragon ball fighterz nsp reddit download
-dragon ball fighterz nsp full game download
-dragon ball fighterz nsp latest version download
-dragon ball fighterz nsp english patch download
-dragon ball fighterz nsp online play download
-dragon ball fighterz nsp emulator download
-dragon ball fighterz xci to nsp converter download
-dragon ball fighterz xci vs nsp download
-dragon ball fighterz xci file size download
-dragon ball fighterz xci romsmania download
-dragon ball fighterz xci switch-xci.com download
-dragon ball fighterz xci base game download
-dragon ball fighterz xci update 1.27 download
-dragon ball fighterz xci all dlc download
-dragon ball fighterz xci torrent magnet download
-dragon ball fighterz xci mega.nz download
-dragon ball fighterz xci google drive link download
-dragon ball fighterz xci 1fichier premium download
-dragon ball fighterz xci reddit request download
-dragon ball fighterz xci full game cracked download
-dragon ball fighterz xci latest version patched download
-dragon ball fighterz xci english language download
-dragon ball fighterz xci online multiplayer download
-dragon ball fighterz xci emulator pc download
-how to install dragon ball fighterz nsp on switch
-how to update dragon ball fighterz nsp on switch
-how to install dlc for dragon ball fighterz nsp on switch
-how to play online with dragon ball fighterz nsp on switch
-how to fix error code 2002 4518 on dragon ball fighterz nsp on switch
-how to convert xci to nsp for dragon ball fighterz on switch
-how to install dragon ball fighterz xci on switch sx os
-how to update dragon ball fighterz xci on switch sx os
-how to install dlc for dragon ball fighterz xci on switch sx os
-how to play online with dragon ball fighterz xci on switch sx os
-how to fix error code 2002 4518 on dragon ball fighterz xci on switch sx os
-how to convert nsp to xci for dragon ball fighterz on switch sx os
-
NSP files are useful because they allow you to play games that are not available on the official eShop or that are region-locked. For example, Dragon Ball FighterZ is not available on the eShop in some countries, such as Japan, China, and Korea. By downloading and installing the NSP file, you can bypass this restriction and play the game on your Switch. NSP files also let you play games that are not yet released in your region, or that are cheaper in other regions.
-
How to download and install Dragon Ball FighterZ NSP on your Switch?
-
Before you can download and install Dragon Ball FighterZ NSP on your Switch, you need to prepare your Switch for custom firmware installation. This involves backing up your NAND, creating an emuMMC partition, and installing a custom firmware of your choice. You also need to enable sigpatches to bypass Nintendo's security checks. This process is not very difficult, but it requires some technical knowledge and caution. If you are not familiar with it, we recommend following a detailed guide from a reputable source, such as this one.
-
Once you have prepared your Switch for custom firmware installation, you can proceed to download and install Dragon Ball FighterZ NSP on your Switch. Here are the steps you need to follow:
-
Step 1: Download Dragon Ball FighterZ NSP file from a reliable source
-
The first step is to download the Dragon Ball FighterZ NSP file from a reliable source. There are many websites that offer NSP files for download, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, or corrupted files that can harm your Switch or your PC. Therefore, you need to be careful and choose a reputable site that has positive reviews and feedback from other users.
-
One of the best sites to download Dragon Ball FighterZ NSP file is nsw2u.com, which is a popular and trusted site for Switch games and updates. You can find the link to the game's page here. To download the NSP file from this site, you need to use a VPN and a torrent client, such as qBittorrent or uTorrent. A VPN is a service that encrypts your internet traffic and changes your IP address, so you can access blocked or restricted sites and protect your privacy. A torrent client is a software that allows you to download files from peer-to-peer networks.
-
To download the NSP file from nsw2u.com, follow these steps:
-
-
Download and install a VPN of your choice on your PC. We recommend using NordVPN or ExpressVPN, as they are fast, secure, and easy to use.
-
Connect to a VPN server in a country where nsw2u.com is not blocked, such as Canada or the Netherlands.
-
Download and install a torrent client of your choice on your PC. We recommend using qBittorrent or uTorrent, as they are lightweight, user-friendly, and free.
-
Go to the game's page on nsw2u.com and click on the magnet link icon next to the NSP file name. This will open the torrent client and start downloading the file.
-
-
Before downloading the NSP file, make sure to check the file size and the required firmware version for the game. The file size for Dragon Ball FighterZ NSP is about 6.5 GB, and the required firmware version is 11.0.1 or higher. If your Switch's firmware version is lower than that, you need to update it using ChoiDujourNX or another homebrew app.
-
Also, make sure to verify the file integrity using a checksum tool or a NSP verifier app after downloading it. A checksum tool is software that calculates a unique code for a file based on its content, which can be used to check whether the file is authentic and unmodified. A NSP verifier app is software that checks whether a NSP file is valid and compatible with your Switch. You can use tools like MD5 & SHA Checksum Utility or NSC Builder for this purpose.
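If you would rather not install a separate checksum utility, a few lines of Python can compute the hash for you to compare. This is only a generic sketch: the file name and the expected value below are placeholders, and the expected hash has to come from the site you downloaded from, if it publishes one.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large file in chunks so it does not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "paste-the-published-hash-here"        # placeholder
actual = sha256_of("dragon-ball-fighterz.nsp")    # placeholder file name
print("Match" if actual == expected.lower() else f"Mismatch: {actual}")
```

If the computed value does not match the published one, the download is corrupted or has been tampered with, and you should not install it.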
-
Step 2: Transfer and install Dragon Ball FighterZ NSP file on your Switch
-
The second step is to transfer and install Dragon Ball FighterZ NSP file on your Switch. There are two ways to do this: using a USB cable or a microSD card adapter. A USB cable connects your Switch directly to your PC, while a microSD card adapter lets you insert your Switch's microSD card into your PC's card reader. Both methods require a file manager or a NSP installer app on your Switch, such as Goldleaf or Tinfoil, which lets you browse, copy, delete, and install files on your Switch.

To transfer and install Dragon Ball FighterZ NSP file on your Switch using a USB cable, follow these steps:

Connect your Switch to your PC using a USB-C to USB-A cable. Make sure your Switch is in RCM mode and has the custom firmware running.

Launch the file manager or the NSP installer app on your Switch. We recommend using Goldleaf, as it is simple and compatible with most NSP files.

On your PC, download and run Quark, which is a companion app for Goldleaf that enables USB communication. You can find the link to Quark here.

On your Switch, select the USB option in Goldleaf and browse to the folder where you downloaded the Dragon Ball FighterZ NSP file.

Select the NSP file and choose to install it on your Switch's SD card or internal memory. Wait for the installation to finish.

To transfer and install Dragon Ball FighterZ NSP file on your Switch using a microSD card adapter, follow these steps:

Turn off your Switch and remove the microSD card from it. Insert the microSD card into the microSD card adapter and plug it into your PC's card reader.

On your PC, open the microSD card folder and copy the Dragon Ball FighterZ NSP file to it. You can create a subfolder for it if you want.

Safely eject the microSD card adapter from your PC and remove the microSD card from it. Insert the microSD card back into your Switch and turn it on.

Launch the file manager or the NSP installer app on your Switch. We recommend using Tinfoil, as it is fast and supports multiple formats.

On your Switch, select the SD card option in Tinfoil and browse to the folder where you copied the Dragon Ball FighterZ NSP file.

Select the NSP file and choose to install it on your Switch's SD card or internal memory. Wait for the installation to finish.
Conclusion
-
Dragon Ball FighterZ is one of the best fighting games ever made, and you can play it on your Nintendo Switch using a custom firmware and a NSP file. In this article, we showed you how to download and install Dragon Ball FighterZ NSP file on your Switch using two methods: USB cable or microSD card adapter. We also explained what is Dragon Ball FighterZ, what is NSP, and why you need it. We hope you found this article helpful and informative, and that you enjoy playing Dragon Ball FighterZ on your Switch.
-
FAQs
-
Here are some frequently asked questions about Dragon Ball FighterZ NSP download:
-
-
Q: Is downloading and installing Dragon Ball FighterZ NSP legal?
-
A: Downloading and installing Dragon Ball FighterZ NSP is not legal, as it violates Nintendo's terms of service and intellectual property rights. You should only download and install Dragon Ball FighterZ NSP if you own a legitimate copy of the game or if you live in a region where the game is not available.
-
Q: Is downloading and installing Dragon Ball FighterZ NSP safe?
-
A: Downloading and installing Dragon Ball FighterZ NSP is not safe, as it exposes you to various risks, such as malware, viruses, corrupted files, bans, bricks, and more. You should only download and install Dragon Ball FighterZ NSP from reliable sources, verify the file integrity, use a VPN, backup your NAND, create an emuMMC partition, enable sigpatches, and avoid going online.
-
Q: How can I update Dragon Ball FighterZ after installing it from NSP?
-
A: You can update Dragon Ball FighterZ after installing it from NSP by downloading and installing the update NSP file from the same source as the game NSP file. You can also use homebrew apps like DBI or Awoo Installer to download and install updates directly from Nintendo's servers.
-
Q: How can I play online with Dragon Ball FighterZ after installing it from NSP?
-
A: You can play online with Dragon Ball FighterZ after installing it from NSP by using homebrew apps like 90DNS or Incognito to block Nintendo's servers and avoid bans. You can also use homebrew apps like Lan Play or XLink Kai to play online with other custom firmware users.
Q: How can I add DLC characters to Dragon Ball FighterZ after installing it from NSP?
-
A: You can add DLC characters to Dragon Ball FighterZ after installing it from NSP by downloading and installing the DLC NSP files from the same source as the game NSP file. You can also use homebrew apps like NUT or NS-USBloader to download and install DLC directly from Nintendo's servers.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Data One Piece Bounty Rush 2022 and Join the Pirate World of Luffy and His Crew.md b/spaces/1phancelerku/anime-remove-background/Download Data One Piece Bounty Rush 2022 and Join the Pirate World of Luffy and His Crew.md
deleted file mode 100644
index 659583f910b155add4ff10efa01181e5f9f34eed..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Data One Piece Bounty Rush 2022 and Join the Pirate World of Luffy and His Crew.md
+++ /dev/null
@@ -1,177 +0,0 @@
-
-
How to Download Data One Piece Bounty Rush 2022
-
If you are a fan of the popular manga and anime series One Piece, you might want to try out Data One Piece Bounty Rush, a 3D anime battle arena treasure looting game set in the pirate world of One Piece. In this game, you can join Luffy, Zoro, Nami, Sanji, and other famous characters from the series in 4 vs 4 real-time PvP battles to rush and loot treasures of berry coins for victory. You can also customize your own pirate crew by mixing and matching characters from different classes, elements, skills, and traits. You can also experience the One Piece universe in beautiful 3D graphics and battle at iconic locations from the anime.
However, before you can enjoy all these features, you need to download the game data first. The game data is a large file that contains all the necessary information and resources for the game to run smoothly on your device. By downloading the game data, you can reduce loading times, improve performance, and save storage space on your device. In this article, we will show you how to download Data One Piece Bounty Rush 2022 on Android and iOS devices. We will also show you how to update the game data when new versions are released. Finally, we will give you some tips and tricks for playing Data One Piece Bounty Rush and becoming a pirate king.
-
What is Data One Piece Bounty Rush?
-
Data One Piece Bounty Rush is a mobile game based on the One Piece franchise, developed and published by Bandai Namco Entertainment. The game is played in real-time with four-player teams in battle mode, in which the team that has the most treasure at the end wins. There are five random treasure locations on a map, and you and your teammates will have to quickly move to them and capture them by tapping the flag icon. You will also have to fight your enemies and push them
away from the treasure. You can use your character's skills and traits to gain an advantage in combat, such as stunning, freezing, or knocking back your opponents. You can also use items and boosts to enhance your character's stats and abilities. The game features over 100 characters from the One Piece series, each with their own class, element, skills, and traits. You can choose from four classes: Fighter, Warrior, Supporter, and Shooter. Each class has its own strengths and weaknesses, and you can mix and match them to create a balanced team. You can also choose from five elements: Red, Green, Blue, Black, and Yellow. Each element has an advantage over another element, except for Black and Yellow, which are neutral. You can use the element wheel to see which element is stronger or weaker against another element. You can also upgrade your characters by leveling them up, enhancing their skills, and equipping them with medals and boosts.
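If it helps to reason about matchups while building a team, the element wheel can be written down as a simple lookup table. The sketch below is purely illustrative: the Red > Green > Blue > Red cycle used here is a placeholder and should be replaced with the matchups shown on the in-game element wheel; Black and Yellow are treated as neutral, as described above.

```python
# Illustrative element-advantage lookup; verify the cycle against the in-game wheel.
ADVANTAGE = {
    "Red": "Green",   # placeholder: Red assumed strong against Green
    "Green": "Blue",  # placeholder: Green assumed strong against Blue
    "Blue": "Red",    # placeholder: Blue assumed strong against Red
    # Black and Yellow have no entry, so they come out neutral below.
}

def matchup(attacker: str, defender: str) -> str:
    """Return 'advantage', 'disadvantage', or 'neutral' for attacker vs defender."""
    if ADVANTAGE.get(attacker) == defender:
        return "advantage"
    if ADVANTAGE.get(defender) == attacker:
        return "disadvantage"
    return "neutral"

print(matchup("Red", "Green"))     # advantage (under the placeholder cycle)
print(matchup("Black", "Yellow"))  # neutral
```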
-
Why Download Data One Piece Bounty Rush?
-
Downloading Data One Piece Bounty Rush is highly recommended for anyone who wants to play the game without any issues or interruptions. The game data is a large file that contains all the necessary information and resources for the game to run smoothly on your device. By downloading the game data, you can enjoy the following benefits:
-
-
Faster loading times: The game data will reduce the amount of time it takes to load the game and its features. You will not have to wait for long periods of time to start playing or switch between modes.
-
Smoother performance: The game data will improve the performance of the game on your device. You will not experience any lagging, crashing, or freezing while playing. You will also be able to play at higher graphics settings without compromising the quality of the game.
-
Saving storage space: The game data will save storage space on your device by compressing the file size of the game. You will not have to worry about running out of space or deleting other apps or files to make room for the game.
-
-
Downloading Data One Piece Bounty Rush is easy and simple. All you need is a stable internet connection and enough storage space on your device. In the next sections, we will show you how to download Data One Piece Bounty Rush on Android and iOS devices.
How to Download Data One Piece Bounty Rush on Android
-
If you have an Android device, you can download Data One Piece Bounty Rush from the Google Play Store. However, before you do that, you need to make sure that your device meets the minimum and recommended specifications for the game. Here are the requirements for downloading Data One Piece Bounty Rush on Android:
-
How to download data one piece bounty rush 2022 on android
-Download data one piece bounty rush 2022 apk
-Best characters and medals for one piece bounty rush 2022
-One piece bounty rush 2022 tips and tricks
-Download data one piece bounty rush 2022 mod
-One piece bounty rush 2022 tier list
-Download data one piece bounty rush 2022 for pc
-One piece bounty rush 2022 update
-Download data one piece bounty rush 2022 ios
-One piece bounty rush 2022 reddit
-Download data one piece bounty rush 2022 hack
-One piece bounty rush 2022 discord
-Download data one piece bounty rush 2022 latest version
-One piece bounty rush 2022 review
-Download data one piece bounty rush 2022 offline
-One piece bounty rush 2022 gameplay
-Download data one piece bounty rush 2022 cheats
-One piece bounty rush 2022 wiki
-Download data one piece bounty rush 2022 obb
-One piece bounty rush 2022 codes
-Download data one piece bounty rush 2022 bandai namco
-One piece bounty rush 2022 events
-Download data one piece bounty rush 2022 google play
-One piece bounty rush 2022 guide
-Download data one piece bounty rush 2022 error
-One piece bounty rush 2022 characters
-Download data one piece bounty rush 2022 free
-One piece bounty rush 2022 medals
-Download data one piece bounty rush 2022 new characters
-One piece bounty rush 2022 release date
-Download data one piece bounty rush 2022 size
-One piece bounty rush 2022 support tags
-Download data one piece bounty rush 2022 online
-One piece bounty rush 2022 database
-Download data one piece bounty rush 2022 patch notes
-One piece bounty rush 2022 news
-Download data one piece bounty rush 2022 system requirements
-One piece bounty rush 2022 forum
-Download data one piece bounty rush 2022 lag fix
-One piece bounty rush 2022 trailer
-
Requirements for Downloading Data One Piece Bounty Rush on Android
-
-
-
| Minimum Specifications | Recommended Specifications |
| --- | --- |
| OS: Android 6.0 or higher | OS: Android 8.0 or higher |
| RAM: 2 GB or more | RAM: 4 GB or more |
| Storage: 3 GB or more | Storage: 5 GB or more |
| CPU: Snapdragon 625 or equivalent | CPU: Snapdragon 845 or equivalent |
| GPU: Adreno 506 or equivalent | GPU: Adreno 630 or equivalent |
| Internet: Wi-Fi or 4G LTE | Internet: Wi-Fi or 5G NR |
-
-
-
If your device meets these requirements, you can proceed to download Data One Piece Bounty Rush on Android. Here are the steps to follow:
-
Steps for Downloading Data One Piece Bounty Rush on Android
-
-
Open the Google Play Store app on your device and search for "Data One Piece Bounty Rush". Alternatively, you can use this link to go directly to the game page.
-
Tap on the "Install" button and wait for the game to download and install on your device. The game size is about 1.5 GB, so make sure you have enough storage space and a stable internet connection.
-
Once the game is installed, tap on the "Open" button to launch the game. You will see a splash screen with the game logo and a loading bar.
-
When the loading bar is full, you will see a pop-up window asking you to download the game data. Tap on the "Download" button to start downloading the game data. The game data size is about 1.5 GB, so make sure you have enough storage space and a stable internet connection.
-
You will see a progress bar showing the percentage of the game data downloaded. You can also see the estimated time remaining and the download speed. You can pause and resume the download at any time by tapping on the "Pause" and "Resume" buttons.
-
When the download is complete, you will see a pop-up window saying "Download Complete". Tap on the "OK" button to finish downloading the game data.
-
You will then see a pop-up window asking you to agree to the terms of service and privacy policy of the game. Read them carefully and tap on the "Agree" button if you accept them.
-
You will then see a pop-up window asking you to choose your region and language. Select your preferred options and tap on the "OK" button.
-
You will then see a pop-up window asking you to create a user name. Enter a unique and appropriate user name and tap on the "OK" button.
-
You will then see a pop-up window asking you to select your favorite character from the One Piece series. Choose one of the four options and tap on the "OK" button.
-
You will then see a tutorial video explaining the basics of the game. Watch it carefully and tap on the "Skip" button if you want to skip it.
-
You will then enter the main menu of the game, where you can access various modes and features of Data One Piece Bounty Rush. Congratulations, you have successfully downloaded Data One Piece Bounty Rush on Android!
to finish downloading the game data.
-
You will then see a pop-up window asking you to agree to the terms of service and privacy policy of the game. Read them carefully and tap on the "Agree" button if you accept them.
-
You will then see a pop-up window asking you to choose your region and language. Select your preferred options and tap on the "OK" button.
-
You will then see a pop-up window asking you to create a user name. Enter a unique and appropriate user name and tap on the "OK" button.
-
You will then see a pop-up window asking you to select your favorite character from the One Piece series. Choose one of the four options and tap on the "OK" button.
-
You will then see a tutorial video explaining the basics of the game. Watch it carefully and tap on the "Skip" button if you want to skip it.
-
You will then enter the main menu of the game, where you can access various modes and features of Data One Piece Bounty Rush. Congratulations, you have successfully downloaded Data One Piece Bounty Rush on iOS!
-
-
How to Update Data One Piece Bounty Rush
-
Data One Piece Bounty Rush is constantly updated with new features, characters, events, and bug fixes. To enjoy the latest version of the game, you need to update the game data regularly. Updating the game data is easy and simple. All you need is a stable internet connection and enough storage space on your device. In this section, we will show you how to update Data One Piece Bounty Rush on Android and iOS devices.
-
How to Check for Updates for Data One Piece Bounty Rush
-
The first step to update Data One Piece Bounty Rush is to check if there are any updates available for the game data. You can do this by following these steps:
-
-
Launch Data One Piece Bounty Rush on your device and enter the main menu.
-
Tap on the "Settings" icon at the top right corner of the screen.
-
Tap on the "Update" tab at the bottom of the screen.
-
You will see a message saying "Checking for updates..." and a loading bar.
-
If there are any updates available, you will see a message saying "Update available" and a download size.
-
If there are no updates available, you will see a message saying "No updates available" and a current version number.
-
-
You can also enable automatic updates for Data One Piece Bounty Rush by tapping on the "Auto Update" switch at the top of the screen. This will allow the game to download and install any updates automatically when they are released. However, this may consume more data and battery power, so make sure you have a stable internet connection and enough storage space on your device.
-
How to Update Data One Piece Bounty Rush Manually
-
If you have disabled automatic updates or if they are not working properly, you can update Data One Piece Bounty Rush manually by following these steps:
-
-
Launch Data One Piece Bounty Rush on your device and enter the main menu.
-
Tap on the "Settings" icon at the top right corner of the screen.
-
Tap on the "Update" tab at the bottom of the screen.
-
If there are any updates available, tap on the "Download" button to start downloading the update data. The update data size may vary depending on the version and content of the update.
-
You will see a progress bar showing the percentage of the update data downloaded. You can also see the estimated time remaining and the download speed. You can pause and resume the download at any time by tapping on the "Pause" and "Resume" buttons.
-
When the download is complete, you will see a pop-up window saying "Download Complete". Tap on the "OK" button to finish downloading the update data.
-
You will then see a pop-up window saying "Update Complete". Tap on the "OK" button to finish updating the game data.
-
You will then enter the main menu of the game, where you can access the latest features and content of Data One Piece Bounty Rush. Congratulations, you have successfully updated Data One Piece Bounty Rush!
-
-
Tips and Tricks for Playing Data One Piece Bounty Rush
-
Now that you have downloaded and updated Data One Piece Bounty Rush, you are ready to play. However, if you want to improve your skills and win more battles, it helps to know a few strategies. In this section, we share some useful tips and tricks, such as how to choose the best characters for your team, how to use medals and boosts effectively, and how to win league battles and loot treasures.
-
How to Choose the Best Characters for Your Team
-
One of the most important aspects of Data One Piece Bounty Rush is choosing the right characters for your team. You can have up to four characters in your team, and you can switch between them during battle. You can also customize your team by mixing and matching characters from different classes, elements, skills, and traits. Here are some tips on how to choose the best characters for your team:
-
-
Consider the class of your characters: There are four classes in Data One Piece Bounty Rush: Fighter, Warrior, Supporter, and Shooter. Each class has its own strengths and weaknesses, and you should balance them in your team. Fighters are good at close-range combat and have high attack power. Warriors are good at mid-range combat and have high defense power. Supporters are good at healing and buffing their allies and debuffing their enemies. Shooters are good at long-range combat and have high speed and mobility.
-
Consider the element of your characters: There are five elements in Data One Piece Bounty Rush: Red, Green, Blue, Black, and Yellow. Each element has an advantage over another element, except for Black and Yellow, which are neutral. You can use the element wheel to see which element is stronger or weaker against another element. You should choose characters that have an element advantage over your enemies, or at least avoid having an element disadvantage.
-
Consider the skills and traits of your characters: Each character has two skills and two traits that can be activated during battle. Skills are special abilities that can deal damage, heal, buff, debuff, or stun your enemies or allies. Traits are passive abilities that can enhance your character's stats or grant them certain effects. You should choose characters that have skills and traits that suit your playstyle and strategy. For example, if you like to be aggressive and deal a lot of damage, you might want to choose characters that have skills that can stun or knock back your enemies, or traits that can increase your attack power or critical rate.
-
-
How to Use Medals and Boosts Effectively
-
Another important aspect of Data One Piece Bounty Rush is using medals and boosts effectively. Medals are items that can be equipped to your characters to enhance their stats and abilities. Boosts are items that can be used before or during battle to give you an edge over your enemies. Here are some tips on how to use medals and boosts effectively:
-
-
Choose medals that match your character's class and element: There are different types of medals in Data One Piece Bounty Rush, such as Fighter medals, Warrior medals, Supporter medals, Shooter medals, Red medals, Green medals, Blue medals, Black medals, and Yellow medals. Each type of medal has different effects and bonuses for your character. You should choose medals that match your character's class and element to maximize their potential. For example, if you have a Red Fighter character, you might want to equip them with Red Fighter medals that can increase their attack power and critical rate.
-
Combine medals that have synergy effects: Some medals have synergy effects that can activate when you equip them together. These effects can give you additional bonuses or special abilities for your character. You can check the synergy effects of your medals by tapping on the "Medal Set" button at the bottom of the screen. You should combine medals that have synergy effects that suit your playstyle and strategy. For example, if you want to be more durable and tanky, you might want to combine medals that have synergy effects that can increase your defense power and HP recovery.
-
Use boosts wisely: Boosts are items that can be used before or during battle to give you an edge over your enemies. There are different types of boosts in Data One Piece Bounty Rush, such as Attack Boosts, Defense Boosts, Speed Boosts, Skill Boosts, and Berry Boosts. Each type of boost has a different effect and duration for your character. You can use up to three boosts per battle, and you can buy more boosts with berries or real money. You should use boosts wisely and strategically, depending on the situation and your goals. For example, if you want to capture treasures faster, you might want to use Speed Boosts to increase your movement speed. If you want to deal more damage, you might want to use Attack Boosts or Skill Boosts to increase your attack power or skill damage.
-
-
How to Win League Battles and Loot Treasures
-
The main mode of Data One Piece Bounty Rush is League Battle, where you can compete with other players in 4 vs 4 real-time PvP battles to rush and loot treasures of berry coins for victory. League Battle is a fun and exciting mode that tests your skills and strategy as a pirate. Here are some tips on how to win League Battles and loot treasures:
-
-
Strategize your team formation: Before you enter a League Battle, you can choose your team formation by tapping on the "Team" button at the bottom of the screen. You can select up to four characters for your team, and you can switch between them during battle. You can also see the class and element of each character, as well as their skills and traits. You should strategize your team formation based on the map, the enemy team, and your own preferences. You should balance your team with different classes and elements, and choose characters that complement each other's skills and traits.
-
Capture and defend treasures: The objective of League Battle is to capture and defend treasures on the map. There are five random treasure locations on each map, and you and your teammates will have to quickly move to them and capture them by tapping the flag icon. You will also have to fight your enemies and push them away from the treasure. The team that has the most treasure at the end of the battle wins. You should capture and defend treasures strategically, depending on the situation and your goals. You should prioritize capturing treasures that are closer to your spawn point or have less enemies around them. You should also defend treasures that are more valuable or have more enemies around them.
-
Earn more berries: Berries are the currency of Data One Piece Bounty Rush, which you can use to buy boosts, upgrade characters, or summon new characters. You can earn berries by playing League Battles, completing missions, or logging in daily. The amount of berries you earn depends on various factors, such as your rank, your score, your win rate, and your MVP rate. You should earn more berries by playing League Battles regularly, improving your skills and strategy, winning more battles, and becoming MVP more often.
-
-
Conclusion
-
Data One Piece Bounty Rush is a 3D anime battle arena treasure looting game set in the pirate world of One Piece. In this game, you can join Luffy, Zoro, Nami, Sanji, and other famous characters from the series in 4 vs 4 real-time PvP battles to rush and loot treasures of berry coins for victory. You can also customize your own pirate crew by mixing and matching characters from different classes, elements, skills, and traits. You can also experience the One Piece universe in beautiful 3D graphics and battle at iconic locations from the anime.
-
However, before you can enjoy all these features, you need to download the game data first. The game data is a large file that contains all the necessary information and resources for the game to run smoothly on your device. By downloading the game data, you can reduce loading times, improve performance, and save storage space on your device. In this article, we showed you how to download Data One Piece Bounty Rush 2022 on Android and iOS devices. We also showed you how to update the game data when new versions are released. Finally, we gave you some tips and tricks for playing Data One Piece Bounty Rush and becoming a pirate king.
-
We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. If you liked this article, please share it with your friends and fellow pirates. And if you are ready to play Data One Piece Bounty Rush, download it now from the Google Play Store or the App Store and join the fun!
-
FAQs
-
Here are some frequently asked questions about Data One Piece Bounty Rush:
-
-
Q: How can I get new characters in Data One Piece Bounty Rush?
-
A: You can get new characters in Data One Piece Bounty Rush by summoning them with scout tickets or rainbow diamonds. Scout tickets are items that can be used to summon characters from specific banners or events. Rainbow diamonds are the premium currency of Data One Piece Bounty Rush, which can be used to summon characters from any banner or event. You can get scout tickets and rainbow diamonds by playing League Battles, completing missions, logging in daily, or buying them with real money.
-
Q: How can I level up my characters in Data One Piece Bounty Rush?
-
A: You can level up your characters in Data One Piece Bounty Rush by using character fragments or EXP orbs. Character fragments are items that can be used to level up specific characters. EXP orbs are items that can be used to level up any character. You can get character fragments and EXP orbs by playing League Battles, completing missions, logging in daily, or buying them with berries or real money.
-
Q: How can I enhance my character's skills in Data One Piece Bounty Rush?
-
A: You can enhance your character's skills in Data One Piece Bounty Rush by using skill orbs or skill scrolls. Skill orbs are items that can be used to enhance any skill of any character. Skill scrolls are items that can be used to enhance specific skills of specific characters. You can get skill orbs and skill scrolls by playing League Battles, completing missions, logging in daily, or buying them with berries or real money.
-
Q: How can I join a crew in Data One Piece Bounty Rush?
-
A: You can join a crew in Data One Piece Bounty Rush by tapping on the "Crew" button at the bottom of the screen. You can then search for a crew by name, ID, rank, or language. You can also create your own crew by tapping on the "Create" button at the top of the screen. You will need 100 rainbow diamonds to create a crew. By joining a crew, you can chat with other members, participate in crew battles, and earn crew points and rewards.
-
Q: How can I contact the customer support of Data One Piece Bounty Rush?
-
A: You can contact the customer support of Data One Piece Bounty Rush by tapping on the "Settings" icon at the top right corner of the screen. Then tap on the "Support" tab at the bottom of the screen. Then tap on the "Contact Us" button at the top of the screen. You will then see a form where you can enter your name, email address, inquiry type, inquiry details, and attachments. Fill out the form and tap on the "Send" button to submit your inquiry.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Hot Lava Game APK and Customize Your Character.md b/spaces/1phancelerku/anime-remove-background/Download Hot Lava Game APK and Customize Your Character.md
deleted file mode 100644
index e71f60e2026812082ba2c18fe5f146c97c647e4a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Hot Lava Game APK and Customize Your Character.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Hot Lava Game Download APK: How to Play the Ultimate Floor is Lava Challenge on Your Android Device
-
Do you remember playing the floor is lava game as a kid? You know, the one where you had to jump from one furniture to another without touching the ground, pretending that it was hot lava that would burn you if you did. Well, now you can relive that childhood fun on your Android device with Hot Lava Game, a thrilling and addictive platformer game that will test your skills and reflexes. In this article, we will tell you everything you need to know about Hot Lava Game, how to download and install it on your Android device, and how to play and enjoy it.
-
What is Hot Lava Game?
-
Hot Lava Game is a 3D platformer game developed by Jbro Studios, inspired by the popular floor is lava challenge. In this game, you have to navigate through various environments, such as a school, a park, a mall, and more, by jumping from one platform to another, avoiding the hot lava that covers the floor. You can also collect coins, gems, and power-ups along the way, as well as unlock new outfits and accessories for your character.
The concept of Hot Lava Game is simple: don't touch the floor. The gameplay is fast-paced and challenging, as you have to time your jumps carefully and avoid obstacles and enemies that can knock you off your platform. You also have to balance your speed and accuracy, as some platforms are moving or disappearing, and some levels have a time limit. You can also perform tricks and stunts in mid-air, such as flips, spins, and slides, to earn extra points and coins.
-
The features and benefits of Hot Lava Game
-
Hot Lava Game has many features and benefits that make it an enjoyable and rewarding game to play. Some of them are:
-
-
It has stunning graphics and sound effects that create an immersive and realistic experience.
-
It has a variety of environments and themes that keep the game fresh and exciting.
-
It has a simple and intuitive control system that makes it easy to play.
-
It has a multiplayer mode that allows you to compete with other players from around the world.
-
It has a leaderboard and achievements system that tracks your progress and rewards your performance.
-
It has a customization option that lets you personalize your character with different outfits and accessories.
-
-
How to Download and Install Hot Lava Game APK on Your Android Device
-
If you want to play Hot Lava Game on your Android device, you will need to download and install its APK file. An APK file is an application package file that contains all the data and files needed to run an app on an Android device. However, before you download and install Hot Lava Game APK, there are some requirements and precautions that you need to follow.
-
The requirements and precautions for downloading Hot Lava Game APK
-
The requirements for downloading Hot Lava Game APK are:
-
-
You need an Android device that runs on Android 4.4 or higher.
-
You need at least 100 MB of free storage space on your device.
-
You need a stable internet connection to download the APK file.
-
The precautions for downloading Hot Lava Game APK are:
-
-
You need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
You need to download the APK file from a trusted and reliable source, such as the official website of Hot Lava Game or a reputable APK download site. Avoid downloading the APK file from unknown or suspicious links, as they may contain malware or viruses that can harm your device.
-
You need to scan the APK file with an antivirus or anti-malware app before installing it, to ensure that it is safe and clean.
-
-
The steps for downloading and installing Hot Lava Game APK
-
The steps for downloading and installing Hot Lava Game APK are:
-
-
-
Go to the official website of Hot Lava Game or a reputable APK download site and find the download link for Hot Lava Game APK.
-
Click on the download link and wait for the APK file to be downloaded to your device.
-
Once the download is complete, locate the APK file in your device's file manager and tap on it to open it.
-
Follow the on-screen instructions and grant the necessary permissions to install the app on your device.
-
After the installation is done, you will see the Hot Lava Game icon on your device's home screen or app drawer. Tap on it to launch the game and enjoy.
-
-
How to Play and Enjoy Hot Lava Game on Your Android Device
-
Now that you have downloaded and installed Hot Lava Game on your Android device, you are ready to play and enjoy it. Here are some tips and tricks on how to play and enjoy Hot Lava Game on your Android device.
-
The controls and tips for playing Hot Lava Game
-
The controls for playing Hot Lava Game are simple and intuitive. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to jump, slide, and perform tricks. You can also swipe the screen to change the camera angle and view your surroundings. Some tips for playing Hot Lava Game are:
-
-
Try to maintain a steady speed and momentum, as slowing down or stopping can make you lose balance and fall into the lava.
-
Use the power-ups wisely, as they can give you an edge over the obstacles and enemies. For example, the jetpack can help you fly over gaps, the magnet can help you collect coins easily, and the shield can protect you from damage.
-
Watch out for signs and hints that indicate where to go next, such as arrows, platforms, ropes, ladders, etc.
-
Explore different paths and routes, as they may lead you to hidden secrets and bonuses.
-
-
The modes and levels of Hot Lava Game
-
Hot Lava Game has two modes: single-player and multiplayer. In single-player mode, you can play through various levels that have different themes, such as school, park, mall, etc. Each level has its own challenges and objectives that you need to complete in order to unlock the next level. You can also earn stars based on your performance in each level. In multiplayer mode, you can compete with other players from around the world in real-time. You can join or create a room with up to four players and race against each other in different maps. You can also chat with other players and make friends.
-
The customization and social options of Hot Lava Game
-
Hot Lava Game also has a customization option that lets you personalize your character with different outfits and accessories. You can unlock new items by collecting coins and gems in the game or by purchasing them with real money. You can also mix and match different items to create your own unique style. Hot Lava Game also has a social option that lets you connect with other players and share your achievements. You can link your Facebook account to invite your friends to play with you or to see their scores and rankings. You can also follow other players and send them messages.
-
Conclusion
-
Hot Lava Game is a fun and exciting platformer game that will bring back your childhood memories of playing the floor is lava game. It has stunning graphics, addictive gameplay, various environments, multiplayer mode, customization option, social option, and more. It is easy to download and install on your Android device with its APK file. If you are looking for a game that will challenge your skills and reflexes, then you should definitely try Hot Lava Game. Download it now and enjoy!
-
A summary of the main points of the article
In this article, we have covered the following main points:
-
-
Hot Lava Game is a 3D platformer game inspired by the floor is lava challenge, where you have to jump from one platform to another without touching the hot lava that covers the floor.
-
Hot Lava Game has many features and benefits, such as stunning graphics, various environments, multiplayer mode, customization option, social option, and more.
-
Hot Lava Game can be downloaded and installed on your Android device with its APK file, which is an application package file that contains all the data and files needed to run an app on an Android device.
-
Hot Lava Game can be played and enjoyed on your Android device with simple and intuitive controls, as well as tips and tricks that will help you improve your performance and score.
-
-
A call to action for the readers to download and play Hot Lava Game
-
If you are interested in playing Hot Lava Game on your Android device, don't hesitate to download it now and join the ultimate floor is lava challenge. You will have a blast jumping, sliding, and performing tricks in different environments, as well as competing with other players from around the world. Hot Lava Game is a game that will keep you entertained and engaged for hours. Download it now and enjoy!
-
FAQs
-
Here are some frequently asked questions about Hot Lava Game:
-
Q: Is Hot Lava Game free to play?
-
A: Yes, Hot Lava Game is free to play. However, it contains in-app purchases that allow you to buy coins, gems, and items with real money.
-
Q: Is Hot Lava Game safe to download and install?
-
A: Yes, Hot Lava Game is safe to download and install, as long as you follow the requirements and precautions mentioned in this article. Make sure you download the APK file from a trusted and reliable source, enable the installation of apps from unknown sources on your device, and scan the APK file with an antivirus or anti-malware app before installing it.
-
Q: How can I update Hot Lava Game on my Android device?
-
A: You can update Hot Lava Game on your Android device by downloading and installing the latest version of its APK file from the official website of Hot Lava Game or a reputable APK download site. Alternatively, you can check for updates in the game settings or in the Google Play Store.
-
Q: How can I contact the developer of Hot Lava Game?
-
A: You can contact the developer of Hot Lava Game by sending an email to jbrostudios@gmail.com or by visiting their Facebook page at https://www.facebook.com/jbrostudios/.
-
Q: How can I share my feedback and suggestions about Hot Lava Game?
-
A: You can share your feedback and suggestions about Hot Lava Game by leaving a review or rating on the Google Play Store or by sending a message to the developer via email or Facebook.
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify_model.py b/spaces/232labs/VToonify/vtoonify_model.py
deleted file mode 100644
index 83cd271c705742d886b59969e54abba80098dfcc..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify_model.py
+++ /dev/null
@@ -1,287 +0,0 @@
-from __future__ import annotations
-import sys
-sys.path.insert(0, 'vtoonify')
-
-from util import load_psp_standalone, get_video_crop_parameter, tensor2cv2
-import torch
-import torch.nn as nn
-import numpy as np
-import dlib
-import cv2
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-import torch.nn.functional as F
-from torchvision import transforms
-from model.encoder.align_all_parallel import align_face
-import gc
-import huggingface_hub
-import os
-
-MODEL_REPO = 'saimemrekanat/vmodels'
-
-class Model():
- def __init__(self, device):
- super().__init__()
-
- self.device = device
- self.style_types = {
- 'cartoon1': ['vtoonify_d_cartoon/vtoonify_s026_d0.5.pt', 26],
- 'cartoon1-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 26],
- 'cartoon2-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 64],
- 'cartoon3-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 153],
- 'cartoon4': ['vtoonify_d_cartoon/vtoonify_s299_d0.5.pt', 299],
- 'cartoon4-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 299],
- 'cartoon5-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 8],
- 'comic1-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 28],
- 'comic2-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 18],
- 'arcane1': ['vtoonify_d_arcane/vtoonify_s000_d0.5.pt', 0],
- 'arcane1-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 0],
- 'arcane2': ['vtoonify_d_arcane/vtoonify_s077_d0.5.pt', 77],
- 'arcane2-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 77],
- 'caricature1': ['vtoonify_d_caricature/vtoonify_s039_d0.5.pt', 39],
- 'caricature2': ['vtoonify_d_caricature/vtoonify_s068_d0.5.pt', 68],
- 'pixar': ['vtoonify_d_pixar/vtoonify_s052_d0.5.pt', 52],
- 'pixar-d': ['vtoonify_d_pixar/vtoonify_s_d.pt', 52],
- 'illustration1-d': ['vtoonify_d_illustration/vtoonify_s054_d_c.pt', 54],
- 'illustration2-d': ['vtoonify_d_illustration/vtoonify_s004_d_c.pt', 4],
- 'illustration3-d': ['vtoonify_d_illustration/vtoonify_s009_d_c.pt', 9],
- 'illustration4-d': ['vtoonify_d_illustration/vtoonify_s043_d_c.pt', 43],
- 'illustration5-d': ['vtoonify_d_illustration/vtoonify_s086_d_c.pt', 86],
- }
-
- self.landmarkpredictor = self._create_dlib_landmark_model()
- self.cnn_model = self._create_dlib_landmark_cnn_model()
- self.parsingpredictor = self._create_parsing_model()
- self.pspencoder = self._load_encoder()
- self.transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- self.vtoonify, self.exstyle = self._load_default_model()
- self.color_transfer = False
- self.style_name = 'cartoon1'
- self.video_limit_cpu = 100
- self.video_limit_gpu = 300
-
- @staticmethod
- def _create_dlib_landmark_model():
- return dlib.shape_predictor(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/shape_predictor_68_face_landmarks.dat'))
-
- @staticmethod
- def _create_dlib_landmark_cnn_model():
- return dlib.cnn_face_detection_model_v1('localmodel/mmod_human_face_detector.dat')
-
- def _create_parsing_model(self):
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO, 'models/faceparsing.pth'),
- map_location=lambda storage, loc: storage))
- parsingpredictor.to(self.device).eval()
- return parsingpredictor
-
- def _load_encoder(self) -> nn.Module:
- style_encoder_path = huggingface_hub.hf_hub_download(MODEL_REPO,'models/encoder.pt')
- return load_psp_standalone(style_encoder_path, self.device)
-
- def _load_default_model(self) -> tuple[VToonify, torch.Tensor]:
- vtoonify = VToonify(backbone = 'dualstylegan')
- vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/vtoonify_d_cartoon/vtoonify_s026_d0.5.pt'),
- map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(self.device)
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/vtoonify_d_cartoon/exstyle_code.npy'), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[26]]).to(self.device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
- return vtoonify, exstyle
-
- def load_model(self, style_type: str) -> tuple[torch.Tensor, str]:
- if 'illustration' in style_type:
- self.color_transfer = True
- else:
- self.color_transfer = False
- if style_type not in self.style_types.keys():
- return None, 'Oops, wrong Style Type. Please select a valid model.'
- self.style_name = style_type
- model_path, ind = self.style_types[style_type]
- style_path = os.path.join('models',os.path.dirname(model_path),'exstyle_code.npy')
- self.vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/'+model_path),
- map_location=lambda storage, loc: storage)['g_ema'])
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO, style_path), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[ind]]).to(self.device)
- with torch.no_grad():
- exstyle = self.vtoonify.zplus2wplus(exstyle)
- return exstyle, 'Model of %s loaded.'%(style_type)
-
- def detect_and_align(self, frame, top, bottom, left, right, return_para=False):
- message = 'Error: no face detected! Please retry or change the photo.'
- paras = get_video_crop_parameter(frame, self.landmarkpredictor, [left, right, top, bottom])
- instyle = None
- h, w, scale = 0, 0, 0
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
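- # kernel_1d is a 4-tap binomial low-pass filter (a cheap Gaussian approximation); it is applied
- # once or twice below, depending on how strongly the frame will be downscaled, to avoid aliasing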
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- with torch.no_grad():
- I = align_face(frame, self.landmarkpredictor)
- if I is not None:
- I = self.transform(I).unsqueeze(dim=0).to(self.device)
- instyle = self.pspencoder(I)
- instyle = self.vtoonify.zplus2wplus(instyle)
- message = 'Successfully rescaled the frame to (%d, %d)'%(bottom-top, right-left)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- if return_para:
- return frame, instyle, message, w, h, top, bottom, left, right, scale
- return frame, instyle, message
-
- #@torch.inference_mode()
- def detect_and_align_image(self, image: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if image is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load empty file.'
- frame = cv2.imread(image)
- if frame is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load the image.'
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # cv2.imread returns BGR; convert to RGB for the model
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_video(self, video: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if video is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load empty file.'
- video_cap = cv2.VideoCapture(video)
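- # cv2.VideoCapture.get numeric property ids used below: 3 = frame width, 4 = frame height, 5 = fps, 7 = frame count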
- if video_cap.get(7) == 0:
- video_cap.release()
- return np.zeros((256,256,3), np.uint8), torch.zeros(1,18,512).to(self.device), 'Error: fail to load the video.'
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- video_cap.release()
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_full_video(self, video: str, top: int, bottom: int, left: int, right: int) -> tuple[str, torch.Tensor, str]:
- message = 'Error: no face detected! Please retry or change the video.'
- instyle = None
- if video is None:
- return 'default.mp4', instyle, 'Error: fail to load empty file.'
- video_cap = cv2.VideoCapture(video)
- if video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', instyle, 'Error: fail to load the video.'
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- frame, instyle, message, w, h, top, bottom, left, right, scale = self.detect_and_align(frame, top, bottom, left, right, True)
- if instyle is None:
- return 'default.mp4', instyle, message
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter('input.mp4', fourcc, video_cap.get(5), (int(right-left), int(bottom-top)))
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- for i in range(num-1):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- videoWriter.release()
- video_cap.release()
-
- return 'input.mp4', instyle, 'Successfully rescaled the video to (%d, %d)'%(bottom-top, right-left)
-
- def image_toonify(self, aligned_face: np.ndarray, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[np.ndarray, str]:
- #print(style_type + ' ' + self.style_name)
- if instyle is None or aligned_face is None:
- return np.zeros((256,256,3), np.uint8), 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Image/First Frame again.'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- if exstyle is None:
- exstyle, _ = self.load_model(style_type)
- return np.zeros((256,256,3), np.uint8), 'Oops, something went wrong with the style type. Please go to Step 1 and load the model again.'
- with torch.no_grad():
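- # pick the style code: color-transfer styles use the external style code directly, while the
- # other styles keep the input's own latent and overwrite only its first 7 layers with the
- # external style, so the structure is stylized but the input's original colors are preserved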
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
-
- x = self.transform(aligned_face).unsqueeze(dim=0).to(self.device)
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- print('*** Toonify %dx%d image with style of %s'%(y_tilde.shape[2], y_tilde.shape[3], style_type))
- return ((y_tilde[0].cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8), 'Successfully toonified the image with the style of %s'%(self.style_name)
-
- def video_tooniy(self, aligned_video: str, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[str, str]:
- print(style_type + ' ' + self.style_name)
- exstyle, _ = self.load_model(style_type)
- if aligned_video is None:
- return 'default.mp4', 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Video again. 1'
- video_cap = cv2.VideoCapture(aligned_video)
- if instyle is None or aligned_video is None or video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Video again. 2'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter('output.mp4', fourcc,
- video_cap.get(5), (int(video_cap.get(3)*4),
- int(video_cap.get(4)*4)))
-
- batch_frames = []
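- # choose a batch size inversely proportional to the frame area, keeping each batch within a rough
- # pixel budget (about 4 frames at 256x256 on CPU, and between 1 and 4 frames at 400x360 on GPU)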
- if video_cap.get(3) != 0:
- if self.device == 'cpu':
- batch_size = max(1, int(4 * 256* 256/ video_cap.get(3) / video_cap.get(4)))
- else:
- batch_size = min(max(1, int(4 * 400 * 360/ video_cap.get(3) / video_cap.get(4))), 4)
- else:
- batch_size = 1
- print('*** Toonify using batch size of %d on %dx%d video of %d frames with style of %s'%(batch_size, int(video_cap.get(3)*4), int(video_cap.get(4)*4), num, style_type))
- with torch.no_grad():
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
- for i in range(num):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- batch_frames += [self.transform(frame).unsqueeze(dim=0).to(self.device)]
- if len(batch_frames) == batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter.write(tensor2cv2(y_tilde[k].cpu()))
- gc.collect()
-
- videoWriter.release()
- video_cap.release()
- return 'output.mp4', 'Successfully toonified video of %d frames with the style of %s'%(num, self.style_name)
-
-
diff --git a/spaces/7eu7d7/anime-ai-detect-fucker/attacker/PGD.py b/spaces/7eu7d7/anime-ai-detect-fucker/attacker/PGD.py
deleted file mode 100644
index 5ea381f86a267824b1f87722bc83d4a70c0d960e..0000000000000000000000000000000000000000
--- a/spaces/7eu7d7/anime-ai-detect-fucker/attacker/PGD.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import torch
-from torch import nn
-from copy import deepcopy
-from .base import Attacker, Empty
-from torch.cuda import amp
-from tqdm import tqdm
-
-class PGD(Attacker):
- def __init__(self, model, img_transform=(lambda x:x, lambda x:x), use_amp=False):
- super().__init__(model, img_transform)
- self.use_amp=use_amp
- self.call_back=None
- self.img_loader=None
- self.img_hook=None
-
- self.scaler = amp.GradScaler(enabled=use_amp)
-
- def set_para(self, eps=8, alpha=lambda:8, iters=20, **kwargs):
- super().set_para(eps=eps, alpha=alpha, iters=iters, **kwargs)
-
- def set_call_back(self, call_back):
- self.call_back=call_back
-
- def set_img_loader(self, img_loader):
- self.img_loader=img_loader
-
- def step(self, images, labels, loss):
- with amp.autocast(enabled=self.use_amp):
- images.requires_grad = True
- outputs = self.model(images).logits
-
- self.model.zero_grad()
- cost = loss(outputs, labels)#+outputs[2].view(-1)[0]*0+outputs[1].view(-1)[0]*0+outputs[0].view(-1)[0]*0 #support DDP
-
- self.scaler.scale(cost).backward()
-
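- # PGD update: take a signed-gradient ascent step of size alpha, project the perturbation back
- # into the L-infinity ball of radius eps around the original image, and clamp the result to a
- # valid image ([0, 1] in un-normalized space, mapped back and forth through img_transform)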
- adv_images = (images + self.alpha() * images.grad.sign()).detach_()
- eta = torch.clamp(adv_images - self.ori_images, min=-self.eps, max=self.eps)
- images = self.img_transform[0](torch.clamp(self.img_transform[1](self.ori_images + eta), min=0, max=1).detach_())
-
- return images
-
- def set_data(self, images, labels):
- self.ori_images = deepcopy(images)
- self.images = images
- self.labels = labels
-
- def __iter__(self):
- self.atk_step=0
- return self
-
- def __next__(self):
- self.atk_step += 1
- if self.atk_step>self.iters:
- raise StopIteration
-
- with self.model.no_sync() if isinstance(self.model, nn.parallel.DistributedDataParallel) else Empty():
- self.model.eval()
-
- self.images = self.forward(self, self.images, self.labels)
-
- self.model.zero_grad()
- self.model.train()
-
- return self.ori_images, self.images.detach(), self.labels
-
- def attack(self, images, labels):
- #images = deepcopy(images)
- self.ori_images = deepcopy(images)
-
- for i in tqdm(range(self.iters)):
- self.model.eval()
-
- images = self.forward(self, images, labels)
-
- self.model.zero_grad()
- self.model.train()
- if self.call_back:
- self.call_back(self.ori_images, images.detach(), labels)
-
- if self.img_hook is not None:
- images=self.img_hook(self.ori_images, images.detach())
-
- return images
\ No newline at end of file
diff --git a/spaces/AI-Dashboards/README/README.md b/spaces/AI-Dashboards/README/README.md
deleted file mode 100644
index aeeffd14c3f52c8aec0713de5c993edba2800522..0000000000000000000000000000000000000000
--- a/spaces/AI-Dashboards/README/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: README
-emoji: 👁
-colorFrom: indigo
-colorTo: purple
-sdk: static
-pinned: false
----
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/solvers/musicgen.py b/spaces/AIConsultant/MusicGen/audiocraft/solvers/musicgen.py
deleted file mode 100644
index bb615abf448f9dd07490aaabf3fff9b861a1b2cb..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/solvers/musicgen.py
+++ /dev/null
@@ -1,699 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from pathlib import Path
-import time
-import typing as tp
-
-import flashy
-import math
-import omegaconf
-import torch
-from torch.nn import functional as F
-
-from . import base, builders
-from .compression import CompressionSolver
-from .. import metrics as eval_metrics
-from .. import models
-from ..data.audio_dataset import AudioDataset
-from ..data.music_dataset import MusicDataset, MusicInfo, AudioInfo
-from ..data.audio_utils import normalize_audio
-from ..modules.conditioners import JointEmbedCondition, SegmentWithAttributes, WavCondition
-from ..utils.cache import CachedBatchWriter, CachedBatchLoader
-from ..utils.samples.manager import SampleManager
-from ..utils.utils import get_dataset_from_loader, is_jsonable, warn_once
-
-
-class MusicGenSolver(base.StandardSolver):
- """Solver for MusicGen training task.
-
- Used in: https://arxiv.org/abs/2306.05284
- """
- DATASET_TYPE: builders.DatasetType = builders.DatasetType.MUSIC
-
- def __init__(self, cfg: omegaconf.DictConfig):
- super().__init__(cfg)
- # easier access to sampling parameters
- self.generation_params = {
- 'use_sampling': self.cfg.generate.lm.use_sampling,
- 'temp': self.cfg.generate.lm.temp,
- 'top_k': self.cfg.generate.lm.top_k,
- 'top_p': self.cfg.generate.lm.top_p,
- }
- self._best_metric_name: tp.Optional[str] = 'ce'
-
- self._cached_batch_writer = None
- self._cached_batch_loader = None
- if cfg.cache.path:
- if cfg.cache.write:
- self._cached_batch_writer = CachedBatchWriter(Path(cfg.cache.path))
- if self.cfg.cache.write_num_shards:
- self.logger.warning("Multiple shard cache, best_metric_name will be set to None.")
- self._best_metric_name = None
- else:
- self._cached_batch_loader = CachedBatchLoader(
- Path(cfg.cache.path), cfg.dataset.batch_size, cfg.dataset.num_workers,
- min_length=self.cfg.optim.updates_per_epoch or 1)
- self.dataloaders['original_train'] = self.dataloaders['train']
- self.dataloaders['train'] = self._cached_batch_loader # type: ignore
-
- @staticmethod
- def get_eval_solver_from_sig(sig: str, dtype: tp.Optional[str] = None,
- device: tp.Optional[str] = None, autocast: bool = True,
- batch_size: tp.Optional[int] = None,
- override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None,
- **kwargs):
- """Mostly a convenience function around magma.train.get_solver_from_sig,
- populating all the proper param, deactivating EMA, FSDP, loading the best state,
- basically all you need to get a solver ready to "play" with in single GPU mode
- and with minimal memory overhead.
-
- Args:
- sig (str): signature to load.
- dtype (str or None): potential dtype, as a string, i.e. 'float16'.
- device (str or None): potential device, as a string, i.e. 'cuda'.
- override_cfg (dict or omegaconf.DictConfig or None): optional configuration overrides merged on top of the solver's config.
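-
- Example (a sketch; the signature string 'my_sig' is hypothetical):
-
- solver = MusicGenSolver.get_eval_solver_from_sig('my_sig', device='cuda', batch_size=1)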
- """
- from audiocraft import train
- our_override_cfg: tp.Dict[str, tp.Any] = {'optim': {'ema': {'use': False}}}
- our_override_cfg['autocast'] = autocast
- if dtype is not None:
- our_override_cfg['dtype'] = dtype
- if device is not None:
- our_override_cfg['device'] = device
- if batch_size is not None:
- our_override_cfg['dataset'] = {'batch_size': batch_size}
- if override_cfg is None:
- override_cfg = {}
- override_cfg = omegaconf.OmegaConf.merge(
- omegaconf.DictConfig(override_cfg), omegaconf.DictConfig(our_override_cfg)) # type: ignore
- solver = train.get_solver_from_sig(
- sig, override_cfg=override_cfg,
- load_best=True, disable_fsdp=True,
- ignore_state_keys=['optimizer', 'ema'], **kwargs)
- solver.model.eval()
- return solver
-
- def get_formatter(self, stage_name: str) -> flashy.Formatter:
- return flashy.Formatter({
- 'lr': '.2E',
- 'ce': '.3f',
- 'ppl': '.3f',
- 'grad_norm': '.3E',
- }, exclude_keys=['ce_q*', 'ppl_q*'])
-
- @property
- def best_metric_name(self) -> tp.Optional[str]:
- return self._best_metric_name
-
- def build_model(self) -> None:
- """Instantiate models and optimizer."""
- # we can potentially not use all quantizers with which the EnCodec model was trained
- # (e.g. we trained the model with quantizers dropout)
- self.compression_model = CompressionSolver.wrapped_model_from_checkpoint(
- self.cfg, self.cfg.compression_model_checkpoint, device=self.device)
- assert self.compression_model.sample_rate == self.cfg.sample_rate, (
- f"Compression model sample rate is {self.compression_model.sample_rate} but "
- f"Solver sample rate is {self.cfg.sample_rate}."
- )
- # ensure we have matching configuration between LM and compression model
- assert self.cfg.transformer_lm.card == self.compression_model.cardinality, (
- "Cardinalities of the LM and compression model don't match: ",
- f"LM cardinality is {self.cfg.transformer_lm.card} vs ",
- f"compression model cardinality is {self.compression_model.cardinality}"
- )
- assert self.cfg.transformer_lm.n_q == self.compression_model.num_codebooks, (
- "Numbers of codebooks of the LM and compression models don't match: ",
- f"LM number of codebooks is {self.cfg.transformer_lm.n_q} vs ",
- f"compression model numer of codebooks is {self.compression_model.num_codebooks}"
- )
- self.logger.info("Compression model has %d codebooks with %d cardinality, and a framerate of %d",
- self.compression_model.num_codebooks, self.compression_model.cardinality,
- self.compression_model.frame_rate)
- # instantiate LM model
- self.model: models.LMModel = models.builders.get_lm_model(self.cfg).to(self.device)
- if self.cfg.fsdp.use:
- assert not self.cfg.autocast, "Cannot use autocast with fsdp"
- self.model = self.wrap_with_fsdp(self.model)
- self.register_ema('model')
- # initialize optimization
- self.optimizer = builders.get_optimizer(builders.get_optim_parameter_groups(self.model), self.cfg.optim)
- self.lr_scheduler = builders.get_lr_scheduler(self.optimizer, self.cfg.schedule, self.total_updates)
- self.register_stateful('compression_model', 'model', 'optimizer', 'lr_scheduler')
- self.register_best_state('model')
- self.autocast_dtype = {
- 'float16': torch.float16, 'bfloat16': torch.bfloat16
- }[self.cfg.autocast_dtype]
- self.scaler: tp.Optional[torch.cuda.amp.GradScaler] = None
- if self.cfg.fsdp.use:
- need_scaler = self.cfg.fsdp.param_dtype == 'float16'
- else:
- need_scaler = self.cfg.autocast and self.autocast_dtype is torch.float16
- if need_scaler:
- if self.cfg.fsdp.use:
- from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler
- self.scaler = ShardedGradScaler() # type: ignore
- else:
- self.scaler = torch.cuda.amp.GradScaler()
- self.register_stateful('scaler')
-
- def build_dataloaders(self) -> None:
- """Instantiate audio dataloaders for each stage."""
- self.dataloaders = builders.get_audio_datasets(self.cfg, dataset_type=self.DATASET_TYPE)
-
- def show(self) -> None:
- """Show the compression model and LM model."""
- self.logger.info("Compression model:")
- self.log_model_summary(self.compression_model)
- self.logger.info("LM model:")
- self.log_model_summary(self.model)
-
- def load_state_dict(self, state: dict) -> None:
- if 'condition_provider' in state:
- model_state = state['model']
- condition_provider_state = state.pop('condition_provider')
- prefix = 'condition_provider.'
- for key, value in condition_provider_state.items():
- key = prefix + key
- assert key not in model_state
- model_state[key] = value
- super().load_state_dict(state)
-
- def load_from_pretrained(self, name: str):
- # TODO: support native HF versions of MusicGen.
- lm_pkg = models.loaders.load_lm_model_ckpt(name)
- state: dict = {
- 'best_state': {
- 'model': lm_pkg['best_state'],
- },
- }
- return state
-
- def _compute_cross_entropy(
- self, logits: torch.Tensor, targets: torch.Tensor, mask: torch.Tensor
- ) -> tp.Tuple[torch.Tensor, tp.List[torch.Tensor]]:
- """Compute cross entropy between multi-codebook targets and model's logits.
- The cross entropy is computed per codebook to provide codebook-level cross entropy.
- Valid timesteps for each of the codebook are pulled from the mask, where invalid
- timesteps are set to 0.
-
- Args:
- logits (torch.Tensor): Model's logits of shape [B, K, T, card].
- targets (torch.Tensor): Target codes, of shape [B, K, T].
- mask (torch.Tensor): Mask for valid target codes, of shape [B, K, T].
- Returns:
- ce (torch.Tensor): Cross entropy averaged over the codebooks
- ce_per_codebook (list of torch.Tensor): Cross entropy per codebook (detached).
- """
- B, K, T = targets.shape
- assert logits.shape[:-1] == targets.shape
- assert mask.shape == targets.shape
- ce = torch.zeros([], device=targets.device)
- ce_per_codebook: tp.List[torch.Tensor] = []
- for k in range(K):
- logits_k = logits[:, k, ...].contiguous().view(-1, logits.size(-1)) # [B x T, card]
- targets_k = targets[:, k, ...].contiguous().view(-1) # [B x T]
- mask_k = mask[:, k, ...].contiguous().view(-1) # [B x T]
- ce_targets = targets_k[mask_k]
- ce_logits = logits_k[mask_k]
- q_ce = F.cross_entropy(ce_logits, ce_targets)
- ce += q_ce
- ce_per_codebook.append(q_ce.detach())
- # average cross entropy across codebooks
- ce = ce / K
- return ce, ce_per_codebook
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self, batch: tp.Tuple[torch.Tensor, tp.List[SegmentWithAttributes]],
- check_synchronization_points: bool = False
- ) -> tp.Tuple[dict, torch.Tensor, torch.Tensor]:
- """Prepare input batchs for language model training.
-
- Args:
- batch (tuple[torch.Tensor, list[SegmentWithAttributes]]): Input batch with audio tensor of shape [B, C, T]
- and corresponding metadata as SegmentWithAttributes (with B items).
- check_synchronization_points (bool): Whether to check for synchronization points slowing down training.
- Returns:
- Condition tensors (dict[str, any]): Preprocessed condition attributes.
- Tokens (torch.Tensor): Audio tokens from compression model, of shape [B, K, T_s],
- with B the batch size, K the number of codebooks, T_s the token timesteps.
- Padding mask (torch.Tensor): Mask with valid positions in the tokens tensor, of shape [B, K, T_s].
- """
- if self._cached_batch_loader is None or self.current_stage != "train":
- audio, infos = batch
- audio = audio.to(self.device)
- audio_tokens = None
- assert audio.size(0) == len(infos), (
- f"Mismatch between number of items in audio batch ({audio.size(0)})",
- f" and in metadata ({len(infos)})"
- )
- else:
- audio = None
- # In that case the batch will be a tuple coming from the _cached_batch_writer bit below.
- infos, = batch # type: ignore
- assert all([isinstance(info, AudioInfo) for info in infos])
- assert all([info.audio_tokens is not None for info in infos]) # type: ignore
- audio_tokens = torch.stack([info.audio_tokens for info in infos]).to(self.device) # type: ignore
- audio_tokens = audio_tokens.long()
- for info in infos:
- if isinstance(info, MusicInfo):
- # Careful here, if you want to use this condition_wav (e.g. chroma conditioning),
- # then you must be using the chroma cache! otherwise the code will try
- # to use this segment and fail (by that I mean you will see NaN everywhere).
- info.self_wav = WavCondition(
- torch.full([1, info.channels, info.total_frames], float('NaN')),
- length=torch.tensor([info.n_frames]),
- sample_rate=[info.sample_rate],
- path=[info.meta.path],
- seek_time=[info.seek_time])
- dataset = get_dataset_from_loader(self.dataloaders['original_train'])
- assert isinstance(dataset, MusicDataset), type(dataset)
- if dataset.paraphraser is not None and info.description is not None:
- # Hackingly reapplying paraphraser when using cache.
- info.description = dataset.paraphraser.sample_paraphrase(
- info.meta.path, info.description)
- # prepare attributes
- attributes = [info.to_condition_attributes() for info in infos]
- attributes = self.model.cfg_dropout(attributes)
- attributes = self.model.att_dropout(attributes)
- tokenized = self.model.condition_provider.tokenize(attributes)
-
- # Now we should be synchronization free.
- if self.device == "cuda" and check_synchronization_points:
- torch.cuda.set_sync_debug_mode("warn")
-
- if audio_tokens is None:
- with torch.no_grad():
- audio_tokens, scale = self.compression_model.encode(audio)
- assert scale is None, "Scaled compression model not supported with LM."
-
- with self.autocast:
- condition_tensors = self.model.condition_provider(tokenized)
-
- # create a padding mask to hold valid vs invalid positions
- padding_mask = torch.ones_like(audio_tokens, dtype=torch.bool, device=audio_tokens.device)
- # replace encodec tokens from padded audio with special_token_id
- if self.cfg.tokens.padding_with_special_token:
- audio_tokens = audio_tokens.clone()
- padding_mask = padding_mask.clone()
- token_sample_rate = self.compression_model.frame_rate
- B, K, T_s = audio_tokens.shape
- for i in range(B):
- n_samples = infos[i].n_frames
- audio_sample_rate = infos[i].sample_rate
- # take the last token generated from actual audio frames (non-padded audio)
- valid_tokens = math.floor(float(n_samples) / audio_sample_rate * token_sample_rate)
- audio_tokens[i, :, valid_tokens:] = self.model.special_token_id
- padding_mask[i, :, valid_tokens:] = 0
-
- if self.device == "cuda" and check_synchronization_points:
- torch.cuda.set_sync_debug_mode("default")
-
- if self._cached_batch_writer is not None and self.current_stage == 'train':
- assert self._cached_batch_loader is None
- assert audio_tokens is not None
- for info, one_audio_tokens in zip(infos, audio_tokens):
- assert isinstance(info, AudioInfo)
- if isinstance(info, MusicInfo):
- assert not info.joint_embed, "joint_embed and cache not supported yet."
- info.self_wav = None
- assert one_audio_tokens.max() < 2**15, one_audio_tokens.max().item()
- info.audio_tokens = one_audio_tokens.short().cpu()
- self._cached_batch_writer.save(infos)
-
- return condition_tensors, audio_tokens, padding_mask
-
- def run_step(self, idx: int, batch: tp.Tuple[torch.Tensor, tp.List[SegmentWithAttributes]], metrics: dict) -> dict:
- """Perform one training or valid step on a given batch."""
- check_synchronization_points = idx == 1 and self.device == 'cuda'
-
- condition_tensors, audio_tokens, padding_mask = self._prepare_tokens_and_attributes(
- batch, check_synchronization_points)
-
- self.deadlock_detect.update('tokens_and_conditions')
-
- if check_synchronization_points:
- torch.cuda.set_sync_debug_mode('warn')
-
- with self.autocast:
- model_output = self.model.compute_predictions(audio_tokens, [], condition_tensors) # type: ignore
- logits = model_output.logits
- mask = padding_mask & model_output.mask
- ce, ce_per_codebook = self._compute_cross_entropy(logits, audio_tokens, mask)
- loss = ce
- self.deadlock_detect.update('loss')
-
- if check_synchronization_points:
- torch.cuda.set_sync_debug_mode('default')
-
- if self.is_training:
- metrics['lr'] = self.optimizer.param_groups[0]['lr']
- if self.scaler is not None:
- loss = self.scaler.scale(loss)
- self.deadlock_detect.update('scale')
- if self.cfg.fsdp.use:
- loss.backward()
- flashy.distrib.average_tensors(self.model.buffers())
- elif self.cfg.optim.eager_sync:
- with flashy.distrib.eager_sync_model(self.model):
- loss.backward()
- else:
- # this should always be slower but can be useful
- # for weird use cases like multiple backwards.
- loss.backward()
- flashy.distrib.sync_model(self.model)
- self.deadlock_detect.update('backward')
-
- if self.scaler is not None:
- self.scaler.unscale_(self.optimizer)
- if self.cfg.optim.max_norm:
- if self.cfg.fsdp.use:
- metrics['grad_norm'] = self.model.clip_grad_norm_(self.cfg.optim.max_norm) # type: ignore
- else:
- metrics['grad_norm'] = torch.nn.utils.clip_grad_norm_(
- self.model.parameters(), self.cfg.optim.max_norm
- )
- if self.scaler is None:
- self.optimizer.step()
- else:
- self.scaler.step(self.optimizer)
- self.scaler.update()
- if self.lr_scheduler:
- self.lr_scheduler.step()
- self.optimizer.zero_grad()
- self.deadlock_detect.update('optim')
- if self.scaler is not None:
- scale = self.scaler.get_scale()
- metrics['grad_scale'] = scale
- if not loss.isfinite().all():
- raise RuntimeError("Model probably diverged.")
-
- metrics['ce'] = ce
- metrics['ppl'] = torch.exp(ce)
- for k, ce_q in enumerate(ce_per_codebook):
- metrics[f'ce_q{k + 1}'] = ce_q
- metrics[f'ppl_q{k + 1}'] = torch.exp(ce_q)
-
- return metrics
-
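-    # Illustrative sketch (not part of the original file): a minimal version of the per-codebook
-    # masked cross-entropy that `_compute_cross_entropy` is expected to return above, assuming
-    # logits of shape [B, K, T, card] and int64 targets / boolean mask of shape [B, K, T].
-    # Names and shapes are assumptions, not the actual implementation.
-    @staticmethod
-    def _masked_cross_entropy_sketch(logits: torch.Tensor, targets: torch.Tensor, mask: torch.Tensor):
-        B, K, T, card = logits.shape
-        ce_per_codebook = []
-        for k in range(K):
-            logits_k = logits[:, k].reshape(-1, card)   # [B * T, card]
-            targets_k = targets[:, k].reshape(-1)       # [B * T]
-            mask_k = mask[:, k].reshape(-1)             # [B * T], True on valid positions
-            ce_per_codebook.append(torch.nn.functional.cross_entropy(logits_k[mask_k], targets_k[mask_k]))
-        ce = torch.stack(ce_per_codebook).mean()
-        return ce, ce_per_codebook
-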
- @torch.no_grad()
- def run_generate_step(self, batch: tp.Tuple[torch.Tensor, tp.List[SegmentWithAttributes]],
- gen_duration: float, prompt_duration: tp.Optional[float] = None,
- remove_prompt: bool = False,
- **generation_params) -> dict:
- """Run generate step on a batch of optional audio tensor and corresponding attributes.
-
- Args:
-            batch (tuple[torch.Tensor, list[SegmentWithAttributes]]): Batch of an audio tensor and
-                corresponding attributes; the audio may be used as a prompt for continuation.
- gen_duration (float): Target audio duration for the generation.
- prompt_duration (float, optional): Duration for the audio prompt to use for continuation.
- remove_prompt (bool, optional): Whether to remove the prompt from the generated audio.
- generation_params: Additional generation parameters.
- Returns:
-            gen_outputs (dict): Generation outputs, consisting of audio and audio tokens from both the
-                generation and the prompt, along with additional information.
- """
- bench_start = time.time()
- audio, meta = batch
- assert audio.size(0) == len(meta), (
-            f"Mismatch between number of items in audio batch ({audio.size(0)})"
- f" and in metadata ({len(meta)})"
- )
- # prepare attributes
- attributes = [x.to_condition_attributes() for x in meta]
- # TODO: Add dropout for chroma?
-
- # prepare audio prompt
- if prompt_duration is None:
- prompt_audio = None
- else:
- assert prompt_duration < gen_duration, "Prompt duration must be lower than target generation duration"
- prompt_audio_frames = int(prompt_duration * self.compression_model.sample_rate)
- prompt_audio = audio[..., :prompt_audio_frames]
-
- # get audio tokens from compression model
- if prompt_audio is None or prompt_audio.nelement() == 0:
- num_samples = len(attributes)
- prompt_tokens = None
- else:
- num_samples = None
- prompt_audio = prompt_audio.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt_audio)
- assert scale is None, "Compression model in MusicGen should not require rescaling."
-
- # generate by sampling from the LM
- with self.autocast:
- total_gen_len = math.ceil(gen_duration * self.compression_model.frame_rate)
- gen_tokens = self.model.generate(
- prompt_tokens, attributes, max_gen_len=total_gen_len,
- num_samples=num_samples, **self.generation_params)
-
- # generate audio from tokens
- assert gen_tokens.dim() == 3
- gen_audio = self.compression_model.decode(gen_tokens, None)
-
- bench_end = time.time()
- gen_outputs = {
- 'rtf': (bench_end - bench_start) / gen_duration,
- 'ref_audio': audio,
- 'gen_audio': gen_audio,
- 'gen_tokens': gen_tokens,
- 'prompt_audio': prompt_audio,
- 'prompt_tokens': prompt_tokens,
- }
- return gen_outputs
-
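-    # Illustrative note (not part of the original file): 'rtf' above is the real-time factor,
-    # i.e. wall-clock generation time divided by generated audio duration. For example,
-    # producing 30 s of audio in 15 s of compute gives rtf = 0.5 (faster than real time).
-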
- def generate_audio(self) -> dict:
- """Audio generation stage."""
- generate_stage_name = f'{self.current_stage}'
- sample_manager = SampleManager(self.xp)
- self.logger.info(f"Generating samples in {sample_manager.base_folder}")
- loader = self.dataloaders['generate']
- updates = len(loader)
- lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates)
-
- dataset = get_dataset_from_loader(loader)
- dataset_duration = dataset.segment_duration
- assert dataset_duration is not None
- assert isinstance(dataset, AudioDataset)
- target_duration = self.cfg.generate.lm.gen_duration
- prompt_duration = self.cfg.generate.lm.prompt_duration
- if target_duration is None:
- target_duration = dataset_duration
- if prompt_duration is None:
- prompt_duration = dataset_duration / 4
- assert prompt_duration < dataset_duration, (
-            f"Specified prompt duration ({prompt_duration}s) is longer"
- f" than reference audio duration ({dataset_duration}s)"
- )
-
- def get_hydrated_conditions(meta: tp.List[SegmentWithAttributes]):
- hydrated_conditions = []
- for sample in [x.to_condition_attributes() for x in meta]:
- cond_dict = {}
- for cond_type in sample.__annotations__.keys():
- for cond_key, cond_val in getattr(sample, cond_type).items():
- if cond_key not in self.model.condition_provider.conditioners.keys():
- continue
- if is_jsonable(cond_val):
- cond_dict[cond_key] = cond_val
- elif isinstance(cond_val, WavCondition):
- cond_dict[cond_key] = cond_val.path
- elif isinstance(cond_val, JointEmbedCondition):
- cond_dict[cond_key] = cond_val.text # only support text at inference for now
- else:
- # if we reached this point, it is not clear how to log the condition
- # so we just log the type.
- cond_dict[cond_key] = str(type(cond_val))
- continue
- hydrated_conditions.append(cond_dict)
- return hydrated_conditions
-
- metrics: dict = {}
- average = flashy.averager()
- for batch in lp:
- audio, meta = batch
- # metadata for sample manager
- hydrated_conditions = get_hydrated_conditions(meta)
- sample_generation_params = {
- **{f'classifier_free_guidance_{k}': v for k, v in self.cfg.classifier_free_guidance.items()},
- **self.generation_params
- }
- if self.cfg.generate.lm.unprompted_samples:
- if self.cfg.generate.lm.gen_gt_samples:
- # get the ground truth instead of generation
-                    self.logger.warning(
- "Use ground truth instead of audio generation as generate.lm.gen_gt_samples=true")
- gen_unprompted_audio = audio
- rtf = 1.
- else:
- gen_unprompted_outputs = self.run_generate_step(
- batch, gen_duration=target_duration, prompt_duration=prompt_duration,
- **self.generation_params)
- gen_unprompted_audio = gen_unprompted_outputs['gen_audio'].cpu()
- rtf = gen_unprompted_outputs['rtf']
- sample_manager.add_samples(
- gen_unprompted_audio, self.epoch, hydrated_conditions,
- ground_truth_wavs=audio, generation_args=sample_generation_params)
-
- if self.cfg.generate.lm.prompted_samples:
- gen_outputs = self.run_generate_step(
- batch, gen_duration=target_duration, prompt_duration=prompt_duration,
- **self.generation_params)
- gen_audio = gen_outputs['gen_audio'].cpu()
- prompt_audio = gen_outputs['prompt_audio'].cpu()
- sample_manager.add_samples(
- gen_audio, self.epoch, hydrated_conditions,
- prompt_wavs=prompt_audio, ground_truth_wavs=audio,
- generation_args=sample_generation_params)
-
- metrics['rtf'] = rtf
- metrics = average(metrics)
-
- flashy.distrib.barrier()
- return metrics
-
- def generate(self) -> dict:
- """Generate stage."""
- self.model.eval()
- with torch.no_grad():
- return self.generate_audio()
-
- def run_epoch(self):
- if self.cfg.cache.write:
- if ((self.epoch - 1) % self.cfg.cache.write_num_shards) != self.cfg.cache.write_shard:
- return
- super().run_epoch()
-
- def train(self):
- """Train stage.
- """
- if self._cached_batch_writer is not None:
- self._cached_batch_writer.start_epoch(self.epoch)
- if self._cached_batch_loader is None:
- dataset = get_dataset_from_loader(self.dataloaders['train'])
- assert isinstance(dataset, AudioDataset)
- dataset.current_epoch = self.epoch
- else:
- self._cached_batch_loader.start_epoch(self.epoch)
- return super().train()
-
- def evaluate_audio_generation(self) -> dict:
- """Evaluate audio generation with off-the-shelf metrics."""
- evaluate_stage_name = f'{self.current_stage}_generation'
- # instantiate evaluation metrics, if at least one metric is defined, run audio generation evaluation
- fad: tp.Optional[eval_metrics.FrechetAudioDistanceMetric] = None
- kldiv: tp.Optional[eval_metrics.KLDivergenceMetric] = None
- text_consistency: tp.Optional[eval_metrics.TextConsistencyMetric] = None
- chroma_cosine: tp.Optional[eval_metrics.ChromaCosineSimilarityMetric] = None
- should_run_eval = False
- eval_chroma_wavs: tp.Optional[torch.Tensor] = None
- if self.cfg.evaluate.metrics.fad:
- fad = builders.get_fad(self.cfg.metrics.fad).to(self.device)
- should_run_eval = True
- if self.cfg.evaluate.metrics.kld:
- kldiv = builders.get_kldiv(self.cfg.metrics.kld).to(self.device)
- should_run_eval = True
- if self.cfg.evaluate.metrics.text_consistency:
- text_consistency = builders.get_text_consistency(self.cfg.metrics.text_consistency).to(self.device)
- should_run_eval = True
- if self.cfg.evaluate.metrics.chroma_cosine:
- chroma_cosine = builders.get_chroma_cosine_similarity(self.cfg.metrics.chroma_cosine).to(self.device)
-            # if we have predefined wavs for chroma, we should reset them before computing the cosine metric
- has_predefined_eval_chromas = 'self_wav' in self.model.condition_provider.conditioners and \
- self.model.condition_provider.conditioners['self_wav'].has_eval_wavs()
- if has_predefined_eval_chromas:
- warn_once(self.logger, "Attempting to run cosine eval for config with pre-defined eval chromas! "
- 'Resetting eval chromas to None for evaluation.')
- eval_chroma_wavs = self.model.condition_provider.conditioners.self_wav.eval_wavs # type: ignore
- self.model.condition_provider.conditioners.self_wav.reset_eval_wavs(None) # type: ignore
- should_run_eval = True
-
- def get_compressed_audio(audio: torch.Tensor) -> torch.Tensor:
- audio_tokens, scale = self.compression_model.encode(audio.to(self.device))
- compressed_audio = self.compression_model.decode(audio_tokens, scale)
- return compressed_audio[..., :audio.shape[-1]]
-
- metrics: dict = {}
- if should_run_eval:
- loader = self.dataloaders['evaluate']
- updates = len(loader)
- lp = self.log_progress(f'{evaluate_stage_name} inference', loader, total=updates, updates=self.log_updates)
- average = flashy.averager()
- dataset = get_dataset_from_loader(loader)
- assert isinstance(dataset, AudioDataset)
- self.logger.info(f"Computing evaluation metrics on {len(dataset)} samples")
-
- for idx, batch in enumerate(lp):
- audio, meta = batch
- assert all([self.cfg.sample_rate == m.sample_rate for m in meta])
-
- target_duration = audio.shape[-1] / self.cfg.sample_rate
- if self.cfg.evaluate.fixed_generation_duration:
- target_duration = self.cfg.evaluate.fixed_generation_duration
-
- gen_outputs = self.run_generate_step(
- batch, gen_duration=target_duration,
- **self.generation_params
- )
- y_pred = gen_outputs['gen_audio'].detach()
- y_pred = y_pred[..., :audio.shape[-1]]
-
- normalize_kwargs = dict(self.cfg.generate.audio)
- normalize_kwargs.pop('format', None)
- y_pred = torch.stack([normalize_audio(w, **normalize_kwargs) for w in y_pred], dim=0).cpu()
- y = audio.cpu() # should already be on CPU but just in case
- sizes = torch.tensor([m.n_frames for m in meta]) # actual sizes without padding
- sample_rates = torch.tensor([m.sample_rate for m in meta]) # sample rates for audio samples
- audio_stems = [Path(m.meta.path).stem + f"_{m.seek_time}" for m in meta]
-
- if fad is not None:
- if self.cfg.metrics.fad.use_gt:
- y_pred = get_compressed_audio(y).cpu()
- fad.update(y_pred, y, sizes, sample_rates, audio_stems)
- if kldiv is not None:
- if self.cfg.metrics.kld.use_gt:
- y_pred = get_compressed_audio(y).cpu()
- kldiv.update(y_pred, y, sizes, sample_rates)
- if text_consistency is not None:
- texts = [m.description for m in meta]
- if self.cfg.metrics.text_consistency.use_gt:
- y_pred = y
- text_consistency.update(y_pred, texts, sizes, sample_rates)
- if chroma_cosine is not None:
- if self.cfg.metrics.chroma_cosine.use_gt:
- y_pred = get_compressed_audio(y).cpu()
- chroma_cosine.update(y_pred, y, sizes, sample_rates)
- # restore chroma conditioner's eval chroma wavs
- if eval_chroma_wavs is not None:
- self.model.condition_provider.conditioners['self_wav'].reset_eval_wavs(eval_chroma_wavs)
-
- flashy.distrib.barrier()
- if fad is not None:
- metrics['fad'] = fad.compute()
- if kldiv is not None:
- kld_metrics = kldiv.compute()
- metrics.update(kld_metrics)
- if text_consistency is not None:
- metrics['text_consistency'] = text_consistency.compute()
- if chroma_cosine is not None:
- metrics['chroma_cosine'] = chroma_cosine.compute()
- metrics = average(metrics)
- metrics = flashy.distrib.average_metrics(metrics, len(loader))
-
- return metrics
-
- def evaluate(self) -> dict:
- """Evaluate stage."""
- self.model.eval()
- with torch.no_grad():
- metrics: dict = {}
- if self.cfg.evaluate.metrics.base:
- metrics.update(self.common_train_valid('evaluate'))
- gen_metrics = self.evaluate_audio_generation()
- return {**metrics, **gen_metrics}
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/dataset_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/dataset_utils.py
deleted file mode 100644
index a511ac10828ee6ae5e4813a367d433059a9a2b4d..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/dataset_utils.py
+++ /dev/null
@@ -1,311 +0,0 @@
-import numpy as np
-import torch
-import torch.distributions
-import torch.optim
-import torch.utils.data
-from text_to_speech.utils.audio.pitch.utils import norm_interp_f0, denorm_f0
-from text_to_speech.utils.commons.dataset_utils import BaseDataset, collate_1d_or_2d
-from text_to_speech.utils.commons.indexed_datasets import IndexedDataset
-from text_to_speech.utils.commons.hparams import hparams
-import random
-
-
-class BaseSpeechDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False, items=None, data_dir=None):
- super().__init__(shuffle)
- from text_to_speech.utils.commons.hparams import hparams
- self.data_dir = hparams['binary_data_dir'] if data_dir is None else data_dir
- self.prefix = prefix
- self.hparams = hparams
- self.indexed_ds = None
- if items is not None:
- self.indexed_ds = items
- self.sizes = [1] * len(items)
- self.avail_idxs = list(range(len(self.sizes)))
- else:
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
- if prefix == 'test' and len(hparams['test_ids']) > 0:
- self.avail_idxs = hparams['test_ids']
- else:
- self.avail_idxs = list(range(len(self.sizes)))
- if prefix == 'train' and hparams['min_frames'] > 0:
- self.avail_idxs = [x for x in self.avail_idxs if self.sizes[x] >= hparams['min_frames']]
- try:
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-            except IndexError:
- tmp_sizes = []
- for i in self.avail_idxs:
- try:
- tmp_sizes.append(self.sizes[i])
-                    except IndexError:
- continue
- self.sizes = tmp_sizes
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- assert len(item['mel']) == self.sizes[index], (len(item['mel']), self.sizes[index])
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- max_frames = spec.shape[0] // hparams['frames_multiple'] * hparams['frames_multiple']
- spec = spec[:max_frames]
- ph_token = torch.LongTensor(item['ph_token'][:hparams['max_input_tokens']])
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "txt_token": ph_token,
- "mel": spec,
- "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- if hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams['use_spk_id']:
- sample["spk_id"] = int(item['spk_id'])
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- hparams = self.hparams
- ids = [s['id'] for s in samples]
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- txt_tokens = collate_1d_or_2d([s['txt_token'] for s in samples], 0)
- mels = collate_1d_or_2d([s['mel'] for s in samples], 0.0)
- txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
- batch = {
- 'id': ids,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- }
-
- if hparams['use_spk_embed']:
- spk_embed = torch.stack([s['spk_embed'] for s in samples])
- batch['spk_embed'] = spk_embed
- if hparams['use_spk_id']:
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
- batch['spk_ids'] = spk_ids
- return batch
-
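-    # Illustrative sketch (not part of the original file): the right-padding that
-    # `collate_1d_or_2d` is expected to apply to the variable-length items above.
-    # A minimal 1D stand-in with a hypothetical name, not the actual utility.
-    @staticmethod
-    def _pad_1d_sketch(seqs, pad_value=0):
-        max_len = max(s.numel() for s in seqs)
-        out = seqs[0].new_full((len(seqs), max_len), pad_value)
-        for i, s in enumerate(seqs):
-            out[i, :s.numel()] = s
-        return out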
-
-class FastSpeechDataset(BaseSpeechDataset):
- def __getitem__(self, index):
- sample = super(FastSpeechDataset, self).__getitem__(index)
- item = self._get_item(index)
- hparams = self.hparams
- mel = sample['mel']
- T = mel.shape[0]
- ph_token = sample['txt_token']
- sample['mel2ph'] = mel2ph = torch.LongTensor(item['mel2ph'])[:T]
- if hparams['use_pitch_embed']:
- assert 'f0' in item
- pitch = torch.LongTensor(item.get(hparams.get('pitch_key', 'pitch')))[:T]
- f0, uv = norm_interp_f0(item["f0"][:T])
- uv = torch.FloatTensor(uv)
- f0 = torch.FloatTensor(f0)
- if hparams['pitch_type'] == 'ph':
- if "f0_ph" in item:
- f0 = torch.FloatTensor(item['f0_ph'])
- else:
- f0 = denorm_f0(f0, None)
- f0_phlevel_sum = torch.zeros_like(ph_token).float().scatter_add(0, mel2ph - 1, f0)
- f0_phlevel_num = torch.zeros_like(ph_token).float().scatter_add(
- 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
- f0_ph = f0_phlevel_sum / f0_phlevel_num
- f0, uv = norm_interp_f0(f0_ph)
- else:
- f0, uv, pitch = None, None, None
- sample["f0"], sample["uv"], sample["pitch"] = f0, uv, pitch
- return sample
-
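-    # Illustrative sketch (not part of the original file): the scatter_add calls above average
-    # frame-level f0 into phoneme-level f0, assuming every entry of `mel2ph` is a 1-based
-    # phoneme index (>= 1). The helper name is hypothetical.
-    @staticmethod
-    def _frame_f0_to_ph_f0(f0: torch.Tensor, mel2ph: torch.Tensor, n_ph: int) -> torch.Tensor:
-        f0_sum = torch.zeros(n_ph).scatter_add(0, mel2ph - 1, f0)
-        f0_cnt = torch.zeros(n_ph).scatter_add(0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
-        return f0_sum / f0_cnt  # mean f0 of the frames assigned to each phoneme
-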
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- batch = super(FastSpeechDataset, self).collater(samples)
- hparams = self.hparams
- if hparams['use_pitch_embed']:
- f0 = collate_1d_or_2d([s['f0'] for s in samples], 0.0)
- pitch = collate_1d_or_2d([s['pitch'] for s in samples])
- uv = collate_1d_or_2d([s['uv'] for s in samples])
- else:
- f0, uv, pitch = None, None, None
- mel2ph = collate_1d_or_2d([s['mel2ph'] for s in samples], 0.0)
- batch.update({
- 'mel2ph': mel2ph,
- 'pitch': pitch,
- 'f0': f0,
- 'uv': uv,
- })
- return batch
-
-class FastSpeechWordDataset(FastSpeechDataset):
- def __init__(self, prefix, shuffle=False, items=None, data_dir=None):
- super().__init__(prefix, shuffle, items, data_dir)
- # BERT contrastive loss & mlm loss
- # from transformers import AutoTokenizer
- # if hparams['ds_name'] in ['ljspeech', 'libritts']:
- # self.tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
- # elif hparams['ds_name'] == 'biaobei':
- # self.tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese')
- # else:
- # raise NotImplementedError()
- # self.mlm_probability = 0.15
- # if hparams.get("cl_ds_name") is None:
- # pass
- # elif hparams['cl_ds_name'] == "wiki":
- # from experimental_yerfor.simcse_datasets import WikiDataset
- # self.cl_dataset = WikiDataset(prefix=prefix)
- # shuffle = True if prefix == 'train' else False
- # endless = True
- # num_workers = None if prefix == 'train' else 0
- # self.cl_dataloader = self.cl_dataset.build_dataloader(shuffle=shuffle, max_tokens=hparams.get("cl_max_tokens", 3200),
- # max_sentences=hparams.get("cl_max_sentences", 64), endless=endless, num_workers=num_workers)
- # self.cl_dl_iter = iter(self.cl_dataloader)
- # elif hparams['cl_ds_name'] == "nli":
- # from experimental_yerfor.simcse_datasets import NLIDataset
- # self.cl_dataset = NLIDataset(prefix=prefix)
- # shuffle = True if prefix == 'train' else False
- # endless = True
- # num_workers = None if prefix == 'train' else 0
- # self.cl_dataloader = self.cl_dataset.build_dataloader(shuffle=shuffle, max_tokens=hparams.get("cl_max_tokens", 4800),
- # max_sentences=hparams.get("cl_max_sentences", 128), endless=endless, num_workers=num_workers)
- # self.cl_dl_iter = iter(self.cl_dataloader)
-
- def __getitem__(self, index):
- sample = super().__getitem__(index)
- item = self._get_item(index)
- max_frames = sample['mel'].shape[0]
- if 'word' in item:
- sample['words'] = item['word']
- sample["ph_words"] = item["ph_gb_word"]
- sample["word_tokens"] = torch.LongTensor(item["word_token"])
- else:
- sample['words'] = item['words']
- sample["ph_words"] = " ".join(item["ph_words"])
- sample["word_tokens"] = torch.LongTensor(item["word_tokens"])
- sample["mel2word"] = torch.LongTensor(item.get("mel2word"))[:max_frames]
- sample["ph2word"] = torch.LongTensor(item['ph2word'][:self.hparams['max_input_tokens']])
-
- # SyntaSpeech related features
- # sample['dgl_graph'] = item['dgl_graph']
- # sample['edge_types'] = item['edge_types']
-
- # BERT related features
- # sample['bert_token'] = item['bert_token']
- # sample['bert_input_ids'] = torch.LongTensor(item['bert_input_ids'])
- # sample['bert_token2word'] = torch.LongTensor(item['bert_token2word'])
- # sample['bert_attention_mask'] = torch.LongTensor(item['bert_attention_mask'])
- # sample['bert_token_type_ids'] = torch.LongTensor(item['bert_token_type_ids'])
-
- return sample
-
- def collater(self, samples):
- samples = [s for s in samples if s is not None]
- batch = super().collater(samples)
- ph_words = [s['ph_words'] for s in samples]
- batch['ph_words'] = ph_words
- word_tokens = collate_1d_or_2d([s['word_tokens'] for s in samples], 0)
- batch['word_tokens'] = word_tokens
- mel2word = collate_1d_or_2d([s['mel2word'] for s in samples], 0)
- batch['mel2word'] = mel2word
- ph2word = collate_1d_or_2d([s['ph2word'] for s in samples], 0)
- batch['ph2word'] = ph2word
- batch['words'] = [s['words'] for s in samples]
- batch['word_lengths'] = torch.LongTensor([len(s['word_tokens']) for s in samples])
- if self.hparams['use_word_input']: # always False
- batch['txt_tokens'] = batch['word_tokens']
- batch['txt_lengths'] = torch.LongTensor([s['word_tokens'].numel() for s in samples])
- batch['mel2ph'] = batch['mel2word']
-
- # SyntaSpeech
- # graph_lst, etypes_lst = [], [] # new features for Graph-based SDP
- # for s in samples:
- # graph_lst.append(s['dgl_graph'])
- # etypes_lst.append(s['edge_types'])
- # batch.update({
- # 'graph_lst': graph_lst,
- # 'etypes_lst': etypes_lst,
- # })
-
- # BERT
- # batch['bert_feats'] = {}
- # batch['bert_feats']['bert_tokens'] = [s['bert_token'] for s in samples]
- # bert_input_ids = collate_1d_or_2d([s['bert_input_ids'] for s in samples], 0)
- # batch['bert_feats']['bert_input_ids'] = bert_input_ids
- # bert_token2word = collate_1d_or_2d([s['bert_token2word'] for s in samples], 0)
- # batch['bert_feats']['bert_token2word'] = bert_token2word
- # bert_attention_mask = collate_1d_or_2d([s['bert_attention_mask'] for s in samples], 0)
- # batch['bert_feats']['bert_attention_mask'] = bert_attention_mask
- # bert_token_type_ids = collate_1d_or_2d([s['bert_token_type_ids'] for s in samples], 0)
- # batch['bert_feats']['bert_token_type_ids'] = bert_token_type_ids
-
- # BERT contrastive loss & mlm loss & electra loss
- # if hparams.get("cl_ds_name") is None:
- # batch['cl_feats'] = {}
- # batch['cl_feats']['cl_input_ids'] = batch['bert_feats']['bert_input_ids'].unsqueeze(1).repeat([1,2,1])
- # batch['cl_feats']['cl_token2word'] = batch['bert_feats']['bert_token2word'].unsqueeze(1).repeat([1,2,1])
- # batch['cl_feats']['cl_attention_mask'] = batch['bert_feats']['bert_attention_mask'].unsqueeze(1).repeat([1,2,1])
- # batch['cl_feats']['cl_token_type_ids'] = batch['bert_feats']['bert_token_type_ids'].unsqueeze(1).repeat([1,2,1])
- # bs, _, t = batch['cl_feats']['cl_input_ids'].shape
- # mlm_input_ids, mlm_labels = self.mask_tokens(batch['bert_feats']['bert_input_ids'].reshape([bs, t]))
- # batch['cl_feats']["mlm_input_ids"] = mlm_input_ids.reshape([bs, t])
- # batch['cl_feats']["mlm_labels"] = mlm_labels.reshape([bs, t])
- # batch['cl_feats']["mlm_attention_mask"] = batch['bert_feats']['bert_attention_mask']
- # elif hparams['cl_ds_name'] in ["wiki", "nli"]:
- # try:
- # cl_feats = self.cl_dl_iter.__next__()
- # except:
- # self.cl_dl_iter = iter(self.cl_dataloader)
- # cl_feats = self.cl_dl_iter.__next__()
- # batch['cl_feats'] = cl_feats
- return batch
-
- # def mask_tokens(self, inputs, special_tokens_mask=None):
- # """
- # Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
- # """
- # inputs = inputs.clone()
- # labels = inputs.clone()
- # # We sample a few tokens in each sequence for MLM training (with probability `self.mlm_probability`)
- # probability_matrix = torch.full(labels.shape, self.mlm_probability)
- # if special_tokens_mask is None:
- # special_tokens_mask = [
- # self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
- # ]
- # special_tokens_mask = torch.tensor(special_tokens_mask, dtype=torch.bool)
- # else:
- # special_tokens_mask = special_tokens_mask.bool()
-
- # probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
- # masked_indices = torch.bernoulli(probability_matrix).bool()
- # labels[~masked_indices] = -100 # We only compute loss on masked tokens
-
- # # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
- # indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
- # inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
-
- # # 10% of the time, we replace masked input tokens with random word
- # indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
- # random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
- # inputs[indices_random] = random_words[indices_random]
-
- # # The rest of the time (10% of the time) we keep the masked input tokens unchanged
- # return inputs, labels
-
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs_adv.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs_adv.py
deleted file mode 100644
index af360054fb3184bc49338660c8b537db0426e168..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs_adv.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import os
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-import numpy as np
-
-from text_to_speech.modules.tts.syntaspeech.multi_window_disc import Discriminator
-from tasks.tts.fs import FastSpeechTask
-from text_to_speech.modules.tts.fs import FastSpeech
-
-from text_to_speech.utils.audio.align import mel2token_to_dur
-from text_to_speech.utils.commons.hparams import hparams
-from text_to_speech.utils.nn.model_utils import num_params
-from text_to_speech.utils.commons.tensor_utils import tensors_to_scalars
-from text_to_speech.utils.audio.pitch.utils import denorm_f0, norm_f0
-from text_to_speech.utils.audio.pitch_extractors import get_pitch
-from text_to_speech.utils.metrics.dtw import dtw as DTW
-
-from text_to_speech.utils.plot.plot import spec_to_figure
-from text_to_speech.utils.text.text_encoder import build_token_encoder
-
-
-class FastSpeechAdvTask(FastSpeechTask):
- def __init__(self):
- super().__init__()
- self.build_disc_model()
- self.mse_loss_fn = torch.nn.MSELoss()
-
- def build_tts_model(self):
- dict_size = len(self.token_encoder)
- self.model = FastSpeech(dict_size, hparams)
- self.gen_params = [p for p in self.model.parameters() if p.requires_grad]
- self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)]
- self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)]
- self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)]
- self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad ]
- self.use_bert = True if len(self.bert_params) > 0 else False
-
-
- def build_disc_model(self):
- disc_win_num = hparams['disc_win_num']
- h = hparams['mel_disc_hidden_size']
- self.mel_disc = Discriminator(
- time_lengths=[32, 64, 128][:disc_win_num],
- freq_length=80, hidden_size=h, kernel=(3, 3)
- )
- self.disc_params = list(self.mel_disc.parameters())
-
- def _training_step(self, sample, batch_idx, optimizer_idx):
- loss_output = {}
- loss_weights = {}
- disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
- if optimizer_idx == 0:
- #######################
- # Generator #
- #######################
- loss_output, model_out = self.run_model(sample, infer=False)
- self.model_out_gt = self.model_out = \
- {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
- if disc_start:
- mel_p = model_out['mel_out']
- if hasattr(self.model, 'out2mel'):
- mel_p = self.model.out2mel(mel_p)
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
- loss_weights['a'] = hparams['lambda_mel_adv']
- if pc_ is not None:
- loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
- loss_weights['ac'] = hparams['lambda_mel_adv']
- else:
- #######################
- # Discriminator #
- #######################
- if disc_start and self.global_step % hparams['disc_interval'] == 0:
- model_out = self.model_out_gt
- mel_g = sample['mels']
- mel_p = model_out['mel_out']
- o = self.mel_disc(mel_g)
- p, pc = o['y'], o['y_c']
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
- loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
- if pc_ is not None:
- loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
- loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
- else:
- return None
- total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
- return total_loss, loss_output
-
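-    # Illustrative sketch (not part of the original file): the adversarial terms above follow the
-    # least-squares GAN objective. The generator pushes D(mel_pred) towards 1, while the
-    # discriminator pushes D(mel_gt) towards 1 and D(mel_pred) towards 0. `d_out_real` and
-    # `d_out_fake` are hypothetical names for the discriminator outputs.
-    def _lsgan_losses_sketch(self, d_out_real, d_out_fake):
-        gen_loss = self.mse_loss_fn(d_out_fake, d_out_fake.new_ones(d_out_fake.size()))
-        disc_loss = self.mse_loss_fn(d_out_real, d_out_real.new_ones(d_out_real.size())) \
-                    + self.mse_loss_fn(d_out_fake.detach(), d_out_fake.new_zeros(d_out_fake.size()))
-        return gen_loss, disc_loss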
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(sample)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = tensors_to_scalars(outputs)
- if self.global_step % hparams['valid_infer_interval'] == 0 \
- and batch_idx < hparams['num_valid_plots']:
- valid_results = self.save_valid_result(sample, batch_idx, model_out)
- wav_gt = valid_results['wav_gt']
- mel_gt = valid_results['mel_gt']
- wav_pred = valid_results['wav_pred']
- mel_pred = valid_results['mel_pred']
- f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- manhattan_distance = lambda x, y: np.abs(x - y)
- dist, cost, acc, path = DTW(f0_pred_, f0_gt_, manhattan_distance)
- outputs['losses']['f0_dtw'] = dist / len(f0_gt_)
- return outputs
-
- def save_valid_result(self, sample, batch_idx, model_out):
- sr = hparams['audio_sample_rate']
- f0_gt = None
- mel_out = model_out['mel_out']
- if sample.get('f0') is not None:
- f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
- self.plot_mel(batch_idx, sample['mels'], mel_out, f0s=f0_gt)
-
- # if self.global_step > 0:
- wav_pred = self.vocoder.spec2wav(mel_out[0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_val_{batch_idx}', wav_pred, self.global_step, sr)
- # with gt duration
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=True)
- dur_info = self.get_plot_dur_info(sample, model_out)
- del dur_info['dur_pred']
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_gdur_{batch_idx}', wav_pred, self.global_step, sr)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_gdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
-
- # with pred duration
- if not hparams['use_gt_dur']:
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=False)
- dur_info = self.get_plot_dur_info(sample, model_out)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_pdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_pdur_{batch_idx}', wav_pred, self.global_step, sr)
- # gt wav
- mel_gt = sample['mels'][0].cpu()
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- if self.global_step <= hparams['valid_infer_interval']:
- self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, sr)
-
- # add attn plot
- # if self.global_step > 0 and hparams['dur_level'] == 'word':
- # self.logger.add_figure(f'attn_{batch_idx}', spec_to_figure(model_out['attn'][0]), self.global_step)
-
- return {'wav_gt': wav_gt, 'wav_pred': wav_pred, 'mel_gt': mel_gt, 'mel_pred': model_out['mel_out'][0].cpu()}
-
-
- def get_plot_dur_info(self, sample, model_out):
- # if hparams['dur_level'] == 'word':
- # T_txt = sample['word_lengths'].max()
- # dur_gt = mel2token_to_dur(sample['mel2word'], T_txt)[0]
- # dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt
- # txt = sample['ph_words'][0].split(" ")
- # else:
- T_txt = sample['txt_tokens'].shape[1]
- dur_gt = mel2token_to_dur(sample['mel2ph'], T_txt)[0]
- dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt
- txt = self.token_encoder.decode(sample['txt_tokens'][0].cpu().numpy())
- txt = txt.split(" ")
- return {'dur_gt': dur_gt, 'dur_pred': dur_pred, 'txt': txt}
-
- def build_optimizer(self, model):
-
- optimizer_gen = torch.optim.AdamW(
- self.gen_params,
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
-
- optimizer_disc = torch.optim.AdamW(
- self.disc_params,
- lr=hparams['disc_lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None
-
- return [optimizer_gen, optimizer_disc]
-
- def build_scheduler(self, optimizer):
- return [
- FastSpeechTask.build_scheduler(self, optimizer[0]), # Generator Scheduler
- torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1], # Discriminator Scheduler
- **hparams["discriminator_scheduler_params"]),
- ]
-
- def on_before_optimization(self, opt_idx):
- if opt_idx == 0:
- nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
- if self.use_bert:
- nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
- nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
- def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
- if self.scheduler is not None:
- self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
- self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
-
- ############
- # infer
- ############
- def test_start(self):
- super().test_start()
- if hparams.get('save_attn', False):
- os.makedirs(f'{self.gen_dir}/attn', exist_ok=True)
- self.model.store_inverse_all()
-
- def test_step(self, sample, batch_idx):
- assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference'
- outputs = self.run_model(sample, infer=True)
- text = sample['text'][0]
- item_name = sample['item_name'][0]
- tokens = sample['txt_tokens'][0].cpu().numpy()
- mel_gt = sample['mels'][0].cpu().numpy()
- mel_pred = outputs['mel_out'][0].cpu().numpy()
- mel2ph = sample['mel2ph'][0].cpu().numpy()
- mel2ph_pred = None
- str_phs = self.token_encoder.decode(tokens, strip_padding=True)
- base_fn = f'[{batch_idx:06d}][{item_name.replace("%", "_")}][%s]'
- if text is not None:
- base_fn += text.replace(":", "$3A")[:80]
- base_fn = base_fn.replace(' ', '_')
- gen_dir = self.gen_dir
- wav_pred = self.vocoder.spec2wav(mel_pred)
- self.saving_result_pool.add_job(self.save_result, args=[
- wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs, mel2ph_pred])
- if hparams['save_gt']:
- wav_gt = self.vocoder.spec2wav(mel_gt)
- self.saving_result_pool.add_job(self.save_result, args=[
- wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs, mel2ph])
- if hparams.get('save_attn', False):
- attn = outputs['attn'][0].cpu().numpy()
- np.save(f'{gen_dir}/attn/{item_name}.npy', attn)
- # save f0 for pitch dtw
- f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- np.save(f'{gen_dir}/f0/{item_name}.npy', f0_pred_)
- np.save(f'{gen_dir}/f0/{item_name}_gt.npy', f0_gt_)
-
- print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
- return {
- 'item_name': item_name,
- 'text': text,
- 'ph_tokens': self.token_encoder.decode(tokens.tolist()),
- 'wav_fn_pred': base_fn % 'P',
- 'wav_fn_gt': base_fn % 'G',
- }
diff --git a/spaces/Abhaykoul/HelpingAI-T3/index.html b/spaces/Abhaykoul/HelpingAI-T3/index.html
deleted file mode 100644
index fe78746b70d1a757f58a06af738178bd68315e19..0000000000000000000000000000000000000000
--- a/spaces/Abhaykoul/HelpingAI-T3/index.html
+++ /dev/null
@@ -1,2 +0,0 @@
-
-
diff --git a/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/dpm_solver.py
deleted file mode 100644
index 23ebfebf167a6c16f3b57e09d491998c4adf68db..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/dpm_solver.py
+++ /dev/null
@@ -1,1217 +0,0 @@
-import torch
-import torch.nn.functional as F
-import math
-from tqdm import tqdm
-
-
-class NoiseScheduleVP:
- def __init__(
- self,
- schedule='discrete',
- betas=None,
- alphas_cumprod=None,
- continuous_beta_0=0.1,
- continuous_beta_1=20.,
- ):
- """Create a wrapper class for the forward SDE (VP type).
-
- ***
-    Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
-    We recommend using schedule='discrete' for discrete-time diffusion models, especially for high-resolution images.
- ***
-
-    The forward SDE ensures that the conditional distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
- We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
- Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:
-
- log_alpha_t = self.marginal_log_mean_coeff(t)
- sigma_t = self.marginal_std(t)
- lambda_t = self.marginal_lambda(t)
-
- Moreover, as lambda(t) is an invertible function, we also support its inverse function:
-
- t = self.inverse_lambda(lambda_t)
-
- ===============================================================
-
- We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
-
- 1. For discrete-time DPMs:
-
- For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
- t_i = (i + 1) / N
- e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
- We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
-
- Args:
- betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
- alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
-
-            Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
-
- **Important**: Please pay special attention for the args for `alphas_cumprod`:
- The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that
- q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
- Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
- alpha_{t_n} = \sqrt{\hat{alpha_n}},
- and
- log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
-
-
- 2. For continuous-time DPMs:
-
- We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
- schedule are the default settings in DDPM and improved-DDPM:
-
- Args:
- beta_min: A `float` number. The smallest beta for the linear schedule.
- beta_max: A `float` number. The largest beta for the linear schedule.
- cosine_s: A `float` number. The hyperparameter in the cosine schedule.
- cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
- T: A `float` number. The ending time of the forward process.
-
- ===============================================================
-
- Args:
- schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
- 'linear' or 'cosine' for continuous-time DPMs.
- Returns:
- A wrapper object of the forward SDE (VP type).
-
- ===============================================================
-
- Example:
-
- # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', betas=betas)
-
- # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
-
- # For continuous-time DPMs (VPSDE), linear schedule:
- >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
-
- """
-
- if schedule not in ['discrete', 'linear', 'cosine']:
- raise ValueError(
- "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(
- schedule))
-
- self.schedule = schedule
- if schedule == 'discrete':
- if betas is not None:
- log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
- else:
- assert alphas_cumprod is not None
- log_alphas = 0.5 * torch.log(alphas_cumprod)
- self.total_N = len(log_alphas)
- self.T = 1.
- self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1))
- self.log_alpha_array = log_alphas.reshape((1, -1,))
- else:
- self.total_N = 1000
- self.beta_0 = continuous_beta_0
- self.beta_1 = continuous_beta_1
- self.cosine_s = 0.008
- self.cosine_beta_max = 999.
- self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (
- 1. + self.cosine_s) / math.pi - self.cosine_s
- self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))
- self.schedule = schedule
- if schedule == 'cosine':
- # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.
-                # Note that T = 0.9946 may not be the optimal setting. However, we find it works well.
- self.T = 0.9946
- else:
- self.T = 1.
-
- def marginal_log_mean_coeff(self, t):
- """
- Compute log(alpha_t) of a given continuous-time label t in [0, T].
- """
- if self.schedule == 'discrete':
- return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device),
- self.log_alpha_array.to(t.device)).reshape((-1))
- elif self.schedule == 'linear':
- return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0
- elif self.schedule == 'cosine':
- log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.))
- log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0
- return log_alpha_t
-
- def marginal_alpha(self, t):
- """
- Compute alpha_t of a given continuous-time label t in [0, T].
- """
- return torch.exp(self.marginal_log_mean_coeff(t))
-
- def marginal_std(self, t):
- """
- Compute sigma_t of a given continuous-time label t in [0, T].
- """
- return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t)))
-
- def marginal_lambda(self, t):
- """
- Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
- """
- log_mean_coeff = self.marginal_log_mean_coeff(t)
- log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))
- return log_mean_coeff - log_std
-
- def inverse_lambda(self, lamb):
- """
- Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
- """
- if self.schedule == 'linear':
- tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- Delta = self.beta_0 ** 2 + tmp
- return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0)
- elif self.schedule == 'discrete':
- log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb)
- t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]),
- torch.flip(self.t_array.to(lamb.device), [1]))
- return t.reshape((-1,))
- else:
- log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (
- 1. + self.cosine_s) / math.pi - self.cosine_s
- t = t_fn(log_alpha)
- return t
-
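-# Illustrative sketch (not part of the original file): for a discrete-time schedule,
-# alpha_{t_n} = sqrt(alphas_cumprod[n]) at t_n = (n + 1) / N, matching the relation described
-# in the NoiseScheduleVP docstring above. Everything below is an example, not library code.
-def _noise_schedule_vp_example():
-    betas = torch.linspace(1e-4, 2e-2, 1000)             # a typical DDPM beta schedule
-    alphas_cumprod = torch.cumprod(1. - betas, dim=0)
-    ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
-    n = 499
-    t_n = torch.tensor([(n + 1) / 1000.])
-    # both values should agree up to interpolation error
-    return ns.marginal_alpha(t_n), alphas_cumprod[n].sqrt()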
-
-def model_wrapper(
- model,
- noise_schedule,
- model_type="noise",
- model_kwargs={},
- guidance_type="uncond",
- condition=None,
- unconditional_condition=None,
- guidance_scale=1.,
- classifier_fn=None,
- classifier_kwargs={},
-):
- """Create a wrapper function for the noise prediction model.
-
- DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to
- firstly wrap the model function to a noise prediction model that accepts the continuous time as the input.
-
- We support four types of the diffusion model by setting `model_type`:
-
- 1. "noise": noise prediction model. (Trained by predicting noise).
-
- 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0).
-
- 3. "v": velocity prediction model. (Trained by predicting the velocity).
-            The "v" prediction is derived in detail in Appendix D of [1], and is used in Imagen-Video [2].
-
- [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models."
- arXiv preprint arXiv:2202.00512 (2022).
- [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models."
- arXiv preprint arXiv:2210.02303 (2022).
-
- 4. "score": marginal score function. (Trained by denoising score matching).
- Note that the score function and the noise prediction model follows a simple relationship:
- ```
- noise(x_t, t) = -sigma_t * score(x_t, t)
- ```
-
- We support three types of guided sampling by DPMs by setting `guidance_type`:
- 1. "uncond": unconditional sampling by DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
-
- 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
-
- The input `classifier_fn` has the following format:
- ``
- classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond)
- ``
-
- [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis,"
- in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794.
-
- 3. "classifier-free": classifier-free guidance sampling by conditional DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score
- ``
- And if cond == `unconditional_condition`, the model output is the unconditional DPM output.
-
- [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance."
- arXiv preprint arXiv:2207.12598 (2022).
-
-
- The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999)
- or continuous-time labels (i.e. epsilon to T).
-
-    We wrap the model function to accept only `x` and `t_continuous` as inputs, and output the predicted noise:
- ``
- def model_fn(x, t_continuous) -> noise:
- t_input = get_model_input_time(t_continuous)
- return noise_pred(model, x, t_input, **model_kwargs)
- ``
- where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver.
-
- ===============================================================
-
- Args:
- model: A diffusion model with the corresponding format described above.
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- model_type: A `str`. The parameterization type of the diffusion model.
- "noise" or "x_start" or "v" or "score".
- model_kwargs: A `dict`. A dict for the other inputs of the model function.
- guidance_type: A `str`. The type of the guidance for sampling.
- "uncond" or "classifier" or "classifier-free".
- condition: A pytorch tensor. The condition for the guided sampling.
- Only used for "classifier" or "classifier-free" guidance type.
- unconditional_condition: A pytorch tensor. The condition for the unconditional sampling.
- Only used for "classifier-free" guidance type.
- guidance_scale: A `float`. The scale for the guided sampling.
- classifier_fn: A classifier function. Only used for the classifier guidance.
- classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function.
- Returns:
- A noise prediction model that accepts the noised data and the continuous time as the inputs.
- """
-
- def get_model_input_time(t_continuous):
- """
- Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
- For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
- For continuous-time DPMs, we just use `t_continuous`.
- """
- if noise_schedule.schedule == 'discrete':
- return (t_continuous - 1. / noise_schedule.total_N) * 1000.
- else:
- return t_continuous
-
- def noise_pred_fn(x, t_continuous, cond=None):
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- t_input = get_model_input_time(t_continuous)
- if cond is None:
- output = model(x, t_input, **model_kwargs)
- else:
- output = model(x, t_input, cond, **model_kwargs)
- if model_type == "noise":
- return output
- elif model_type == "x_start":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims)
- elif model_type == "v":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x
- elif model_type == "score":
- sigma_t = noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return -expand_dims(sigma_t, dims) * output
-
- def cond_grad_fn(x, t_input):
- """
- Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
- """
- with torch.enable_grad():
- x_in = x.detach().requires_grad_(True)
- log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
- return torch.autograd.grad(log_prob.sum(), x_in)[0]
-
- def model_fn(x, t_continuous):
- """
-        The noise prediction model function that is used for DPM-Solver.
- """
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- if guidance_type == "uncond":
- return noise_pred_fn(x, t_continuous)
- elif guidance_type == "classifier":
- assert classifier_fn is not None
- t_input = get_model_input_time(t_continuous)
- cond_grad = cond_grad_fn(x, t_input)
- sigma_t = noise_schedule.marginal_std(t_continuous)
- noise = noise_pred_fn(x, t_continuous)
- return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad
- elif guidance_type == "classifier-free":
- if guidance_scale == 1. or unconditional_condition is None:
- return noise_pred_fn(x, t_continuous, cond=condition)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t_continuous] * 2)
- c_in = torch.cat([unconditional_condition, condition])
- noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
- return noise_uncond + guidance_scale * (noise - noise_uncond)
-
-    assert model_type in ["noise", "x_start", "v", "score"]
- assert guidance_type in ["uncond", "classifier", "classifier-free"]
- return model_fn
-
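-# Illustrative sketch (not part of the original file): wiring a discrete-time noise-prediction
-# model into DPM-Solver with classifier-free guidance. `unet`, `cond`, `uncond` and `x_T` are
-# hypothetical stand-ins; `DPM_Solver.sample` is defined further down in the full file.
-def _dpm_solver_usage_example(unet, betas, cond, uncond, x_T):
-    ns = NoiseScheduleVP('discrete', betas=betas)
-    model_fn = model_wrapper(
-        unet, ns, model_type="noise",
-        guidance_type="classifier-free",
-        condition=cond, unconditional_condition=uncond,
-        guidance_scale=7.5,
-    )
-    solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False)
-    return solver.sample(x_T, steps=20, order=2, skip_type='time_uniform', method='multistep')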
-
-class DPM_Solver:
- def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):
- """Construct a DPM-Solver.
-
- We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
- If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver).
- If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++).
- In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True.
- The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales.
-
- Args:
- model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]):
- ``
- def model_fn(x, t_continuous):
- return noise
- ``
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model.
- thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1].
- max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding.
-
- [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b.
- """
- self.model = model_fn
- self.noise_schedule = noise_schedule
- self.predict_x0 = predict_x0
- self.thresholding = thresholding
- self.max_val = max_val
-
- def noise_prediction_fn(self, x, t):
- """
- Return the noise prediction model.
- """
- return self.model(x, t)
-
- def data_prediction_fn(self, x, t):
- """
- Return the data prediction model (with thresholding).
- """
- noise = self.noise_prediction_fn(x, t)
- dims = x.dim()
- alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
- x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims)
- if self.thresholding:
- p = 0.995 # A hyperparameter in the paper of "Imagen" [1].
- s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1)
- s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims)
- x0 = torch.clamp(x0, -s, s) / s
- return x0
-
- def model_fn(self, x, t):
- """
- Convert the model to the noise prediction model or the data prediction model.
- """
- if self.predict_x0:
- return self.data_prediction_fn(x, t)
- else:
- return self.noise_prediction_fn(x, t)
-
- def get_time_steps(self, skip_type, t_T, t_0, N, device):
- """Compute the intermediate time steps for sampling.
-
- Args:
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
-                - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolution data**.)
-                - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolution data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- N: A `int`. The total number of the spacing of the time steps.
- device: A torch device.
- Returns:
- A pytorch tensor of the time steps, with the shape (N + 1,).
- """
- if skip_type == 'logSNR':
- lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device))
- lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device))
- logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device)
- return self.noise_schedule.inverse_lambda(logSNR_steps)
- elif skip_type == 'time_uniform':
- return torch.linspace(t_T, t_0, N + 1).to(device)
- elif skip_type == 'time_quadratic':
- t_order = 2
- t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device)
- return t
- else:
- raise ValueError(
- "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type))
-
- def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):
- """
- Get the order of each step for sampling by the singlestep DPM-Solver.
-
- We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast".
- Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is:
- - If order == 1:
- We take `steps` of DPM-Solver-1 (i.e. DDIM).
- - If order == 2:
- - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of DPM-Solver-2.
- - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If order == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2.
-
- ============================================
- Args:
- order: A `int`. The max order for the solver (2 or 3).
- steps: A `int`. The total number of function evaluations (NFE).
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
- - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.)
- - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- device: A torch device.
-        Returns:
-            timesteps_outer: A pytorch tensor of the outer time steps for the solver steps.
-            orders: A list of the solver order of each step.
- """
- if order == 3:
- K = steps // 3 + 1
- if steps % 3 == 0:
- orders = [3, ] * (K - 2) + [2, 1]
- elif steps % 3 == 1:
- orders = [3, ] * (K - 1) + [1]
- else:
- orders = [3, ] * (K - 1) + [2]
- elif order == 2:
- if steps % 2 == 0:
- K = steps // 2
- orders = [2, ] * K
- else:
- K = steps // 2 + 1
- orders = [2, ] * (K - 1) + [1]
- elif order == 1:
-            K = steps
- orders = [1, ] * steps
- else:
- raise ValueError("'order' must be '1' or '2' or '3'.")
- if skip_type == 'logSNR':
- # To reproduce the results in DPM-Solver paper
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device)
- else:
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[
-                torch.cumsum(torch.tensor([0, ] + orders), dim=0).to(device)]
- return timesteps_outer, orders
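-
-    # A small sanity-check sketch of the order splitting above (hypothetical step budgets):
-    #   steps=20, order=3  ->  orders == [3, 3, 3, 3, 3, 3, 2]   (6*3 + 2 == 20 NFE)
-    #   steps=15, order=3  ->  orders == [3, 3, 3, 3, 2, 1]      (4*3 + 2 + 1 == 15 NFE)
-    #   steps=9,  order=2  ->  orders == [2, 2, 2, 2, 1]         (4*2 + 1 == 9 NFE)
-    # i.e. the per-step orders always sum to the requested number of function evaluations.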
-
- def denoise_to_zero_fn(self, x, s):
- """
-        Denoise at the final step, which is equivalent to solving the ODE from lambda_s to infty by first-order discretization.
- """
- return self.data_prediction_fn(x, s)
-
- def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False):
- """
- DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_1 = torch.expm1(-h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
- else:
- phi_1 = torch.expm1(h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
-
- def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False,
- solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-2 from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the second-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 0.5
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- s1 = ns.inverse_lambda(lambda_s1)
- log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(
- s1), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t)
- alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_1 = torch.expm1(-h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * (
- model_s1 - model_s)
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_1 = torch.expm1(h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s)
- )
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1}
- else:
- return x_t
-
- def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None,
- return_intermediate=False, solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-3 from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`).
- If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 1. / 3.
- if r2 is None:
- r2 = 2. / 3.
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- lambda_s2 = lambda_s + r2 * h
- s1 = ns.inverse_lambda(lambda_s1)
- s2 = ns.inverse_lambda(lambda_s2)
- log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff(
- s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(
- s2), ns.marginal_std(t)
- alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_12 = torch.expm1(-r2 * h)
- phi_1 = torch.expm1(-h)
- phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1.
- phi_2 = phi_1 / h + 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(sigma_s2 / sigma_s, dims) * x
- - expand_dims(alpha_s2 * phi_12, dims) * model_s
- + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + expand_dims(alpha_t * phi_2, dims) * D1
- - expand_dims(alpha_t * phi_3, dims) * D2
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_12 = torch.expm1(r2 * h)
- phi_1 = torch.expm1(h)
- phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1.
- phi_2 = phi_1 / h - 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x
- - expand_dims(sigma_s2 * phi_12, dims) * model_s
- - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - expand_dims(sigma_t * phi_2, dims) * D1
- - expand_dims(sigma_t * phi_3, dims) * D2
- )
-
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2}
- else:
- return x_t
-
- def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"):
- """
- Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.
-
- Args:
-            x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_1, model_prev_0 = model_prev_list
- t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda(
- t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0 = h_0 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- if self.predict_x0:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0
- )
- else:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0
- )
- return x_t
-
- def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'):
- """
- Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.
-
- Args:
-            x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_2, model_prev_1, model_prev_0 = model_prev_list
- t_prev_2, t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda(
- t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_1 = lambda_prev_1 - lambda_prev_2
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0, r1 = h_0 / h, h_1 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2)
- D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1)
- D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1)
- if self.predict_x0:
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1
- - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2
- )
- else:
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1
- - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2
- )
- return x_t
-
- def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None,
- r2=None):
- """
- Singlestep DPM-Solver with the order `order` from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- r1: A `float`. The hyperparameter of the second-order or third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate)
- elif order == 2:
- return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate,
- solver_type=solver_type, r1=r1)
- elif order == 3:
- return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate,
- solver_type=solver_type, r1=r1, r2=r2)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'):
- """
- Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.
-
- Args:
-            x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1])
- elif order == 2:
- return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- elif order == 3:
- return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5,
- solver_type='dpm_solver'):
- """
- The adaptive step size solver based on singlestep DPM-Solver.
-
- Args:
- x: A pytorch tensor. The initial value at time `t_T`.
- order: A `int`. The (higher) order of the solver. We only support order == 2 or 3.
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- h_init: A `float`. The initial step size (for logSNR).
-            atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, following [1].
-            rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05.
-            theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, following [1].
- t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the
- current time and `t_0` is less than `t_err`. The default setting is 1e-5.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_0: A pytorch tensor. The approximated solution at time `t_0`.
-
- [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021.
- """
- ns = self.noise_schedule
- s = t_T * torch.ones((x.shape[0],)).to(x)
- lambda_s = ns.marginal_lambda(s)
- lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x))
- h = h_init * torch.ones_like(s).to(x)
- x_prev = x
- nfe = 0
- if order == 2:
- r1 = 0.5
- lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,
- solver_type=solver_type,
- **kwargs)
- elif order == 3:
- r1, r2 = 1. / 3., 2. / 3.
- lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,
- return_intermediate=True,
- solver_type=solver_type)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2,
- solver_type=solver_type,
- **kwargs)
- else:
- raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order))
- while torch.abs((s - t_0)).mean() > t_err:
- t = ns.inverse_lambda(lambda_s + h)
- x_lower, lower_noise_kwargs = lower_update(x, s, t)
- x_higher = higher_update(x, s, t, **lower_noise_kwargs)
- delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev)))
- norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True))
- E = norm_fn((x_higher - x_lower) / delta).max()
- if torch.all(E <= 1.):
- x = x_higher
- s = t
- x_prev = x_lower
- lambda_s = ns.marginal_lambda(s)
- h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s)
- nfe += order
- print('adaptive solver nfe', nfe)
- return x
-
- def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform',
- method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver',
- atol=0.0078, rtol=0.05,
- ):
- """
- Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`.
-
- =====================================================
-
- We support the following algorithms for both noise prediction model and data prediction model:
- - 'singlestep':
- Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver.
- We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps).
- The total number of function evaluations (NFE) == `steps`.
- Given a fixed NFE == `steps`, the sampling procedure is:
- - If `order` == 1:
- - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
- - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2.
- - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If `order` == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2.
- - 'multistep':
- Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`.
- We initialize the first `order` values by lower order multistep solvers.
- Given a fixed NFE == `steps`, the sampling procedure is:
- Denote K = steps.
- - If `order` == 1:
- - We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
-                    - We first use 1 step of DPM-Solver-1, then use (K - 1) steps of multistep DPM-Solver-2.
-                - If `order` == 3:
-                    - We first use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) steps of multistep DPM-Solver-3.
- - 'singlestep_fixed':
- Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3).
- We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE.
- - 'adaptive':
- Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper).
- We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`.
-                You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computation costs
- (NFE) and the sample quality.
- - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2.
- - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3.
-
- =====================================================
-
-        Some advice for choosing the algorithm:
- - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs:
- Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3,
- skip_type='time_uniform', method='singlestep')
- - For **guided sampling with large guidance scale** by DPMs:
- Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2,
- skip_type='time_uniform', method='multistep')
-
- We support three types of `skip_type`:
- - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images**
- - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**.
- - 'time_quadratic': quadratic time for the time steps.
-
- =====================================================
- Args:
- x: A pytorch tensor. The initial value at time `t_start`
- e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution.
- steps: A `int`. The total number of function evaluations (NFE).
- t_start: A `float`. The starting time of the sampling.
-                If `t_start` is None, we use self.noise_schedule.T (default is 1.0).
- t_end: A `float`. The ending time of the sampling.
- If `t_end` is None, we use 1. / self.noise_schedule.total_N.
- e.g. if total_N == 1000, we have `t_end` == 1e-3.
- For discrete-time DPMs:
- - We recommend `t_end` == 1. / self.noise_schedule.total_N.
- For continuous-time DPMs:
- - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15.
- order: A `int`. The order of DPM-Solver.
- skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'.
- method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'.
- denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step.
- Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1).
-
- This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and
- score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID
- for diffusion models sampling by diffusion SDEs for low-resolutional images
- (such as CIFAR-10). However, we observed that such trick does not matter for
- high-resolutional images. As it needs an additional NFE, we do not recommend
- it for high-resolutional images.
- lower_order_final: A `bool`. Whether to use lower order solvers at the final steps.
- Only valid for `method=multistep` and `steps < 15`. We empirically find that
- this trick is a key to stabilizing the sampling by DPM-Solver with very few steps
- (especially for steps <= 10). So we recommend to set it to be `True`.
- solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`.
- atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- Returns:
- x_end: A pytorch tensor. The approximated solution at time `t_end`.
-
- """
- t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end
- t_T = self.noise_schedule.T if t_start is None else t_start
- device = x.device
- if method == 'adaptive':
- with torch.no_grad():
- x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol,
- solver_type=solver_type)
- elif method == 'multistep':
- assert steps >= order
- timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device)
- assert timesteps.shape[0] - 1 == steps
- with torch.no_grad():
- vec_t = timesteps[0].expand((x.shape[0]))
- model_prev_list = [self.model_fn(x, vec_t)]
- t_prev_list = [vec_t]
- # Init the first `order` values by lower order multistep DPM-Solver.
- for init_order in tqdm(range(1, order), desc="DPM init order"):
- vec_t = timesteps[init_order].expand(x.shape[0])
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order,
- solver_type=solver_type)
- model_prev_list.append(self.model_fn(x, vec_t))
- t_prev_list.append(vec_t)
- # Compute the remaining values by `order`-th order multistep DPM-Solver.
- for step in tqdm(range(order, steps + 1), desc="DPM multistep"):
- vec_t = timesteps[step].expand(x.shape[0])
- if lower_order_final and steps < 15:
- step_order = min(order, steps + 1 - step)
- else:
- step_order = order
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order,
- solver_type=solver_type)
- for i in range(order - 1):
- t_prev_list[i] = t_prev_list[i + 1]
- model_prev_list[i] = model_prev_list[i + 1]
- t_prev_list[-1] = vec_t
- # We do not need to evaluate the final model value.
- if step < steps:
- model_prev_list[-1] = self.model_fn(x, vec_t)
- elif method in ['singlestep', 'singlestep_fixed']:
- if method == 'singlestep':
- timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order,
- skip_type=skip_type,
- t_T=t_T, t_0=t_0,
- device=device)
- elif method == 'singlestep_fixed':
- K = steps // order
- orders = [order, ] * K
- timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device)
- for i, order in enumerate(orders):
- t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1]
- timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(),
- N=order, device=device)
- lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner)
- vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0])
- h = lambda_inner[-1] - lambda_inner[0]
- r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h
- r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h
- x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2)
- if denoise_to_zero:
- x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0)
- return x
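-
-    # A minimal end-to-end usage sketch (illustrative only; it assumes the `NoiseScheduleVP`
-    # and `model_wrapper` helpers defined earlier in this module, plus a pretrained
-    # noise-prediction network `unet` and its `betas`, which are not shown here):
-    #
-    #   >>> ns = NoiseScheduleVP('discrete', betas=betas)
-    #   >>> model_fn = model_wrapper(unet, ns, model_type='noise')
-    #   >>> solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False)
-    #   >>> x_T = torch.randn(4, 3, 64, 64, device=device)
-    #   >>> x_0 = solver.sample(x_T, steps=20, order=2, skip_type='time_uniform', method='multistep')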
-
-
-#############################################################
-# other utility functions
-#############################################################
-
-def interpolate_fn(x, xp, yp):
- """
- A piecewise linear function y = f(x), using xp and yp as keypoints.
- We implement f(x) in a differentiable way (i.e. applicable for autograd).
-    The function f(x) is well-defined for all x. (For x beyond the bounds of xp, we use the outermost points of xp to define the linear function, i.e. linear extrapolation.)
-
- Args:
- x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver).
- xp: PyTorch tensor with shape [C, K], where K is the number of keypoints.
- yp: PyTorch tensor with shape [C, K].
- Returns:
- The function values f(x), with shape [N, C].
- """
- N, K = x.shape[0], xp.shape[1]
- all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2)
- sorted_all_x, x_indices = torch.sort(all_x, dim=2)
- x_idx = torch.argmin(x_indices, dim=2)
- cand_start_idx = x_idx - 1
- start_idx = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(1, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1)
- start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2)
- end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2)
- start_idx2 = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(0, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1)
- start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2)
- end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2)
- cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x)
- return cand
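-
-
-# A minimal usage sketch for `interpolate_fn` (toy keypoints, for illustration only): with a
-# single channel (C = 1) and keypoints y = 2 * x, queries inside the range are interpolated
-# linearly and queries outside it are extrapolated from the outermost segment:
-#
-#   >>> xp = torch.tensor([[0., 1., 2.]])           # shape [C, K] = [1, 3]
-#   >>> yp = torch.tensor([[0., 2., 4.]])           # shape [1, 3]
-#   >>> x = torch.tensor([[0.5], [1.5], [3.0]])     # shape [N, C] = [3, 1]
-#   >>> interpolate_fn(x, xp, yp)
-#   tensor([[1.], [3.], [6.]])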
-
-
-def expand_dims(v, dims):
- """
-    Expand the tensor `v` to the dimension `dims`.
-
-    Args:
-        `v`: a PyTorch tensor with shape [N].
-        `dims`: an `int`. The target total number of dimensions.
- Returns:
- a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`.
- """
- return v[(...,) + (None,) * (dims - 1)]
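-
-
-# A minimal usage sketch for `expand_dims` (hypothetical shapes): a per-sample scalar such as
-# sigma_t with shape [N] is reshaped to [N, 1, 1, 1] so that it broadcasts against an image batch:
-#
-#   >>> v = torch.rand(4)                  # e.g. sigma_t for a batch of 4 samples
-#   >>> x = torch.rand(4, 3, 32, 32)
-#   >>> expand_dims(v, x.dim()).shape
-#   torch.Size([4, 1, 1, 1])
-#   >>> (expand_dims(v, x.dim()) * x).shape
-#   torch.Size([4, 3, 32, 32])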
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BaseSizer.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BaseSizer.d.ts
deleted file mode 100644
index b773fbe4b06b99c4c7d1c354d392031ad794c7f4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BaseSizer.d.ts
+++ /dev/null
@@ -1,739 +0,0 @@
-// import * as Phaser from 'phaser';
-import ContainerLite from '../../../plugins/containerlite.js';
-import Anchor from '../anchor/Anchor';
-import Click from '../click/Click';
-import ClickOutside from '../clickoutside/ClickOutside';
-import InTouching from '../intouching/InTouching';
-import SetChildrenInteractive from '../utils/setchildreninteractive/SetChildrenInteractive';
-import { ModalBehavoir } from '../modal/Modal';
-
-export default BaseSizer;
-
-declare namespace BaseSizer {
- type AlignTypes = number | 'center' | 'left' | 'right' | 'top' | 'bottom' |
- 'left-top' | 'left-center' | 'left-bottom' |
- 'center-top' | 'center-center' | 'center-bottom' |
- 'right-top' | 'right-center' | 'right-bottom';
-
- type PaddingTypes = number |
- {
- left?: number,
- right?: number,
- top?: number,
- bottom?: number
- };
-
- interface IConfig {
- space?: {
- left?: number, right?: number, top?: number, bottom?: number,
- },
-
- anchor?: Anchor.IConfig,
-
- name?: string,
-
- enableLayer?: boolean,
-
- draggable?: boolean | string | Phaser.GameObjects.GameObject,
-
- sizerEvents?: boolean,
- }
-
- type PrevState = {
- x: number,
- y: number,
- width: number, height: number,
- displayWidth: number, displayHeight: number,
- scaleX: number, scaleY: number
- }
-
- type OnModalCloseCallbackType = (
- data: Object
- ) => void;
-
-}
-
-declare class BaseSizer extends ContainerLite {
- isRexSizer: true;
-
- space: { [name: string]: number };
-
- constructor(
- scene: Phaser.Scene,
- x?: number, y?: number,
- minWidth?: number, minHeight?: number,
- config?: BaseSizer.IConfig
- );
-
- setMinSize(minWidth: number, minHeight: number): this;
-
- setMinWidth(minWidth: number): this;
-
- setMinHeight(minHeight: number): this;
-
- setDirty(dirty?: boolean): this;
-
- setSizerEventsEnable(enable?: boolean): this;
- sizerEventsEnable: boolean;
-
- left: number;
-
- alignLeft(value: number): this;
-
- right: number;
-
- alignRight(value: number): this;
-
- centerX: number;
-
- alignCenterX(value: number): this;
-
- top: number;
-
- alignTop(value: number): this;
-
- bottom: number;
-
- alignBottom(value: number): this;
-
- centerY: number;
-
- alignCenterY(value: number): this;
-
- pushIntoBounds(
- bounds?: Phaser.Geom.Rectangle | { left?: number, right?: number, top?: number, bottom?: number }
- ): this;
-
- readonly innerLeft: number;
-
- readonly innerRight: number;
-
- readonly innerTop: number;
-
- readonly innerBottom: number;
-
- readonly innerWidth: number;
-
- readonly innerHeight: number;
-
- readonly minInnerWidth: number;
-
- readonly minInnerHeight: number;
-
- addBackground(
- gameObject: Phaser.GameObjects.GameObject,
- padding?: BaseSizer.PaddingTypes,
- childKey?: string
- ): this;
-
- isBackground(
- gameObject: Phaser.GameObjects.GameObject
- ): boolean;
-
- layout(): this;
-
- drawBounds(
- graphics: Phaser.GameObjects.Graphics,
- color?: number
- ): this;
-
- drawBounds(
- graphics: Phaser.GameObjects.Graphics,
- config?: {
- color?: number,
- lineWidth?: number,
- name?: boolean |
- {
- createTextCallback: (scene: Phaser.Scene) => Phaser.GameObjects.GameObject,
- createTextCallbackScope?: object,
- align?: BaseSizer.AlignTypes
- }
- }
- ): this;
-
- childrenMap: {
- [key: string]:
- Phaser.GameObjects.GameObject
- };
- addChildrenMap(
- key: string,
- gameObject: Phaser.GameObjects.GameObject
- ): this;
-
- removeChildrenMap(key: string): this;
- removeChildrenMap(gameObject: Phaser.GameObjects.GameObject): this;
-
- getElement(
- name: string,
- recursive?: boolean
- ): Phaser.GameObjects.GameObject |
- Phaser.GameObjects.GameObject[] |
- { [name: string]: Phaser.GameObjects.GameObject } |
- null;
-
- getParentSizer(
- name?: string
- ): BaseSizer | null;
-
- getParentSizer(
- gameObject?: Phaser.GameObjects.GameObject,
- name?: string
- ): BaseSizer | null;
-
- getTopmostSizer(
- gameObject?: Phaser.GameObjects.GameObject
- ): BaseSizer | null;
-
- getSizerConfig(
- gameObject?: Phaser.GameObjects.GameObject
- ): { [name: string]: any };
-
- getChildPrevState(
- gameObject: Phaser.GameObjects.GameObject
- ): BaseSizer.PrevState;
-
- isInTouching(): boolean;
-
- isInTouching(
- pointer: Phaser.Input.Pointer,
- gameObject?: Phaser.GameObjects.GameObject | string
- ): boolean;
-
- isInTouching(
- gameObject?: Phaser.GameObjects.GameObject | string
- ): boolean;
-
-
- moveFrom(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): this;
-
- moveFrom(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): this;
-
- moveFromPromise(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): Promise;
-
- moveFromPromise(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): Promise;
-
- moveFromDestroy(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): this;
-
- moveFromDestroy(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): this;
-
- moveFromDestroyPromise(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): Promise;
-
- moveFromDestroyPromise(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): Promise;
-
- moveTo(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): this;
-
- moveTo(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): this;
-
- moveToPromise(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): Promise;
-
- moveToPromise(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): Promise;
-
- moveToDestroy(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): this;
-
- moveToDestroy(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): this;
-
- moveToDestroyPromise(
- duration: number,
- x: number,
- y: number,
- ease?: string
- ): Promise;
-
- moveToDestroyPromise(
- config: {
- x: number,
- y: number,
- speed?: number,
- duration?: number,
- ease?: string,
- }
- ): Promise;
-
- moveStop(toEnd?: boolean): this;
-
- fadeIn(
- duration: number,
- alpha?: number
- ): this;
-
- fadeInPromise(
- duration: number,
- alpha?: number
- ): Promise;
-
- fadeOutDestroy(
- duration: number
- ): this;
-
- fadeOutDestroyPromise(
- duration: number
- ): Promise;
-
- fadeOut(
- duration: number
- ): this;
-
- fadeOutPromise(
- duration: number
- ): Promise;
-
- popUp(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): this;
-
- popUpPromise(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): Promise;
-
- scaleDownDestroy(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): this;
-
- scaleDownDestroyPromise(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): Promise;
-
- scaleDown(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): this;
-
- scaleDownPromise(
- duration: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): Promise;
-
- scaleYoyo(
- duration: number,
- peakValue?: number,
- repeat?: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): this;
-
- scaleYoyoPromise(
- duration: number,
- peakValue?: number,
- repeat?: number,
- orientation?: 0 | 1 | 'x' | 'y',
- ease?: string
- ): Promise;
-
- shake(
- duration?: number,
- magnitude?: number,
- magnitudeMode?: 0 | 1 | 'constant' | 'decay'
- ): this;
-
- shakePromise(
- duration?: number,
- magnitude?: number,
- magnitudeMode?: 0 | 1 | 'constant' | 'decay'
- ): Promise;
-
- easeDataTo(
- key: string,
- value: number,
- duration?: number,
- ease?: string
- ): this;
-
- easeDataTo(
- config: {
- key: string,
- value: number,
- duration?: number,
- ease?: string,
- speed?: number
- }
- ): this;
-
- easeDataToPromise(
- key: string,
- value: number,
- duration?: number,
- ease?: string
- ): Promise;
-
- easeDataToPromise(
- config: {
- key: string,
- value: number,
- duration?: number,
- ease?: string,
- speed?: number
- }
- ): Promise;
-
- stopEaseData(
- key: string,
- toEnd?: boolean
- ): this;
-
- stopAllEaseData(
- toEnd?: boolean
- ): this;
-
- setAnchor(config: {
- left?: string, right?: string, centerX?: string, x?: string,
- top?: string, bottom?: string, centerY?: string, y?: string
- }): this;
-
- setDraggable(
- senser: boolean | string | Phaser.GameObjects.GameObject,
- draggable?: boolean
- ): this;
-
- onClick(
- callback: (
- click: Click,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- event: Phaser.Types.Input.EventData
- ) => void,
- scope?: object,
- config?: Click.IConfig
- ): this;
-
-
- onClick(
- gameObject: Phaser.GameObjects.GameObject,
- callback: (
- click: Click,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- event: Phaser.Types.Input.EventData
- ) => void,
- scope?: object,
- config?: Click.IConfig
- ): this;
-
- offClick(
- callback: Function,
- scope?: object
- ): this;
-
- offClick(
- gameObject: Phaser.GameObjects.GameObject,
- callback: Function,
- scope?: object
- ): this;
-
- enableClick(enabled?: boolean): this;
-
- enableClick(
- gameObject: Phaser.GameObjects.GameObject,
- enabled?: boolean
- ): this;
-
- disableClick(): this;
-
- disableClick(gameObject: Phaser.GameObjects.GameObject): this;
-
- onClickOutside(
- callback: (
- clickOutside: ClickOutside,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer
- ) => void,
- scope?: object,
- config?: ClickOutside.IConfig
- ): this;
-
- onClickOutside(
- gameObject: Phaser.GameObjects.GameObject,
- callback: (
- clickOutside: ClickOutside,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer
- ) => void,
- scope?: object,
- config?: ClickOutside.IConfig
- ): this;
-
- offClickOutside(
- callback: Function,
- scope?: object
- ): this;
-
- offClickOutside(
- gameObject: Phaser.GameObjects.GameObject,
- callback: Function,
- scope?: object
- ): this;
-
-
- enableClickOutside(enabled?: boolean): this;
-
- enableClickOutside(
- gameObject: Phaser.GameObjects.GameObject,
- enabled?: boolean
- ): this;
-
- disableClickOutside(): this;
-
- disableClickOutside(gameObject: Phaser.GameObjects.GameObject): this;
-
- isPointerInBounds(): boolean;
- isPointerInBounds(gameObject: Phaser.GameObjects.GameObject): boolean;
- isPointerInBounds(name: string): boolean;
-
- onTouching(
- callback: (
- inTouch: InTouching,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- ) => void,
- scope?: object,
- config?: InTouching.IConfig
- ): this;
-
- onTouching(
- gameObject: Phaser.GameObjects.GameObject,
- callback: (
- inTouch: InTouching,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- ) => void,
- scope?: object,
- config?: InTouching.IConfig
- ): this;
-
- offTouching(
- callback: Function,
- scope?: object
- ): this;
-
- offTouching(
- gameObject: Phaser.GameObjects.GameObject,
- callback: Function,
- scope?: object
- ): this;
-
- onTouchingEnd(
- callback: (
- inTouch: InTouching,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- ) => void,
- scope?: object,
- config?: InTouching.IConfig
- ): this;
-
- onTouchingEnd(
- gameObject: Phaser.GameObjects.GameObject,
- callback: (
- inTouch: InTouching,
- gameObject: Phaser.GameObjects.GameObject,
- pointer: Phaser.Input.Pointer,
- ) => void,
- scope?: object,
- config?: InTouching.IConfig
- ): this;
-
- offTouchingEnd(
- callback: Function,
- scope?: object
- ): this;
-
- offTouchingEnd(
- gameObject: Phaser.GameObjects.GameObject,
- callback: Function,
- scope?: object
- ): this;
-
- enableTouching(enable?: boolean): this;
-
- enableTouching(
- gameObject: Phaser.GameObjects.GameObject,
- enable?: boolean
- ): this;
-
- disableTouching(): this;
-
- disableTouching(gameObject: Phaser.GameObjects.GameObject): this;
-
- setChildrenInteractive(
- config: SetChildrenInteractive.IConfig
- ): this;
-
- show(
- gameObject?: Phaser.GameObjects.GameObject
- ): this;
-
- hide(
- gameObject?: Phaser.GameObjects.GameObject
- ): this;
-
- isShow(
- gameObject: Phaser.GameObjects.GameObject
- ): boolean;
-
- onCreateModalBehavior: (self: this) => void;
-
- modal(
- config?: ModalBehavoir.IConfig,
- onClose?: BaseSizer.OnModalCloseCallbackType
- ): this;
-
- modal(
- onClose?: BaseSizer.OnModalCloseCallbackType
- ): this;
-
- modalPromise(
- config?: ModalBehavoir.IConfig
- ): Promise