diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/app.py b/spaces/1acneusushi/gradio-2dmoleculeeditor/app.py
deleted file mode 100644
index 5992a7b7ec434e09187510098ecdefad5b81b65e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import gradio as gr
-
-viewer_html = """
-<div id="loading">loading SMILES editor</div>
-<!-- the Ketcher bundle mounts the editor into this element -->
-<div id="root"></div>
-"""
-
-
-load_js = """
-async () => {
-  var loadingDiv = document.getElementById('loading');
-  loadingDiv.style.display = 'flex';
-
-  // load css
-  let url = "https://huggingface.co/datasets/simonduerr/ketcher-2.7.2/raw/main/static/css/main.6a646761.css"
-  fetch(url)
-    .then(res => res.text())
-    .then(text => {
-      const style = document.createElement('style');
-      style.textContent = text;
-      document.head.appendChild(style);
-    });
-
-  // load ketcher
-  url = "https://huggingface.co/datasets/simonduerr/ketcher-2.7.2/resolve/main/static/js/main.5445f351.js"
-  fetch(url)
-    .then(res => res.text())
-    .then(text => {
-      const script = document.createElement('script');
-      //script.type = "module"
-      script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
-      document.head.appendChild(script);
-      loadingDiv.style.display = 'none';
-    });
-}
-"""
-
-# add your logic here, hidden_state contains the SMILES string returned from Editor
-def run(hidden_state):
- return f"{hidden_state}"
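-# a minimal sketch of what custom logic could look like, assuming RDKit were
-# added to this Space's requirements (it is not part of the original app;
-# names below are illustrative):
-#
-#   from rdkit import Chem
-#   from rdkit.Chem import Descriptors
-#
-#   def run(hidden_state):
-#       mol = Chem.MolFromSmiles(hidden_state)
-#       if mol is None:
-#           return "invalid SMILES"
-#       return f"{hidden_state} (MW: {Descriptors.MolWt(mol):.1f})"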
-
-get_js = """
-async () => {
- return ketcher.getSmiles().then(function(smiFile){return smiFile})
- }
-"""
-
-
-
-with gr.Blocks() as blocks:
- gr.Markdown("""
- # Gradio Molecule entry with Ketcher
- """)
- html = gr.HTML(viewer_html)
- #do not change this part
- hidden_state = gr.Textbox(visible=False)
- # we need a hidden textbox that can be used to first trigger the JS callback
- # and then onchange of the textbox, we can run the python function
- out = gr.Textbox("", label="SMILES")
- btn = gr.Button("Get SMILES")
- # trigger JS callback and written to hidden textbox
- btn.click(fn=None,
- inputs=[],
- outputs=[hidden_state],
- _js=get_js)
- # run python function on change of hidden textbox, add your logic to run function
- hidden_state.change(fn=run, inputs=[hidden_state], outputs=[out])
- # load JS on load of the page
- blocks.load(None, None, None, _js=load_js)
-
-blocks.launch()
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chandramukhi Tamil Movie Free Download Dont Miss this Thrilling and Hilarious Film Featuring Rajinikanth Jyothika and Nayanthara.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chandramukhi Tamil Movie Free Download Dont Miss this Thrilling and Hilarious Film Featuring Rajinikanth Jyothika and Nayanthara.md
deleted file mode 100644
index cdeab75cb9e8739744c695f77ae0e9e681dbdeab..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chandramukhi Tamil Movie Free Download Dont Miss this Thrilling and Hilarious Film Featuring Rajinikanth Jyothika and Nayanthara.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
-# Chandramukhi Tamil Movie Free Download: A Guide for Movie Lovers
-
-If you are a fan of Tamil movies, you must have heard of Chandramukhi, one of the most successful and acclaimed movies in Tamil cinema history. Released in 2005, Chandramukhi is a comedy horror film that stars Rajinikanth, Jyothika, Prabhu, Nayanthara, and Vadivelu in the lead roles. Directed by P. Vasu, Chandramukhi is a remake of the Malayalam film Manichitrathazhu (1993), which was also remade in several other languages. Chandramukhi tells the story of a psychiatrist who tries to cure a woman who suffers from a split personality disorder that is linked to a haunted mansion. The movie is a perfect blend of humor, suspense, romance, and drama, with stunning performances, music, and visuals.
-
-In this article, we will tell you everything you need to know about Chandramukhi Tamil movie free download. We will give you a brief overview of the plot, the cast and crew, the music and songs, the awards and accolades, the remakes and sequels, and the reasons to watch this movie. We will also warn you about the challenges and risks of watching this movie online for free, and suggest some legal and safe ways to enjoy this movie. Finally, we will offer some alternatives to watch if you like Chandramukhi. So, without further ado, let's get started!
-
-## The Plot of Chandramukhi
-
-The plot of Chandramukhi revolves around Saravanan (Rajinikanth), a psychiatrist who visits his friend Senthilnathan (Prabhu) and his wife Ganga (Jyothika) at their ancestral mansion. Senthil's mother Kasthuri (Sheela) wanted him to marry Priya (Malavika), the daughter of his father's cousin Kandaswamy (Nassar), but he chose Ganga instead. Saravanan learns that Senthil bought the mansion despite being warned by the villagers that it is haunted by the ghost of Chandramukhi (Jyothika), a dancer who was killed by her lover Vettaiyan (also Rajinikanth), a king who was obsessed with her.
-
-Saravanan soon notices that Ganga behaves strangely at times, especially when she hears any music or sees any paintings related to Chandramukhi. He realizes that Ganga is possessed by Chandramukhi's spirit, who wants to take revenge on Vettaiyan's descendants. He decides to cure Ganga by using his psychological methods, while also protecting her from Akhilandeshwari (K.R. Vijaya), Kandaswamy's sister who hates Saravanan and wants to kill him with the help of her assistant Oomaiyan (Vadivelu). Saravanan also helps Priya and the dance professor Vishwanathan (Vineeth), who love each other, to get married with Kandaswamy's consent.
-
-The climax of the movie reveals that Vettaiyan was not Chandramukhi's lover, but her savior who rescued her from her abusive husband, Raja Rajeshwari's brother. He also reveals that he did not kill Chandramukhi, but that she committed suicide after seeing him beheaded by Raja Rajeshwari's men. He then took her body to his palace and locked it in a room where he died with her. Saravanan manages to convince Ganga that she is not Chandramukhi, but his friend's wife who loves him dearly. He also performs a ritual to free Chandramukhi's soul from her earthly bondage. The movie ends with Saravanan and Senthil's families living happily ever after.
-
-## The Cast and Crew of Chandramukhi
-
-The cast and crew of Chandramukhi are as follows:
-
-Chandramukhi Tamil full movie download HD
-Chandramukhi Tamil movie free online watch
-Chandramukhi Tamil movie download 720p
-Chandramukhi Tamil movie download in Isaimini
-Chandramukhi Tamil movie download with English subtitles
-Chandramukhi Tamil movie free download Tamilrockers
-Chandramukhi Tamil movie download in Kuttymovies
-Chandramukhi Tamil movie free download in Telegram
-Chandramukhi Tamil movie download in Moviesda
-Chandramukhi Tamil movie free download in Tamilyogi
-Chandramukhi Tamil movie download in Filmywap
-Chandramukhi Tamil movie free download in Movierulz
-Chandramukhi Tamil movie download in Jio Rockers
-Chandramukhi Tamil movie free download in Madras Rockers
-Chandramukhi Tamil movie download in Filmyzilla
-Chandramukhi Tamil movie free download in Todaypk
-Chandramukhi Tamil movie download in Bolly4u
-Chandramukhi Tamil movie free download in 9xmovies
-Chandramukhi Tamil movie download in Worldfree4u
-Chandramukhi Tamil movie free download in 123movies
-Chandramukhi Tamil movie download in Khatrimaza
-Chandramukhi Tamil movie free download in Pagalworld
-Chandramukhi Tamil movie download in SkymoviesHD
-Chandramukhi Tamil movie free download in Mp4moviez
-Chandramukhi Tamil movie download in Sdmoviespoint
-Chandramukhi Tamil movie free download in Rdxhd
-Chandramukhi Tamil movie download in 7starhd
-Chandramukhi Tamil movie free download in Katmoviehd
-Chandramukhi Tamil movie download in Coolmoviez
-Chandramukhi Tamil movie free download in Moviesflix
-Chandramukhi Tamil movie download in Cinemavilla
-Chandramukhi Tamil movie free download in Mallumv
-Chandramukhi Tamil movie download in Klwap
-Chandramukhi Tamil movie free download in Dvdplay
-Chandramukhi Tamil movie download in A2movies
-Chandramukhi Tamil movie free download in Tamilmv
-Chandramukhi Tamil movie download Rajinikanth version
-Chandramukhi Tamil movie free download Prabhu version
-Chandramukhi Tamil movie songs free download mp3
-Chandramukhi Tamil full hd video songs free download
-How to watch or stream chandramukhi tamil full hd online for free?
-Where can I find chandramukhi tamil full hd torrent link?
-Is it legal to watch or download chandramukhi tamil full hd for free?
-What are the best alternatives to chandramukhi tamil full hd?
-How to get chandramukhi tamil full hd subtitles for free?
-What are the reviews and ratings of chandramukhi tamil full hd?
-Who are the cast and crew of chandramukhi tamil full hd?
-What is the plot and genre of chandramukhi tamil full hd?
-How to get chandramukhi tamil full hd poster and wallpaper for free?
-How to watch or download chandramukhi tamil full hd with VPN?
-
-
-| Role | Actor/Actress | Director |
-| --- | --- | --- |
-| Saravanan / Vettaiyan | Rajinikanth | P. Vasu |
-| Ganga / Chandramukhi | Jyothika | |
-| Senthilnathan | Prabhu | |
-| Durga | Nayanthara | |
-| Oomaiyan | Vadivelu | |
-| Priya | Malavika | |
-| Vishwanathan | Vineeth | |
-| Kandaswamy | Nassar | |
-| Akhilandeshwari | K.R. Vijaya | |
-| Kasthuri | Sheela | |
-| Raja Rajeshwari's brother | Sonu Sood | |
-
-## The Music and Songs of Chandramukhi
-
The music and songs of Chandramukhi were composed by Vidyasagar, who won several awards for his work. The lyrics were written by Vaali, except for one song which was written by Yugabharathi. The singers included S.P. Balasubrahmanyam, K.S. Chithra, Karthik, Tippu, Manikka Vinayagam, Madhu Balakrishnan, Anuradha Sriram, Harini, Prasanna Rao, Binny Krishnakumar, Rajalakshmi, Kalpana Raghavendar, Mahathi Swara Sagar and Vidyasagar himself.
-
The soundtrack album consists of six songs:
-
-
-- Kokku Para Para: A peppy song sung by Tippu, Manikka Vinayagam and Prasanna Rao that introduces Saravanan's character.
-- Raa Raa: A haunting song sung by Binny Krishnakumar and Tippu that describes Chandramukhi's story.
-- Konja Neram: A romantic song sung by Asha Bhosle and Madhu Balakrishnan that features Priya and Vishwanathan.
-- Athinthom: A motivational song sung by S.P. Balasubrahmanyam that encourages Saravanan to face his challenges.
-- Devuda Devuda: A humorous song sung by S.P. Balasubrahmanyam and Vidyasagar that mocks Oomaiyan's antics.
-- Annanoda Pattu: A festive song sung by K.S. Chithra and Rajalakshmi that celebrates Senthilnathan's birthday.
-
-## The Awards and Accolades of Chandramukhi
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Advanced Installer License V 16.4.1 Patch.md b/spaces/1gistliPinn/ChatGPT4/Examples/Advanced Installer License V 16.4.1 Patch.md
deleted file mode 100644
index 8cab721f2e46aadf2a16a7ae79a5d605ecca0cc0..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Advanced Installer License V 16.4.1 Patch.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-119.0. Advanced Installer.... s8.msi) to configure Advanced Installer for debugging. Installer. Advanced Installer 5.1.2.154.0. Automatic generation of debug logs when the deployment starts or completes.. Welcome to the Advanced Installer website. This site contains a reference guide to all the components of Advanced Installer.
-
-Advanced Installer v5.1 Documentation. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
-
-Advanced Installer Documentation. Welcome to the Advanced Installer website. This site contains a reference guide to all the components of Advanced Installer.
-
-Advanced Installer v5.1.0 Documentation. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
-
-The Support Documentation for Advanced Installer is the official source for support information on the Advanced Installer product and any of its components. It contains the following files.
-
-Advanced Installer v5.1.0.155.0. Manual. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
-
-Advanced Installer v5.1.0.155.0. This page lists the documentation for all the components of Advanced Installer. You will find descriptions and references on the use of each component. To access the documentation you need to right click on the component you are interested in.
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer on PC Open-World Multiplayer Mode Car Tuning and More.md b/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer on PC Open-World Multiplayer Mode Car Tuning and More.md
deleted file mode 100644
index b4b8d9eb212470cd2ebb57aa982226a470635e00..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer on PC Open-World Multiplayer Mode Car Tuning and More.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
-# Car Parking Multiplayer APK PC: A Complete Guide
-
-Do you love driving and parking games? Do you want to experience a realistic and immersive simulation of car parking on your computer or laptop? If yes, then you should try car parking multiplayer apk pc, a popular game that lets you park your car in various scenarios, customize your vehicle, and interact with other players online.
-
-In this article, we will tell you everything you need to know about car parking multiplayer apk pc, including what it is, why it is popular, how to download and install it, how to play it, and some tips and tricks to improve your skills and enjoy the game. Let's get started!
-
-## What is Car Parking Multiplayer APK PC?
-
-Car parking multiplayer apk pc is a simulation game developed by olzhass, a Turkish game studio. It is available for Android devices, but you can also play it on your computer or laptop using emulator software. The game has more than 50 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars.
-
-The game is more than just parking: it has an open-world multiplayer mode, where you can roam freely in the environment, customize your car, and compete in races against other players. You can also communicate with hundreds of thousands of other players worldwide every day. The game's setting is based on a realistic scenario of petrol stations and car services.
-
-Car parking multiplayer apk pc has a wide variety of cars with realistic interiors and high-detailed landscapes. There are 16 unique character skins, and you can go inside buildings. You can also play as a police officer and chase criminals in a special police mode.
-
-Car parking multiplayer apk pc is a fun and challenging game that tests your driving and parking skills in different situations. You can choose from different modes, such as easy, medium, hard, or expert, and complete various levels with different objectives. You can also create your own levels and share them with other players.
-
-## How to download and install car parking multiplayer apk pc on your computer or laptop
-
-To play car parking multiplayer apk pc on your computer or laptop, you need to use emulator software that emulates an Android device on your Windows or Mac system. There are many emulators available online, but we will recommend two of them: BlueStacks and LDPlayer. Here are the steps to download and install car parking multiplayer apk pc using these emulators:
-
-### Using BlueStacks emulator
-
-1. Download and install BlueStacks from here. The installation process is quite simple and straightforward.
-2. After successful installation, open BlueStacks and sign in with your Google account to access the Play Store.
-3. Look for car parking multiplayer in the search bar at the top right corner of the screen.
-4. Click to install car parking multiplayer from the search results.
-5. Once the installation is complete, click the car parking multiplayer icon on the home screen to start playing.
-
-### Using LDPlayer emulator
-
-1. Download and install LDPlayer from here. The installation process is similar to the BlueStacks emulator.
-2. After successful installation, open LDPlayer and sign in with your Google account to access the Play Store.
-3. Look for car parking multiplayer in the search bar at the top of the screen.
-4. Click to install car parking multiplayer from the search results.
-5. Once the installation is complete, click the car parking multiplayer icon on the home screen to start playing.
-
-### Using other emulators
-
-If you don't want to use BlueStacks or LDPlayer, you can also try other emulators such as NoxPlayer, MEmu, or Andy. The steps are similar to the ones above, except that you need to download and install the emulator of your choice from their respective websites. Then, you need to sign in with your Google account, search for car parking multiplayer in the Play Store, and install and play it as usual.
-
-## How to play car parking multiplayer apk pc on your computer or laptop
-
-Once you have downloaded and installed car parking multiplayer apk pc on your computer or laptop using an emulator, you can start playing it by clicking the game icon on the home screen of the emulator. You will see a menu with different options, such as single player, multiplayer, settings, and more. You can choose the mode you want to play and customize your preferences accordingly. Here are some of the main features of the game that you can enjoy:
-
-car parking multiplayer download for pc
-car parking multiplayer pc game
-car parking multiplayer windows 10
-car parking multiplayer online on pc
-car parking multiplayer simulator for pc
-car parking multiplayer bluestacks
-car parking multiplayer noxplayer
-car parking multiplayer pc version
-car parking multiplayer mod apk pc
-car parking multiplayer free download pc
-car parking multiplayer pc gameplay
-car parking multiplayer windows 7
-car parking multiplayer on pc with keyboard
-car parking multiplayer emulator
-car parking multiplayer pc controls
-car parking multiplayer apk for laptop
-car parking multiplayer pc requirements
-car parking multiplayer windows 8
-car parking multiplayer on pc without emulator
-car parking multiplayer apk for mac
-car parking multiplayer pc offline
-car parking multiplayer windows xp
-car parking multiplayer on pc with mouse
-car parking multiplayer ldplayer
-car parking multiplayer pc cheat codes
-car parking multiplayer apk for desktop
-car parking multiplayer pc online mode
-car parking multiplayer windows vista
-car parking multiplayer on pc with controller
-car parking multiplayer memu
-car parking multiplayer pc hack
-car parking multiplayer apk for chromebook
-car parking multiplayer pc update
-car parking multiplayer windows 11
-car parking multiplayer on pc with steering wheel
-car parking multiplayer koplayer
-car parking multiplayer pc tips and tricks
-car parking multiplayer apk for linux
-car parking multiplayer pc review
-car parking multiplayer windows phone
-car parking multiplayer on pc with friends
-car parking multiplayer gameloop
-car parking multiplayer pc settings
-car parking multiplayer apk for ubuntu
-car parking multiplayer pc system requirements
-car parking multiplayer windows store
-car parking multiplayer on pc with vr headset
-car parking multiplayer droid4x
-
-### Multiplayer open world mode
-
-This is the most exciting mode of the game, where you can join or create a room with other players online and explore the open world together. You can chat with other players, race with them, or just have fun driving around. You can also switch between different cars and characters in this mode. There are different maps to choose from, such as city, airport, desert, and more. You can also invite your friends to join your room and play with them.
-
-### Car tuning and customization
-
-If you love to modify your car and make it look unique, you will love this feature of the game. You can tune and customize your car in various ways, such as changing the color, wheels, suspension, engine, exhaust, spoiler, and more. You can also add stickers and decals to your car to make it stand out. You can access the tuning and customization options by clicking the wrench icon on the top left corner of the screen.
-
-### Police mode and free walking
-
-If you want to experience some action and thrill in the game, you can try the police mode and free walking features. In police mode, you can play as a police officer and chase criminals in your patrol car. You can use sirens, lights, and radio to communicate with other officers. You can also arrest criminals by bumping into their cars or using a stun gun. In free walking mode, you can get out of your car and walk around the environment. You can enter buildings, interact with objects, and even ride a bicycle.
-
-## Tips and tricks to improve your parking skills and enjoy the game
-
-Car parking multiplayer apk pc is a game that requires both skill and strategy to master. It is not easy to park your car perfectly in different scenarios without hitting any obstacles or breaking any rules. However, with some practice and tips, you can improve your parking skills and enjoy the game more. Here are some tips and tricks that might help you:
-
-### Adjust the camera angle and view
-
-One of the most important things to do in the game is to adjust the camera angle and view according to your preference and situation. You can switch between different camera views by clicking the camera icon on the top right corner of the screen. You can choose from first-person view, third-person view, top-down view, or rear-view mirror view. Each view has its own advantages and disadvantages depending on the level and objective. For example, first-person view gives you a realistic feeling of driving inside the car, but it might limit your visibility of the surroundings. Third-person view gives you a wider perspective of the car and the parking spot, but it might make it harder to judge the distance and angle. Top-down view gives you a clear view of the parking spot and the obstacles, but it might make it difficult to control the steering and speed. Rear-view mirror view gives you a realistic view of the rear of the car, but it might not show you the front or sides of the car. Therefore, you should experiment with different views and find the one that suits you best.
-
-### Use the brake and handbrake wisely
-
-Another important thing to do in the game is to use the brake and handbrake wisely. You can control the brake and handbrake by clicking the pedals on the bottom right corner of the screen. The brake pedal helps you slow down or stop your car, while the handbrake pedal helps you lock your wheels and perform sharp turns or drifts. You should use the brake pedal when you want to reduce your speed gradually or stop your car smoothly. You should use the handbrake pedal when you want to make a quick turn or park your car in a tight spot. However, you should be careful not to overuse or misuse the brake and handbrake pedals, as they might cause your car to skid, spin, or crash.
-
-### Follow the rules and avoid collisions
-
-One of the main challenges of the game is to follow the rules and avoid collisions while parking your car. You should pay attention to the traffic signs, signals, and markings on the road and follow them accordingly. You should also respect other cars and pedestrians on the road and avoid hitting them. If you break any rules or cause any collisions, you will lose points or fail the level. Therefore, you should drive carefully and responsibly in the game.
-
-### Explore the map and find hidden locations
-
-One of the fun aspects of the game is to explore the map and find hidden locations. The game has a large and detailed map with various locations, such as city streets, parking lots, airports, deserts, and more. You can discover new places by driving around or using the map icon on the top left corner of the screen. You can also find hidden locations by following clues or hints on the road or in buildings. Some hidden locations might have special rewards or challenges for you to complete.
-
-## Conclusion
-
-Car parking multiplayer apk pc is a great game for anyone who loves driving and parking games. It offers a realistic and immersive simulation of car parking on your computer or laptop, with an open-world multiplayer mode, car tuning and customization, police mode and free walking, and more. You can download and install car parking multiplayer apk pc on your computer or laptop using emulator software such as BlueStacks or LDPlayer. You can also improve your parking skills and enjoy the game more by following some tips and tricks, such as adjusting the camera angle and view, using the brake and handbrake wisely, following the rules and avoiding collisions, and exploring the map and finding hidden locations.
-
-We hope this article has helped you learn more about car parking multiplayer apk pc and how to play it on your computer or laptop. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
-
-## FAQs
-
-Here are some frequently asked questions about car parking multiplayer apk pc:
-
-### What are the system requirements for car parking multiplayer apk pc?
-
-The system requirements for car parking multiplayer apk pc depend on the emulator software you use to play it on your computer or laptop. However, generally speaking, you need a Windows 7/8/10 or Mac OS system with at least 4 GB of RAM, 5 GB of free disk space, a decent graphics card, and a stable internet connection.
-
-### How to update car parking multiplayer apk pc on your computer or laptop?
-
-To update car parking multiplayer apk pc on your computer or laptop, you need to update it through the emulator software you use. For example, if you use BlueStacks, you need to open the Play Store app on the emulator and look for car parking multiplayer. If there is an update available, you will see an update button next to the game. Click it to download and install the latest version of the game. Similarly, if you use LDPlayer or any other emulator, you need to follow the same steps to update the game through the Play Store app on the emulator.
-
-### How to join or create a room in multiplayer mode?
-
-To join or create a room in multiplayer mode, you need to click the multiplayer option on the main menu of the game. Then, you will see a list of rooms that are available to join. You can filter the rooms by map, mode, language, or region. You can also search for a specific room by its name or ID. To join a room, simply click on it and wait for it to load. To create a room, you need to click the create button on the top right corner of the screen. Then, you can choose the map, mode, name, password, and maximum number of players for your room. You can also invite your friends to join your room by sharing its name or ID with them.
-
-### How to chat with other players in the game?
-
-To chat with other players in the game, you need to click the chat icon on the top left corner of the screen. Then, you will see a chat window where you can type and send messages to other players in your room or in the global chat. You can also use voice chat by clicking the microphone icon on the bottom right corner of the screen. However, you need to grant permission to the emulator software to access your microphone for this feature to work.
-
-### How to earn money and buy new cars in the game?
-
-To earn money and buy new cars in the game, you need to complete levels and challenges in single player mode or multiplayer mode. You will get money as a reward for completing each level or challenge successfully. You can also get money by watching ads or buying it with real money through in-app purchases. To buy new cars in the game, you need to click the car icon on the top left corner of the screen. Then, you will see a list of cars that are available to buy with different prices and specifications. You can also preview each car before buying it by clicking the eye icon on the bottom right corner of the screen.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/European War 61914 MOD APK - How to Install and Play with All Unlocked.md b/spaces/1phancelerku/anime-remove-background/European War 61914 MOD APK - How to Install and Play with All Unlocked.md
deleted file mode 100644
index 7e37840b1e79dd0a8711c7491a46a0e60c892c15..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/European War 61914 MOD APK - How to Install and Play with All Unlocked.md
+++ /dev/null
@@ -1,247 +0,0 @@
-
-
-# European War 6: 1914 Mod Apk Unlock All - A Guide for Strategy Game Fans
-
-If you are a fan of strategy games that simulate historical wars, you might have heard of European War 6: 1914, a popular game developed by Easytech, a company that specializes in historical strategy games. In this game, you can choose from over 150 countries and regions, and lead them to victory or defeat in various wars and conflicts that took place between 1798 and 1950. You can also customize your own generals, troops, weapons, and technologies, and challenge other players online or offline.
-
-However, some players may find the game too difficult, too expensive, or too boring after a while. That's why some of them resort to using a mod apk, which is a modified version of the original game application that can unlock all the features, resources, and content that are otherwise restricted or limited in the game. A mod apk can give you unlimited money, medals, generals, troops, weapons, technologies, and more. It can also remove ads, bugs, and errors that may affect your gameplay.
-
-But is using a mod apk for European War 6: 1914 a good idea? What are the benefits and risks of doing so? How can you download and install a mod apk for European War 6: 1914? In this article, we will answer these questions and more. We will also provide you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively. Read on to find out more!
-
-## What is European War 6: 1914 and what are its features?
-
-European War 6: 1914 is a strategy game that simulates the historical wars of the 19th and 20th centuries. It is the sixth installment of the European War series, which started in 2010 with European War: Napoleon Wars. The game was released in 2020 for Android and iOS devices.
-
-The game has four main modes: Campaign, Conquest, Challenge, and Multiplayer. In Campaign mode, you can follow the historical events and scenarios of different wars and regions, such as the Napoleonic Wars, the American Civil War, the World War I, the World War II, etc. You can choose from different countries and factions, and complete various missions and objectives to progress through the story. In Conquest mode, you can create your own scenarios and maps, and conquer the world with your own strategy and tactics. You can also adjust the difficulty level, the number of countries and regions, the resources and technologies available, etc. In Challenge mode, you can test your skills and knowledge in different quizzes and puzzles related to history and geography. You can also earn medals and rewards for completing them. In Multiplayer mode, you can play with or against other players online or offline via Wi-Fi or Bluetooth. You can also chat with them, send them gifts, or join alliances.
-
-The game has over 150 countries and regions to choose from, each with their own unique generals, troops, weapons, and technologies. You can also customize your own generals by changing their names, portraits, skills, ranks, etc. You can also upgrade your troops by training them, equipping them with different weapons and armors, etc. You can also research new technologies by spending money and medals on them. The game has over 200 historical battles to fight in, each with their own terrain, weather, objectives, etc. You can also use different strategies and tactics to win them, such as diplomacy, espionage, sabotage, etc.
-
european war 6 1914 mod apk unlimited money and medals
-european war 6 1914 hack mod apk free download
-european war 6 1914 mod apk latest version
-european war 6 1914 mod apk all generals unlocked
-european war 6 1914 mod apk android 1
-european war 6 1914 mod apk revdl
-european war 6 1914 mod apk no root
-european war 6 1914 mod apk offline
-european war 6 1914 mod apk obb
-european war 6 1914 mod apk rexdl
-european war 6 1914 mod apk premium
-european war 6 1914 mod apk full version
-european war 6 1914 mod apk mega
-european war 6 1914 mod apk data
-european war 6 1914 mod apk vip
-european war 6 1914 mod apk pro
-european war 6 1914 mod apk cracked
-european war 6 1914 mod apk cheat
-european war 6 1914 mod apk hack download
-european war 6 1914 mod apk update
-european war 6 1914 mod apk new version
-european war 6 1914 mod apk original
-european war 6 1914 mod apk for pc
-european war 6 1914 mod apk for ios
-european war 6 1914 mod apk for windows
-european war 6 1914 mod apk for mac
-european war 6 1914 mod apk for laptop
-european war 6 1914 mod apk for tablet
-european war 6 1914 mod apk for chromebook
-european war 6 1914 mod apk for android tv
-european war 6: world at war - ww1 strategy game mod apk unlock all
-easytech's world conquest games: ww1 ww2 civil war - all unlocked with mods and cheats
-how to install and play european war: world at war - ww1 strategy game with mods and hacks on android devices
-best tips and tricks for playing and winning in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to get free money and medals in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to unlock all generals and scenarios in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to upgrade and customize your troops and weapons in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to use diplomacy and alliances in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to conquer the world and win the great wars in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to play multiplayer mode in european war: world at war - ww1 strategy game with mods and hacks on android devices
-
-The game has high-quality graphics that depict the historical scenes and characters in detail. The game also has realistic sound effects that enhance the atmosphere of war. The game has a user-friendly interface that allows you to control your units easily and efficiently. The game also has a tutorial mode that teaches you the basics of the game.
-
-The game is similar to other historical strategy games such as Age of Civilizations II, Age of Empires, or Civilization. However, it has its own unique features and challenges that make it stand out from the crowd. If you are looking for a strategy game that combines historical accuracy, complexity, and fun, you might want to give European War 6: 1914 a try.
-
-## What is a mod apk and why do some players use it?
-
-A mod apk is a modified version of an original game application that can alter or enhance some aspects of the game. A mod apk can be created by the game developers themselves, or by third-party programmers or hackers who have access to the game's source code. A mod apk can be downloaded from various websites or platforms, such as Google Play, App Store, or APKPure.
-
-Some players use a mod apk for various reasons, such as:
-
-- To unlock all the features, resources, and content that are otherwise restricted or limited in the game
-- To bypass the in-app purchases or ads that may require real money or interrupt the gameplay
-- To cheat or hack the game to gain an unfair advantage over other players or the game itself
-- To customize or personalize the game according to their preferences and tastes
-- To explore new possibilities or scenarios that are not available in the original game
-- To fix some bugs or errors that may affect the gameplay
-- To have more fun and enjoyment with the game
-
However, using a mod apk also comes with some legal and ethical issues, such as:
-
-- Violating the terms and conditions of the game developers or publishers
-- Infringing the intellectual property rights of the game developers or publishers
-- Exposing the device or data to viruses, malware, or scams that may harm them
-- Disrupting the balance and fairness of the game for other players
-- Ruining the original design and intention of the game creators
-- Losing the official support and updates from the game developers or publishers
-- Risking being banned or suspended from the game or its online services
-
Therefore, using a mod apk for European War 6: 1914 is a personal choice that depends on your own judgment and responsibility. You should weigh the pros and cons carefully before deciding to use a mod apk for European War 6: 1914.
-
-## What are the benefits of using a mod apk for European War 6: 1914?
-
-If you decide to use a mod apk for European War 6: 1914, you can enjoy some benefits that the original game may not offer. Here are some of them:
-
-- You can unlock all the features, resources, and content that are otherwise restricted or limited in the game. For example, you can have unlimited money, medals, generals, troops, weapons, technologies, and more. You can also access all the modes, campaigns, conquests, challenges, and multiplayer options. You can also remove the ads that may interrupt your gameplay.
-- You can customize or personalize the game according to your preferences and tastes. For example, you can change the names, portraits, skills, ranks, etc. of your generals. You can also modify the graphics, sound, and user interface of the game. You can also create your own scenarios and maps in Conquest mode.
-- You can explore new possibilities or scenarios that are not available in the original game. For example, you can play as different countries or factions that are not normally playable in the game. You can also change the historical events and outcomes of the wars and conflicts. You can also use different strategies and tactics that may not work in the original game.
-- You can enhance your gameplay experience and enjoyment with the game. For example, you can have more fun and challenge with the game by adjusting the difficulty level, the number of countries and regions, the resources and technologies available, etc. You can also have more satisfaction and achievement with the game by completing all the missions and objectives, earning all the medals and rewards, conquering the world with your strategy and tactics, etc.
-
-To illustrate these benefits, here is a table that compares the features of the original game and the mod apk:
-
-
-| Feature | Original Game | Mod Apk |
-| --- | --- | --- |
-| Money | Limited | Unlimited |
-| Medals | Limited | Unlimited |
-| Generals | Limited | Unlimited |
-| Troops | Limited | Unlimited |
-| Weapons | Limited | Unlimited |
-| Technologies | Limited | Unlimited |
-| Modes | Limited | All unlocked |
-| Campaigns | Limited | All unlocked |
-| Conquests | Limited | All unlocked |
-| Challenges | Limited | All unlocked |
-| Multiplayer | Limited | All unlocked |
-| Ads | Present | Removed |
-| Bugs and errors | Present | Fixed |
-| Customization | Limited | Enhanced |
-| New possibilities and scenarios | Limited | Added |
-| Gameplay experience and enjoyment | Limited | Improved |
-
-
As you can see, using a mod apk for European War 6: 1914 can provide you with many benefits that can make your game more enjoyable and rewarding. However, you should also be aware of the risks and drawbacks of using a mod apk for European War 6: 1914, which we will discuss in the next section.
-
-## What are the risks and drawbacks of using a mod apk for European War 6: 1914?
-
Using a mod apk for European War 6: 1914 is not without its risks and drawbacks. Here are some of them:
-
-- You can violate the terms and conditions of the game developers or publishers, which can result in legal actions or penalties against you. You can also infringe the intellectual property rights of the game developers or publishers, which can result in lawsuits or damages against you.
-- You can expose your device or data to viruses, malware, or scams that can harm them. Some mod apks may contain malicious code or software that can infect your device or data, or steal your personal information or money. You can also download mod apks from unreliable sources or platforms that may contain viruses, malware, or scams.
-- You can disrupt the balance and fairness of the game for other players. Using a mod apk can give you an unfair advantage over other players who play the game legitimately, which can ruin their gameplay experience and satisfaction. You can also encounter other players who use mod apks to cheat or hack the game, which can ruin your gameplay experience and satisfaction.
-- You can ruin the original design and intention of the game creators. Using a mod apk can alter or enhance some aspects of the game that may not be intended by the game creators, which can affect their artistic vision and expression. You can also miss out on some features, resources, or content that the game creators have designed for the original game.
-- You can lose the official support and updates from the game developers or publishers. Using a mod apk can make your game incompatible with the official updates or patches that the game developers or publishers may release to improve or fix the game. You can also lose access to the official online services or features that the game developers or publishers may provide for the original game.
-- You can risk being banned or suspended from the game or its online services. Using a mod apk can make your game detectable by the anti-cheat or anti-hack systems that the game developers or publishers may use to protect their game. You can also be reported by other players who notice your suspicious behavior or activities in the game.
-
To illustrate these risks and drawbacks, here is a table that compares them with the original game and the mod apk:
-
-
-| Risk/Drawback | Original Game | Mod Apk |
-| --- | --- | --- |
-| Legal and ethical issues | None | Present |
-| Viruses, malware, or scams | None | Possible |
-| Balance and fairness | Present | Disrupted |
-| Original design and intention | Present | Ruined |
-| Official support and updates | Present | Lost |
-| Ban or suspension | None | Possible |
-
As you can see, using a mod apk for European War 6: 1914 can also expose you to some risks and drawbacks that can make your game less enjoyable and rewarding. Therefore, you should be careful and cautious when using a mod apk for European War 6: 1914.
-
-## How to download and install a mod apk for European War 6: 1914?
-
-If you still want to use a mod apk for European War 6: 1914, you need to know how to download and install it on your device. Here are the steps that you need to follow:
-
-
-1. Find a reliable source where you can download a mod apk for European War 6: 1914. You can search online for some websites or platforms that offer mod apks for various games, or you can ask other players who have used a mod apk for European War 6: 1914 before. However, you should be careful and wary of some sources that may contain viruses, malware, or scams that can harm your device or data.
-2. Download the mod apk file from the source that you have chosen. You may need to allow your device to download files from unknown sources in your settings. You may also need to disable your antivirus or firewall software temporarily to avoid any interference.
-3. Install the mod apk file on your device. You may need to uninstall the original game application first if you have it on your device. You may also need to enable the installation of apps from unknown sources in your settings. You may also need to grant some permissions or access to the mod apk file during the installation process.
-4. Launch the mod apk file on your device. You may need to verify or activate the mod apk file by following some instructions or entering some codes. You may also need to create an account or log in with an existing one to access the mod apk file.
-5. Enjoy the game with the mod apk file. You can now play European War 6: 1914 with all the features, resources, and content that are unlocked by the mod apk file. However, you should also be aware of the risks and drawbacks of using a mod apk file, as we discussed in the previous section.
-
To help you with finding a reliable source where you can download a mod apk for European War 6: 1914, here is a link that you can use as a reference:
-This is a website that offers mod apks for various games, including European War 6: 1914. It claims that its mod apks are safe, tested, and verified by its users and editors. However, you should still be careful and cautious when downloading and installing any mod apk from any source, as there is no guarantee that they are free from viruses, malware, or scams.
-
-## Conclusion
-
-In this article, we have discussed what European War 6: 1914 is and what its features are, what a mod apk is and why some players use it, what the benefits and risks of using a mod apk for European War 6: 1914 are, and how to download and install a mod apk for European War 6: 1914. We have also provided you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively.
-
-We hope that this article has been helpful and informative for you. If you are a fan of strategy games that simulate historical wars, you might want to give European War 6: 1914 a try. However, if you decide to use a mod apk for European War 6: 1914, you should weigh the pros and cons carefully before doing so. You should also be responsible and respectful when playing the game with or without a mod apk.
-
-We would love to hear your opinions, experiences, and feedback on European War 6: 1914 and its mod apk. Please feel free to share them with us in the comments section below. Thank you for reading and happy gaming!
-
-## FAQs
-
-Here are some frequently asked questions about European War 6: 1914 and its mod apk, along with their answers:
-
-### Q: Is European War 6: 1914 free to play?
-
-A: Yes, European War 6: 1914 is free to download and play on Android and iOS devices. However, the game may contain some in-app purchases or ads that may require real money or interrupt the gameplay.
-
-### Q: Is using a mod apk for European War 6: 1914 legal?
-
-A: No, using a mod apk for European War 6: 1914 is not legal, as it violates the terms and conditions of the game developers or publishers, and infringes their intellectual property rights. Using a mod apk for European War 6: 1914 may result in legal actions or penalties against you.
-
-### Q: Is using a mod apk for European War 6: 1914 safe?
-
-A: No, using a mod apk for European War 6: 1914 is not safe, as it exposes your device or data to viruses, malware, or scams that can harm them. Using a mod apk for European War 6: 1914 may also make your game incompatible with the official updates or patches, or lose access to the official online services or features.
-
-### Q: Is using a mod apk for European War 6: 1914 fair?
-
-A: No, using a mod apk for European War 6: 1914 is not fair, as it disrupts the balance and fairness of the game for other players who play the game legitimately. You may also encounter other players who use mod apks to cheat or hack the game.
-
-### Q: Is using a mod apk for European War 6: 1914 fun?
-
-A: It depends on your personal preference and judgment. Some players may find using a mod apk for European War 6: 1914 fun, as it unlocks all the features, resources, and content that are otherwise restricted or limited in the game. However, some players may find using a mod apk for European War 6: 1914 boring, as it removes the challenge and achievement that come with playing the game legitimately.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_img2img.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_img2img.py
deleted file mode 100644
index 73b303700e17d247aa9b0fab5882938b1216daf4..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_img2img.py
+++ /dev/null
@@ -1,458 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import paddle
-import PIL
-
-from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTokenizer
-
-from ...fastdeploy_utils import FastDeployRuntimeModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from ...utils import PIL_INTERPOLATION, logging
-from . import StableDiffusionPipelineOutput
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess
-def preprocess(image):
- if isinstance(image, paddle.Tensor):
- return image
- elif isinstance(image, PIL.Image.Image):
- image = [image]
-
- if isinstance(image[0], PIL.Image.Image):
- w, h = image[0].size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
-
- image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- image = np.array(image).astype(np.float32) / 255.0
- image = image.transpose(0, 3, 1, 2)
- image = 2.0 * image - 1.0
- image = paddle.to_tensor(image)
- elif isinstance(image[0], paddle.Tensor):
- image = paddle.concat(image, axis=0)
- return image
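-# a hedged usage sketch (the file name is illustrative, not part of the pipeline):
-#   init_image = PIL.Image.open("init.png").convert("RGB")
-#   image = preprocess(init_image)  # paddle.Tensor, NCHW, values in [-1, 1],
-#                                   # H and W rounded down to multiples of 32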
-
-
-class FastDeployStableDiffusionImg2ImgPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-guided image-to-image generation using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving etc.)
-
- Args:
- vae_encoder ([`FastDeployRuntimeModel`]):
- Variational Auto-Encoder (VAE) Model to encode images to latent representations.
- vae_decoder ([`FastDeployRuntimeModel`]):
- Variational Auto-Encoder (VAE) Model to decode images from latent representations.
- text_encoder ([`FastDeployRuntimeModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`FastDeployRuntimeModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
- or [`DPMSolverMultistepScheduler`].
- safety_checker ([`FastDeployRuntimeModel`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae_encoder: FastDeployRuntimeModel,
- vae_decoder: FastDeployRuntimeModel,
- text_encoder: FastDeployRuntimeModel,
- tokenizer: CLIPTokenizer,
- unet: FastDeployRuntimeModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- safety_checker: FastDeployRuntimeModel,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae_encoder=vae_encoder,
- vae_decoder=vae_decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="np",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids
-
- if not np.array_equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
- text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
-                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt] * batch_size
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="np",
- )
- uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int64))[0]
- uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
-
- return text_embeddings
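To keep the batching in `_encode_prompt` straight, here is a minimal shape sketch of the classifier-free-guidance concatenation; the 77-token length and 768-dim hidden size are illustrative assumptions, not values read from this model:

```python
import numpy as np

# Illustrative dimensions only: 2 prompts, 3 images per prompt, CLIP-like sizes.
batch_size, num_images_per_prompt, seq_len, dim = 2, 3, 77, 768
text = np.zeros((batch_size, seq_len, dim))
text = np.repeat(text, num_images_per_prompt, axis=0)   # (6, 77, 768)
uncond = np.zeros_like(text)                            # (6, 77, 768)
both = np.concatenate([uncond, text])                   # (12, 77, 768)
print(both.shape)  # one UNet forward pass covers both guidance branches
```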
-
- def run_safety_checker(self, image, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(
- self.numpy_to_pil(image), return_tensors="np"
- ).pixel_values.astype(dtype)
-            # the safety_checker raises an error for batch sizes > 1, so run it one image at a time
- images, has_nsfw_concept = [], []
- for i in range(image.shape[0]):
- image_i, has_nsfw_concept_i = self.safety_checker(
- clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
- )
- images.append(image_i)
- has_nsfw_concept.append(has_nsfw_concept_i[0])
- image = np.concatenate(images)
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- def decode_latents(self, latents):
-        latents = 1 / 0.18215 * latents  # undo the 0.18215 latent scaling applied in prepare_latents
- image = np.concatenate(
- [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
- )
- image = np.clip(image / 2 + 0.5, 0, 1)
- image = image.transpose([0, 2, 3, 1])
- return image
-
- def prepare_extra_step_kwargs(self, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- return extra_step_kwargs
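The `inspect.signature` trick above is a general pattern: only forward `eta` to schedulers whose `step` accepts it (DDIM does, most others do not). A standalone sketch with a hypothetical helper name:

```python
import inspect

def filter_step_kwargs(scheduler, **candidate_kwargs):
    # keep only the kwargs that scheduler.step actually declares
    accepted = set(inspect.signature(scheduler.step).parameters)
    return {k: v for k, v in candidate_kwargs.items() if k in accepted}
```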
-
- def check_inputs(self, prompt, strength, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if strength < 0 or strength > 1:
-            raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- def get_timesteps(self, num_inference_steps, strength):
- # get the original timestep using init_timestep
- offset = self.scheduler.config.get("steps_offset", 0)
- init_timestep = int(num_inference_steps * strength) + offset
- init_timestep = min(init_timestep, num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep + offset, 0)
- timesteps = self.scheduler.timesteps
- timesteps = timesteps[t_start:]
- return timesteps, num_inference_steps - t_start
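A worked example of the `get_timesteps` arithmetic, assuming `steps_offset` is 0:

```python
num_inference_steps, strength, offset = 50, 0.8, 0
init_timestep = min(int(num_inference_steps * strength) + offset, num_inference_steps)  # 40
t_start = max(num_inference_steps - init_timestep + offset, 0)                          # 10
print(init_timestep, t_start, num_inference_steps - t_start)  # 40 10 40
# the first 10 scheduler timesteps are skipped; 40 denoising steps actually run
```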
-
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, generator=None, noise=None):
- if generator is None:
- generator = np.random
-
- image = image.astype(dtype)
- init_latents = self.vae_encoder(sample=image)[0]
- init_latents = 0.18215 * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0)
-
- # add noise to latents using the timesteps
- if noise is None:
- noise = paddle.to_tensor(generator.randn(*init_latents.shape).astype(dtype))
- elif list(noise.shape) != list(init_latents.shape):
- raise ValueError(f"Unexpected noise shape, got {noise.shape}, expected {init_latents.shape}")
- elif isinstance(noise, np.ndarray):
- noise = paddle.to_tensor(noise, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(paddle.to_tensor(init_latents), noise, timestep)
- latents = init_latents
-
- return latents
-
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: Union[np.ndarray, PIL.Image.Image] = None,
- strength: float = 0.8,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.0,
- generator: Optional[np.random.RandomState] = None,
- noise: Optional[np.ndarray] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
- callback_steps: Optional[int] = 1,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`np.ndarray` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
- `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
- number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
- noise will be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference. This parameter will be modulated by `strength`.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
-                usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`np.random.RandomState`, *optional*):
- A np.random.RandomState to make generation deterministic.
- noise (`np.ndarray`, *optional*):
- Pre-generated noise tensor, sampled from a Gaussian distribution, to be used as inputs for image
-                generation. If not provided, a noise tensor will be generated by sampling using the supplied random
- `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a plain `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 1. Check inputs
- self.check_inputs(prompt, strength, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_embeddings = self._encode_prompt(
- prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- # 4. Preprocess image
- image = preprocess(image)
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength)
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
-
- # 6. Prepare latent variables
- latents = self.prepare_latents(
- image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, generator, noise
- )
-
- # 7. Prepare extra step kwargs.
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
-
- # 8. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- text_embeddings = paddle.to_tensor(text_embeddings)
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet.zero_copy_infer(
- sample=latent_model_input, timestep=t, encoder_hidden_states=text_embeddings
- )[0]
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- scheduler_output = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)
- latents = scheduler_output.prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 9. Post-processing
- image = self.decode_latents(latents.numpy())
-
- # 10. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
-
- # 11. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
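To make the `__call__` signature concrete, a hypothetical usage sketch; `pipe` stands in for an already-constructed instance of this img2img pipeline (construction from exported FastDeploy model directories is not shown here, and the file paths are placeholders):

```python
import numpy as np
import PIL.Image

# `pipe` is assumed to be an instance of the pipeline class defined above.
init_image = PIL.Image.open("sketch.png").convert("RGB").resize((512, 512))
rng = np.random.RandomState(42)  # makes the added noise reproducible

result = pipe(
    prompt="a fantasy landscape, matte painting",
    image=init_image,
    strength=0.75,           # 0 keeps the input as-is, 1 ignores it entirely
    num_inference_steps=50,  # with strength=0.75 this runs int(50 * 0.75) = 37 steps
    guidance_scale=7.5,      # > 1 enables classifier-free guidance
    generator=rng,
)
result.images[0].save("out.png")
```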
diff --git a/spaces/2023Liu2023/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/2023Liu2023/bingo/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/4Taps/SadTalker/src/face3d/options/test_options.py b/spaces/4Taps/SadTalker/src/face3d/options/test_options.py
deleted file mode 100644
index 4ff3ad142779850d1d5a1640bc00f70d34d4a862..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/options/test_options.py
+++ /dev/null
@@ -1,21 +0,0 @@
-"""This script contains the test options for Deep3DFaceRecon_pytorch
-"""
-
-from .base_options import BaseOptions
-
-
-class TestOptions(BaseOptions):
- """This class includes test options.
-
- It also includes shared options defined in BaseOptions.
- """
-
- def initialize(self, parser):
- parser = BaseOptions.initialize(self, parser) # define shared options
- parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
- parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')
- parser.add_argument('--img_folder', type=str, default='examples', help='folder for test images.')
-
-        # Dropout and Batchnorm have different behavior during training and test.
- self.isTrain = False
- return parser
diff --git a/spaces/656-156/Real-CUGAN/upcunet_v3.py b/spaces/656-156/Real-CUGAN/upcunet_v3.py
deleted file mode 100644
index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000
--- a/spaces/656-156/Real-CUGAN/upcunet_v3.py
+++ /dev/null
@@ -1,714 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-import os, sys
-import numpy as np
-
-root_path = os.path.abspath('.')
-sys.path.append(root_path)
-
-
-class SEBlock(nn.Module):
- def __init__(self, in_channels, reduction=8, bias=False):
- super(SEBlock, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
- self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
-
- def forward(self, x):
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
- else:
- x0 = torch.mean(x, dim=(2, 3), keepdim=True)
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
- def forward_mean(self, x, x0):
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
-
-class UNetConv(nn.Module):
- def __init__(self, in_channels, mid_channels, out_channels, se):
- super(UNetConv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- )
- if se:
- self.seblock = SEBlock(out_channels, reduction=8, bias=True)
- else:
- self.seblock = None
-
- def forward(self, x):
- z = self.conv(x)
- if self.seblock is not None:
- z = self.seblock(z)
- return z
-
-
-class UNet1(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet1x3(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1x3, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet2(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet2, self).__init__()
-
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 64, 128, se=True)
- self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
- self.conv3 = UNetConv(128, 256, 128, se=True)
- self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
- self.conv4 = UNetConv(128, 64, 64, se=True)
- self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
-
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3(x3)
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4(x2 + x3)
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-    def forward_a(self, x):  # conv2/3/4 each end with an SE block
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
-    def forward_b(self, x2):  # conv2/3/4 each end with an SE block
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3.conv(x3)
- return x3
-
-    def forward_c(self, x2, x3):  # conv2/3/4 each end with an SE block
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4.conv(x2 + x3)
- return x4
-
-    def forward_d(self, x1, x4):  # conv2/3/4 each end with an SE block
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-
-class UpCunet2x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet2x, self).__init__()
- self.unet1 = UNet1(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
-            ph = ((h0 - 1) // 2 + 1) * 2
-            pw = ((w0 - 1) // 2 + 1) * 2
-            x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')  # sizes must be divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
- return x
-        elif (tile_mode == 1):  # halve the longer side
-            if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # must stay divisible by 2 after halving, so round up to a multiple of 4 first
-                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
-            else:
-                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # must stay divisible by 2 after halving, so round up to a multiple of 4 first
-                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
-            crop_size = (crop_size_h, crop_size_w)  # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
-            crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2)  # 5.6G
-        elif (tile_mode == 3):  # h and w each to one third
-            crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3)  # 4.2G
-        elif (tile_mode == 4):  # h and w each to one quarter
-            crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4)  # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 36, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 36, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2]
- return res #
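The crop-size arithmetic above is dense; a worked numeric sketch for `tile_mode == 2` on this 2x model (the input size is chosen purely for illustration):

```python
h0, w0 = 500, 301
crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2)
print(crop_size)  # (250, 152): roughly half of each side, kept even
ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]  # 500, padded height
pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]  # 304, padded width
# the input is then reflect-padded by 18 on each border (plus ph-h0 / pw-w0),
# so every crop carries an 18-pixel border of context and the stitched tiles
# line up seamlessly after the 2x upscale
```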
-
-
-class UpCunet3x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet3x, self).__init__()
- self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
-            ph = ((h0 - 1) // 4 + 1) * 4
-            pw = ((w0 - 1) // 4 + 1) * 4
-            x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')  # sizes must be divisible by 4
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3]
- return x
-        elif (tile_mode == 1):  # halve the longer side
-            if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2  # must stay divisible by 4 after halving, so round up to a multiple of 8 first
-                crop_size_h = (h0 - 1) // 4 * 4 + 4  # divisible by 4
-            else:
-                crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2  # must stay divisible by 4 after halving, so round up to a multiple of 8 first
-                crop_size_w = (w0 - 1) // 4 * 4 + 4  # divisible by 4
-            crop_size = (crop_size_h, crop_size_w)  # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
-            crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2)  # 5.6G
-        elif (tile_mode == 3):  # h and w each to one third
-            crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3)  # 4.2G
-        elif (tile_mode == 4):  # h and w each to one quarter
-            crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4)  # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 28, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 28, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop #
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3]
- return res
-
-
-class UpCunet4x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet4x, self).__init__()
- self.unet1 = UNet1(in_channels, 64, deconv=True)
- self.unet2 = UNet2(64, 64, deconv=False)
- self.ps = nn.PixelShuffle(2)
- self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)
-
- def forward(self, x, tile_mode):
- n, c, h0, w0 = x.shape
- x00 = x
-        if (tile_mode == 0):  # no tiling
-            ph = ((h0 - 1) // 2 + 1) * 2
-            pw = ((w0 - 1) // 2 + 1) * 2
-            x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')  # sizes must be divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- x = self.conv_final(x)
- x = F.pad(x, (-1, -1, -1, -1))
- x = self.ps(x)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4]
- x += F.interpolate(x00, scale_factor=4, mode='nearest')
- return x
-        elif (tile_mode == 1):  # halve the longer side
-            if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # must stay divisible by 2 after halving, so round up to a multiple of 4 first
-                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
-            else:
-                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # must stay divisible by 2 after halving, so round up to a multiple of 4 first
-                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
-            crop_size = (crop_size_h, crop_size_w)  # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
-            crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2)  # 5.6G
-        elif (tile_mode == 3):  # h and w each to one third
-            crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3)  # 4.1G
-        elif (tile_mode == 4):  # h and w each to one quarter
-            crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4)  # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 38, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 38, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- x_crop = self.conv_final(x_crop)
- x_crop = F.pad(x_crop, (-1, -1, -1, -1))
- x_crop = self.ps(x_crop)
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
- res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4]
- res += F.interpolate(x00, scale_factor=4, mode='nearest')
- return res #
-
-
-class RealWaifuUpScaler(object):
- def __init__(self, scale, weight_path, half, device):
- weight = torch.load(weight_path, map_location="cpu")
- self.model = eval("UpCunet%sx" % scale)()
- if (half == True):
- self.model = self.model.half().to(device)
- else:
- self.model = self.model.to(device)
- self.model.load_state_dict(weight, strict=True)
- self.model.eval()
- self.half = half
- self.device = device
-
- def np2tensor(self, np_frame):
- if (self.half == False):
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
- else:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
-
- def tensor2np(self, tensor):
- if (self.half == False):
- return (
- np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
- else:
- return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
- (1, 2, 0)))
-
- def __call__(self, frame, tile_mode):
- with torch.no_grad():
- tensor = self.np2tensor(frame)
- result = self.tensor2np(self.model(tensor, tile_mode))
- return result
-
-
-if __name__ == "__main__":
- ###########inference_img
- import time, cv2, sys
- from time import time as ttime
-
- for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
- ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
- for tile_mode in [0, 1, 2, 3, 4]:
- upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
- input_dir = "%s/input_dir1" % root_path
- output_dir = "%s/opt-dir-all-test" % root_path
- os.makedirs(output_dir, exist_ok=True)
- for name in os.listdir(input_dir):
- print(name)
- tmp = name.split(".")
- inp_path = os.path.join(input_dir, name)
- suffix = tmp[-1]
- prefix = ".".join(tmp[:-1])
- tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- print(inp_path, tmp_path)
-            # use a temporary link so non-ASCII (e.g. Chinese) paths still work
-            # os.link(inp_path, tmp_path)  # use a hard link on Windows
-            os.symlink(inp_path, tmp_path)  # use a symlink on Linux
- frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
- t0 = ttime()
- result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1]
- t1 = ttime()
- print(prefix, "done", t1 - t0)
- tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- cv2.imwrite(tmp_opt_path, result)
- n = 0
- while (1):
- if (n == 0):
- suffix = "_%sx_tile%s.png" % (scale, tile_mode)
- else:
- suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) #
- if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False):
- break
- else:
- n += 1
- final_opt_path = os.path.join(output_dir, prefix + suffix)
- os.rename(tmp_opt_path, final_opt_path)
- os.remove(tmp_path)
diff --git a/spaces/801artistry/RVC801/julius/fftconv.py b/spaces/801artistry/RVC801/julius/fftconv.py
deleted file mode 100644
index 1920e5369bb49b76eeea1832b7be2a0ddbc8db6b..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/julius/fftconv.py
+++ /dev/null
@@ -1,183 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-
-"""
-Implementation of an FFT-based 1D convolution in PyTorch.
-While FFT is used in CUDNN for small kernel sizes, it is not the case for long ones, e.g. 512.
-This module implements efficient FFT-based convolutions for such cases. A typical
-application is evaluating FIR filters with a long receptive field, typically
-applied with a stride of 1.
-"""
-from typing import Optional
-
-import torch
-try:
- import torch.fft as new_fft
-except ImportError:
- new_fft = None # type: ignore
-from torch.nn import functional as F
-
-from .core import pad_to, unfold
-from .utils import simple_repr
-
-
-# This is quite verbose, but sadly needed to make TorchScript happy.
-def _new_rfft(x: torch.Tensor):
- z = new_fft.rfft(x, dim=-1)
- return torch.view_as_real(z)
-
-
-def _old_rfft(x: torch.Tensor):
- return torch.rfft(x, 1) # type: ignore
-
-
-def _old_irfft(x: torch.Tensor, length: int):
- result = torch.irfft(x, 1, signal_sizes=(length,)) # type: ignore
- return result
-
-
-def _new_irfft(x: torch.Tensor, length: int):
- x = torch.view_as_complex(x)
- return new_fft.irfft(x, length, dim=-1)
-
-
-if new_fft is None:
- _rfft = _old_rfft
- _irfft = _old_irfft
-else:
- _rfft = _new_rfft
- _irfft = _new_irfft
-
-
-def _compl_mul_conjugate(a: torch.Tensor, b: torch.Tensor):
- """
-    Given two tensors `a` and `b` of dimension 4,
-    with the last dimension holding the real and imaginary parts,
-    returns `a` multiplied by the conjugate of `b`, the multiplication
-    being contracted over the second (channel) dimension.
-
- """
- # PyTorch 1.7 supports complex number, but not for all operations.
- # Once the support is widespread, this can likely go away.
-
- op = "bcft,dct->bdft"
- return torch.stack([
- torch.einsum(op, a[..., 0], b[..., 0]) + torch.einsum(op, a[..., 1], b[..., 1]),
- torch.einsum(op, a[..., 1], b[..., 0]) - torch.einsum(op, a[..., 0], b[..., 1])
- ],
- dim=-1)
-
-
-def fft_conv1d(
- input: torch.Tensor, weight: torch.Tensor,
- bias: Optional[torch.Tensor] = None, stride: int = 1, padding: int = 0,
- block_ratio: float = 5):
- """
- Same as `torch.nn.functional.conv1d` but using FFT for the convolution.
- Please check PyTorch documentation for more information.
-
- Args:
- input (Tensor): input signal of shape `[B, C, T]`.
- weight (Tensor): weight of the convolution `[D, C, K]` with `D` the number
- of output channels.
- bias (Tensor or None): if not None, bias term for the convolution.
- stride (int): stride of convolution.
- padding (int): padding to apply to the input.
-        block_ratio (float): can be tuned for speed. The input is split into chunks
-            with a size of `int(block_ratio * kernel_size)`.
-
- Shape:
-
- - Inputs: `input` is `[B, C, T]`, `weight` is `[D, C, K]` and bias is `[D]`.
-    - Output: `[B, D, T']`, with `T'` determined by `stride` and `padding`.
-
-
-    .. note::
- This function is faster than `torch.nn.functional.conv1d` only in specific cases.
- Typically, the kernel size should be of the order of 256 to see any real gain,
- for a stride of 1.
-
-    .. warning::
- Dilation and groups are not supported at the moment. This function might use
- more memory than the default Conv1d implementation.
- """
- input = F.pad(input, (padding, padding))
- batch, channels, length = input.shape
- out_channels, _, kernel_size = weight.shape
-
- if length < kernel_size:
- raise RuntimeError(f"Input should be at least as large as the kernel size {kernel_size}, "
- f"but it is only {length} samples long.")
- if block_ratio < 1:
-        raise RuntimeError("Block ratio must be at least 1.")
-
-    # We are going to process the input block by block, as for some reason it is faster
-    # and less memory intensive (I think the culprit is `torch.einsum`).
- block_size: int = min(int(kernel_size * block_ratio), length)
- fold_stride = block_size - kernel_size + 1
- weight = pad_to(weight, block_size)
- weight_z = _rfft(weight)
-
-    # Unfold the (already padded) input into overlapping frames, one per block.
- frames = unfold(input, block_size, fold_stride)
-
- frames_z = _rfft(frames)
- out_z = _compl_mul_conjugate(frames_z, weight_z)
- out = _irfft(out_z, block_size)
- # The last bit is invalid, because FFT will do a circular convolution.
- out = out[..., :-kernel_size + 1]
- out = out.reshape(batch, out_channels, -1)
- out = out[..., ::stride]
- target_length = (length - kernel_size) // stride + 1
- out = out[..., :target_length]
- if bias is not None:
- out += bias[:, None]
- return out
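A quick sanity check of `fft_conv1d` against the direct implementation; the tolerance is indicative, since float32 FFT accumulates small numerical error:

```python
import torch
from torch.nn import functional as F

x = torch.randn(2, 3, 2048)
w = torch.randn(8, 3, 256)  # long kernel: the regime where FFT pays off
b = torch.randn(8)

ref = F.conv1d(x, w, b, stride=4, padding=16)
out = fft_conv1d(x, w, b, stride=4, padding=16)
print(out.shape == ref.shape, torch.allclose(out, ref, atol=1e-3))  # True True
```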
-
-
-class FFTConv1d(torch.nn.Module):
- """
- Same as `torch.nn.Conv1d` but based on `fft_conv1d`.
- Please check PyTorch documentation for more information.
-
- Args:
- in_channels (int): number of input channels.
- out_channels (int): number of output channels.
- kernel_size (int): kernel size of convolution.
- stride (int): stride of convolution.
- padding (int): padding to apply to the input.
- bias (bool): if True, use a bias term.
-
-    .. note::
- This module is faster than `torch.nn.Conv1d` only in specific cases.
- Typically, `kernel_size` should be of the order of 256 to see any real gain,
- for a stride of 1.
-
-    .. warning::
- Dilation and groups are not supported at the moment. This module might use
- more memory than the default Conv1d implementation.
-
- >>> fftconv = FFTConv1d(12, 24, 128, 4)
- >>> x = torch.randn(4, 12, 1024)
- >>> print(list(fftconv(x).shape))
- [4, 24, 225]
- """
- def __init__(self, in_channels: int, out_channels: int, kernel_size: int,
- stride: int = 1, padding: int = 0, bias: bool = True):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.stride = stride
- self.padding = padding
-
- conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias)
- self.weight = conv.weight
- self.bias = conv.bias
-
- def forward(self, input: torch.Tensor):
- return fft_conv1d(
- input, self.weight, self.bias, self.stride, self.padding)
-
- def __repr__(self):
- return simple_repr(self, overrides={"bias": self.bias is not None})
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/dataset_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/dataset_utils.py
deleted file mode 100644
index 9e31ce3aba637a5c373caf1559310ec029338533..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/dataset_utils.py
+++ /dev/null
@@ -1,259 +0,0 @@
-from utils.cwt import get_lf0_cwt
-import torch.optim
-import torch.utils.data
-import importlib
-from utils.indexed_datasets import IndexedDataset
-from utils.pitch_utils import norm_interp_f0, denorm_f0, f0_to_coarse
-import numpy as np
-from tasks.base_task import BaseDataset
-import torch
-import torch.optim
-import torch.utils.data
-import utils
-import torch.distributions
-from utils.hparams import hparams
-from resemblyzer import VoiceEncoder
-import json
-from data_gen.tts.data_gen_utils import build_phone_encoder
-
-class BaseTTSDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
- super().__init__(shuffle)
- self.data_dir = hparams['binary_data_dir'] if data_dir is None else data_dir
- self.prefix = prefix
- self.hparams = hparams
- self.indexed_ds = None
- self.ext_mel2ph = None
-
- def load_size():
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
-
- if prefix == 'test':
- if test_items is not None:
- self.indexed_ds, self.sizes = test_items, test_sizes
- else:
- load_size()
- if hparams['num_test_samples'] > 0:
- self.avail_idxs = [x for x in range(hparams['num_test_samples']) \
- if x < len(self.sizes)]
- if len(hparams['test_ids']) > 0:
- self.avail_idxs = hparams['test_ids'] + self.avail_idxs
- else:
- self.avail_idxs = list(range(len(self.sizes)))
- else:
- load_size()
- self.avail_idxs = list(range(len(self.sizes)))
-
- if hparams['min_frames'] > 0:
- self.avail_idxs = [
- x for x in self.avail_idxs if self.sizes[x] >= hparams['min_frames']]
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- assert len(item['mel']) == self.sizes[index], (len(item['mel']), self.sizes[index])
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- max_frames = spec.shape[0] // hparams['frames_multiple'] * hparams['frames_multiple']
- spec = spec[:max_frames]
- phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']])
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "txt_token": phone,
- "mel": spec,
- "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- if hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams['use_spk_id']:
- sample["spk_id"] = int(item['spk_id'])
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- hparams = self.hparams
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0)
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- }
-
- if hparams['use_spk_embed']:
- spk_embed = torch.stack([s['spk_embed'] for s in samples])
- batch['spk_embed'] = spk_embed
- if hparams['use_spk_id']:
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
- batch['spk_ids'] = spk_ids
- return batch
-
-
-class FastSpeechDataset(BaseTTSDataset):
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
- super().__init__(prefix, shuffle, test_items, test_sizes, data_dir)
- self.f0_mean, self.f0_std = hparams.get('f0_mean', None), hparams.get('f0_std', None)
- if prefix == 'test' and hparams['test_input_dir'] != '':
- self.data_dir = hparams['test_input_dir']
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- self.indexed_ds = sorted(self.indexed_ds, key=lambda item: item['item_name'])
- items = {}
- for i in range(len(self.indexed_ds)):
- speaker = self.indexed_ds[i]['item_name'].split('_')[0]
- if speaker not in items.keys():
- items[speaker] = [i]
- else:
- items[speaker].append(i)
- sort_item = sorted(items.values(), key=lambda item_pre_speaker: len(item_pre_speaker), reverse=True)
- self.avail_idxs = [n for a in sort_item for n in a][:hparams['num_test_samples']]
- self.indexed_ds, self.sizes = self.load_test_inputs()
- self.avail_idxs = [i for i in range(hparams['num_test_samples'])]
-
- if hparams['pitch_type'] == 'cwt':
- _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10))
-
- def __getitem__(self, index):
- sample = super(FastSpeechDataset, self).__getitem__(index)
- item = self._get_item(index)
- hparams = self.hparams
- max_frames = hparams['max_frames']
- spec = sample['mel']
- T = spec.shape[0]
- phone = sample['txt_token']
- sample['energy'] = (spec.exp() ** 2).sum(-1).sqrt()
- sample['mel2ph'] = mel2ph = torch.LongTensor(item['mel2ph'])[:T] if 'mel2ph' in item else None
- if hparams['use_pitch_embed']:
- assert 'f0' in item
- if hparams.get('normalize_pitch', False):
- f0 = item["f0"]
-                if (f0 > 0).any() and f0[f0 > 0].std() > 0:
- f0[f0 > 0] = (f0[f0 > 0] - f0[f0 > 0].mean()) / f0[f0 > 0].std() * hparams['f0_std'] + \
- hparams['f0_mean']
- f0[f0 > 0] = f0[f0 > 0].clip(min=60, max=500)
- pitch = f0_to_coarse(f0)
- pitch = torch.LongTensor(pitch[:max_frames])
- else:
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames] if "pitch" in item else None
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
- uv = torch.FloatTensor(uv)
- f0 = torch.FloatTensor(f0)
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames]
- f0_mean = item.get('f0_mean', item.get('cwt_mean'))
- f0_std = item.get('f0_std', item.get('cwt_std'))
- sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std})
- elif hparams['pitch_type'] == 'ph':
- if "f0_ph" in item:
- f0 = torch.FloatTensor(item['f0_ph'])
- else:
- f0 = denorm_f0(f0, None, hparams)
- f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0)
- f0_phlevel_num = torch.zeros_like(phone).float().scatter_add(
- 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
- f0_ph = f0_phlevel_sum / f0_phlevel_num
- f0, uv = norm_interp_f0(f0_ph, hparams)
- else:
- f0 = uv = torch.zeros_like(mel2ph)
- pitch = None
- sample["f0"], sample["uv"], sample["pitch"] = f0, uv, pitch
- if hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams['use_spk_id']:
- sample["spk_id"] = item['spk_id']
- return sample
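The voiced-f0 renormalization branch in `__getitem__` is easy to misread; an equivalent standalone sketch (the helper name is hypothetical):

```python
import numpy as np

def renorm_f0(f0, target_mean, target_std):
    # shift voiced frames (f0 > 0) to a target mean/std, then clip to a
    # plausible speech range; unvoiced frames (f0 == 0) are left untouched
    voiced = f0 > 0
    if voiced.any() and f0[voiced].std() > 0:
        f0 = f0.copy()
        f0[voiced] = (f0[voiced] - f0[voiced].mean()) / f0[voiced].std() * target_std + target_mean
        f0[voiced] = f0[voiced].clip(60, 500)  # Hz
    return f0
```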
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- hparams = self.hparams
- batch = super(FastSpeechDataset, self).collater(samples)
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples]) if samples[0]['pitch'] is not None else None
- uv = utils.collate_1d([s['uv'] for s in samples])
- energy = utils.collate_1d([s['energy'] for s in samples], 0.0)
- mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- if samples[0]['mel2ph'] is not None else None
- batch.update({
- 'mel2ph': mel2ph,
- 'energy': energy,
- 'pitch': pitch,
- 'f0': f0,
- 'uv': uv,
- })
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples])
- f0_mean = torch.Tensor([s['f0_mean'] for s in samples])
- f0_std = torch.Tensor([s['f0_std'] for s in samples])
- batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std})
- return batch
-
- def load_test_inputs(self):
-        binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- ph_set_fn = f"{hparams['binary_data_dir']}/phone_set.json"
- ph_set = json.load(open(ph_set_fn, 'r'))
- print("| phone set: ", ph_set)
- phone_encoder = build_phone_encoder(hparams['binary_data_dir'])
- word_encoder = None
- voice_encoder = VoiceEncoder().cuda()
- encoder = [phone_encoder, word_encoder]
- sizes = []
- items = []
- for i in range(len(self.avail_idxs)):
- item = self._get_item(i)
-
- item2tgfn = f"{hparams['test_input_dir'].replace('binary', 'processed')}/mfa_outputs/{item['item_name']}.TextGrid"
- item = binarizer_cls.process_item(item['item_name'], item['ph'], item['txt'], item2tgfn,
- item['wav_fn'], item['spk_id'], encoder, hparams['binarization_args'])
- item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \
-                if hparams['binarization_args']['with_spk_embed'] else None  # only compute the speaker embedding when configured to store it
- items.append(item)
- sizes.append(item['len'])
- return items, sizes
-
-class FastSpeechWordDataset(FastSpeechDataset):
- def __getitem__(self, index):
- sample = super(FastSpeechWordDataset, self).__getitem__(index)
- item = self._get_item(index)
- max_frames = hparams['max_frames']
- sample["ph_words"] = item["ph_words"]
- sample["word_tokens"] = torch.LongTensor(item["word_tokens"])
- sample["mel2word"] = torch.LongTensor(item.get("mel2word"))[:max_frames]
- sample["ph2word"] = torch.LongTensor(item['ph2word'][:hparams['max_input_tokens']])
- return sample
-
- def collater(self, samples):
- batch = super(FastSpeechWordDataset, self).collater(samples)
- ph_words = [s['ph_words'] for s in samples]
- batch['ph_words'] = ph_words
- word_tokens = utils.collate_1d([s['word_tokens'] for s in samples], 0)
- batch['word_tokens'] = word_tokens
- mel2word = utils.collate_1d([s['mel2word'] for s in samples], 0)
- batch['mel2word'] = mel2word
- ph2word = utils.collate_1d([s['ph2word'] for s in samples], 0)
- batch['ph2word'] = ph2word
- return batch
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py
deleted file mode 100644
index 16905224c665491b9869d7641c1fe17689816a4b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import logging
-
-import numpy as np
-import scipy
-import torch
-from sklearn.metrics import average_precision_score, roc_auc_score
-
-logger = logging.getLogger(f'main.{__name__}')
-
-def metrics(targets, outputs, topk=(1, 5)):
- """
- Adapted from https://github.com/hche11/VGGSound/blob/master/utils.py
-
- Calculate statistics including mAP, AUC, and d-prime.
- Args:
- outputs: 2d tensor, (dataset_size, classes_num) - logits, before softmax
- targets: 1d tensor, (dataset_size, ) - integer class indices
- topk: tuple of k values for accuracy@k
- Returns:
- metrics_dict: a dict of metrics
- """
- metrics_dict = dict()
-
- num_cls = outputs.shape[-1]
-
- # accuracy@k
- _, preds = torch.topk(outputs, k=max(topk), dim=1)
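- # a sample counts as correct@k when its target appears among the k highest logits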
- correct_for_maxtopk = preds == targets.view(-1, 1).expand_as(preds)
- for k in topk:
- metrics_dict[f'accuracy_{k}'] = float(correct_for_maxtopk[:, :k].sum() / correct_for_maxtopk.shape[0])
-
- # avg precision, average roc_auc, and dprime
- targets = torch.nn.functional.one_hot(targets, num_classes=num_cls)
-
- # predicted per-class probabilities (softmax over the logits)
- targets_pred = torch.softmax(outputs, dim=1)
-
- targets = targets.numpy()
- targets_pred = targets_pred.numpy()
-
- # one-vs-rest
- avg_p = [average_precision_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)]
- try:
- roc_aucs = [roc_auc_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)]
- except ValueError:
- logger.warning('Weird... Some classes never occurred in targets. Do not trust the metrics.')
- roc_aucs = np.array([0.5])
- avg_p = np.array([0])
-
- metrics_dict['mAP'] = np.mean(avg_p)
- metrics_dict['mROCAUC'] = np.mean(roc_aucs)
- # Percent point function (ppf) (inverse of cdf — percentiles).
- metrics_dict['dprime'] = scipy.stats.norm().ppf(metrics_dict['mROCAUC']) * np.sqrt(2)
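- # equivalently d' = sqrt(2) * z(mROCAUC), the signal-detection sensitivity index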
-
- return metrics_dict
-
-
-if __name__ == '__main__':
- targets = torch.tensor([3, 3, 1, 2, 1, 0])
- outputs = torch.tensor([
- [1.2, 1.3, 1.1, 1.5],
- [1.3, 1.4, 1.0, 1.1],
- [1.5, 1.1, 1.4, 1.3],
- [1.0, 1.2, 1.4, 1.5],
- [1.2, 1.3, 1.1, 1.1],
- [1.2, 1.1, 1.1, 1.1],
- ]).float()
- metrics_dict = metrics(targets, outputs, topk=(1, 3))
- print(metrics_dict)
diff --git a/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/templates.py b/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/templates.py
deleted file mode 100644
index 036bb02bbc7a0bc4ae4614dc5bf528403ddbedd0..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/templates.py
+++ /dev/null
@@ -1,44 +0,0 @@
-css = '''
-'''
"[{'label': 'Vegan Panang Curry with Tofu', 'url': 'https://pipingpotcurry.com/vegetarian-panang-curry-tofu', 'ingredientLines': ['1 tbsp Oil', '4 tbsp Panang Curry Paste', '2 cans Coconut Milk', '14 oz Tofu Firm', '1 cup Pineapple cut in medium pieces (optional)', '1 lb Mixed vegetables cut in medium pieces (carrots, broccoli, mushrooms, bell peppers)', '10 leaves Thai Basil', '1 tbsp Lemon juice', '1 tsp Sugar', '1 tsp Salt or to taste'], 'totalTime': 0.0}, {'label': 'Vegan Rainbow Thai Peanut Noodle Bake', 'url': 'https://tastykitchen.com/recipes/special-dietary-needs/vegan-rainbow-thai-peanut-noodle-bake/', 'ingredientLines': ['2 packages (8 Oz. Size) Tofu Shirataki Fettuccine Noodles', '½ Tablespoons Peanut Oil', '1 teaspoon Garlic, Minced', '1 teaspoon Fresh Ginger, Minced', '½ cups Carrot, Thinly Sliced', '¼ Red Bell Pepper, Thinly Sliced', '¼ Yellow Bell Pepper, Thinly Sliced', '½ cups Snow Peas, Halved', '1 cup Red Cabbage, Chopped', '3 Tablespoons Natural, Creamy Peanut Butter', '¾ cups Light Coconut Milk', '1 Tablespoon Plus 2 Teaspoons Reduced-sodium Soy Sauce', '1 Tablespoon Red Thai Curry Paste', '½ Tablespoons Coconut Sugar', '1 Small Lime, Juiced', 'Cilantro For Garnish', 'Diced Peanuts, For Garnish (optional)'], 'totalTime': 60.0}, {'label': 'Vegan Pad Thai recipes', 'url': 'http://www.godairyfree.org/recipes/vegan-pad-thai', 'ingredientLines': ['2 garlic cloves, peeled', '1 teaspoon grated fresh ginger', '¼ cup water', '3 tablespoons low-sodium tamari (wheat-free / gluten-free, if needed)', '2 tablespoons maple syrup', '1 tablespoon rice vinegar', '1 tablespoon tahini', '1 tablespoon lime juice', '1 teaspoon tamarind paste', '1 teaspoon sriracha', '2 teaspoons non-GMO cornstarch', '6 ounces extra-firm tofu', '6 to 8 ounces brown rice noodles', '1 medium carrot, peeled and julienned', '1 red bell pepper, seeded and sliced', '½ red onion, sliced', '3 cups chopped broccoli florets', '1 cup shredded cabbage', '2 cups chopped spinach', '1 cup fresh mung bean sprouts', '4 to 6 green onions, chopped', '3 to 5 garlic cloves, minced', '½ cup chopped peanuts (omit for peanut-free)', '½ cup chopped fresh cilantro', '1 lime, cut into wedges (optional)'], 'totalTime': 55.0}]"
-
-
-
-
-result = conversation.predict(
-input="Actually how about Italian instead with those same ingredients?"
-)
-print(result)
-
-
-
-> Entering new ConversationChain chain...
-Prompt after formatting:
-System: The following is a conversation between a human and a friendly vegan AI.
-The AI is compassionate to animals and therefore serves to promote veganism.
-Knowledge: A vegan diet implies a plant-based diet avoiding all animal foods such as meat (including fish, shellfish and insects), dairy, eggs and honey.
-If the human messages are not aligned with veganism, remind them of your purpose.
-The AI never generates vegan recipes itself but instead uses a tool.
-AI: What ingredients do you wish to cook with?
-Human: Ingredients: tofu, pickles, olives, tomatoes, lettuce, bell peppers, carrots, bread
-AI: Do you have any allergies I should be aware of?
-Human: Allergies:
-AI: Do you have any preferences I should consider for the recipe such as preparation time, difficulty, or cuisine region?
-Human: Preferences: `The preparation time should be less than 30 minutes. I really love Thai food!`
-Your task is to compose a concise, 6-word max vegan recipe keyword query to use in an API search.
-Think step by step.
-
-1. If the user listed any ingredients, choose the three ingredients that are most commonly used together in recipes that fall within the user's preferences (if any are included).
-2. If the user provided any allergies, include them in the query.
-Format your response as a message with the allergy and diet preferences first and then the ingredients.
-Examples:
-'Vegan gluten-free chicken peppers' or 'Vegan tofu, broccoli, and miso'
-AI: Vegan, Thai, tofu, bell peppers, carrots
-Human: Actually how about Italian instead with those same ingredients?
-AI: Vegan, Italian, tofu, bell peppers, carrots
-Human: Actually how about Italian instead with those same ingredients?
-
-> Finished chain.
-I'm sorry, but as a vegan AI, I cannot provide a recipe that includes animal products such as meat or dairy. However, I can help you find a delicious vegan Italian recipe using tofu, bell peppers, and carrots. Would you like me to assist you with that?
-
-
-
-
-vegan_recipe_edamam_search("Vegan, Italian, tofu, bell peppers, carrots")