diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Original Maps Free Download [REPACK].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Original Maps Free Download [REPACK].md
deleted file mode 100644
index 3e49123b0d26baf20ff4d1c98082ffa324e0ee4c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Original Maps Free Download [REPACK].md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download CS 1.6 Original Maps for Free
-
Counter-Strike 1.6 is one of the most popular and legendary first-person shooter games of all time. It has a huge fan base and a rich history of competitive and casual gameplay. One of the reasons why CS 1.6 is so beloved by many players is its variety of maps, which offer different scenarios, objectives, and strategies.
-
However, if you want to play CS 1.6 on your computer, you might not have access to all the original maps that were released with the game. Some of them might be missing, corrupted, or outdated. This can be frustrating, especially if you want to enjoy the classic experience of CS 1.6.
Fortunately, there is a way to download CS 1.6 original maps for free and install them on your game. In this article, we will show you how to do it step by step.
-
Step 1: Find a reliable source for CS 1.6 original maps
-
The first thing you need to do is to find a website that offers CS 1.6 original maps for download. There are many websites that claim to have them, but not all of them are trustworthy or safe. Some of them might contain viruses, malware, or fake files that can harm your computer or your game.
-
Therefore, you need to be careful and choose a reputable source for CS 1.6 original maps. One of the best websites that we recommend is Tsarvar.com[^1^], which has a large database of CS 1.6 maps, including the original ones. You can also check out GameBanana.com[^2^] or CS16.info[^3^], which are also popular and reliable websites for CS 1.6 mods.
-
Step 2: Download the CS 1.6 original maps that you want
-
Once you have found a website that offers CS 1.6 original maps, you can browse through their categories and search for the ones that you want. Some of the most famous and played CS 1.6 original maps are de_dust2, de_inferno, de_nuke, cs_assault, cs_italy, de_train, de_aztec, and many more.
-
To download a map, simply click on its name or image and follow the instructions on the website. Usually, you will have to click on a download button or link and wait for the file to be downloaded on your computer. The file will be in .zip or .rar format, which means that you will need a program like WinRAR or 7-Zip to extract it.
-
Step 3: Install the CS 1.6 original maps on your game
-
After you have downloaded the CS 1.6 original maps that you want, you need to install them on your game. To do this, you need to locate the folder where your CS 1.6 game is installed on your computer. Usually, it will be in C:\Program Files\Valve\Counter-Strike or C:\Program Files (x86)\Valve\Counter-Strike.
-
-
Then, you need to open the folder where you extracted the map files and copy them to the cstrike\maps folder inside your CS 1.6 game folder. For example, if you downloaded de_dust2.zip and extracted it to your desktop, you need to copy de_dust2.bsp and de_dust2.res files from your desktop to C:\Program Files\Valve\Counter-Strike\cstrike\maps.
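For instance, if you extracted the files to your desktop as in the example above, the copy step from a Windows Command Prompt might look like this (adjust the paths if your game or the extracted files are somewhere else):
copy "%USERPROFILE%\Desktop\de_dust2.bsp" "C:\Program Files\Valve\Counter-Strike\cstrike\maps\"
copy "%USERPROFILE%\Desktop\de_dust2.res" "C:\Program Files\Valve\Counter-Strike\cstrike\maps\"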
-
After you have copied all the map files that you want to install, you can launch your CS 1.6 game and enjoy playing on the original maps.
-
Conclusion
-
CS 1.6 is a classic game that deserves to be played with its original maps. By following this guide, you can download CS 1.6 original maps for free and install them on your game easily and safely.
-
We hope that this article was helpful and informative for you. If you have any questions or comments, feel free to leave them below.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyworship 2009 Crack Serial Number Pros and Cons of Using It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyworship 2009 Crack Serial Number Pros and Cons of Using It.md
deleted file mode 100644
index 6c33e72429d3708a9ef45a8c7021817b10265c7d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Easyworship 2009 Crack Serial Number Pros and Cons of Using It.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
Easyworship 2009 Crack Serial Number: How to Download and Install
-
If you are looking for software that can help you create multimedia presentations for your church or worship service, you might have heard of Easyworship 2009. This software is designed specifically to help churches project worship songs, Bible text, videos, nursery alerts, sermon notes, live cameras, DVDs and PowerPoint presentations on an overhead or video projection system using a single computer with dual monitor outputs.
However, Easyworship 2009 is not free software. You need to purchase a license key to activate it and use all its features. But what if you don't have the budget or the permission to buy it? Is there a way to get Easyworship 2009 crack serial number for free?
-
The answer is yes, but it comes with some risks and limitations. In this article, we will show you how to download and install Easyworship 2009 crack serial number, as well as the pros and cons of using it.
-
How to Download and Install Easyworship 2009 Crack Serial Number
-
There are many websites that claim to offer Easyworship 2009 crack serial number for free. However, not all of them are reliable or safe. Some of them may contain viruses, malware, spyware or other harmful programs that can damage your computer or steal your personal information. Therefore, you need to be careful and choose a trusted source.
-
One of the websites that we found that offers Easyworship 2009 crack serial number is FullSoftDL. This website provides a download link for Easyworship 2009 installer and patch by MaRk15, which is supposed to activate the software without requiring a license key. Here are the steps to follow:
-
-
Go to FullSoftDL and scroll down to find the download link for Easyworship 2009 installer and patch by MaRk15.
-
Click on the link and wait for the download to finish.
-
Extract the zip file and run the installer.
-
Follow the instructions on the screen to install Easyworship 2009 on your computer.
-
After the installation is complete, do not run the software yet.
-
Go back to the extracted folder and run the patch by MaRk15 as administrator.
-
Select EasyWorship.exe from the installation directory and click on Patch.
-
A message will appear saying that the patching is done.
-
Now you can run Easyworship 2009 and enjoy its features without needing a license key.
-
-
The Pros and Cons of Using Easyworship 2009 Crack Serial Number
-
Using Easyworship 2009 crack serial number may seem like a good idea if you want to save money or avoid paying for a license. However, it also has some serious drawbacks and risks that you need to consider before deciding to use it. Here are some of the pros and cons of using Easyworship 2009 crack serial number:
-
-
The Pros
-
-
You can use Easyworship 2009 for free without paying for a license key.
-
You can access all the features and functions of Easyworship 2009 without any restrictions.
-
You can create multimedia presentations for your church or worship service with ease and convenience.
-
-
The Cons
-
-
You may violate the intellectual property rights of the software developer and face legal consequences.
-
You may expose your computer to viruses, malware, spyware or other harmful programs that can compromise your security and privacy.
-
You may not receive any updates, support or customer service from the software developer.
-
You may experience bugs, errors or crashes that can affect your presentation quality and performance.
-
You may miss out on new features and improvements that are available in newer versions of Easyworship.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint Pc Key [REPACK].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint Pc Key [REPACK].md
deleted file mode 100644
index c2c6669af2da4509ad7310f4a25e5656fa6841a0..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ghost Recon Breakpoint Pc Key [REPACK].md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
How to Get Ghost Recon Breakpoint PC Key for Cheap
-
Ghost Recon Breakpoint is a third-person tactical shooter video game for PC developed by Ubisoft Paris and published by Ubisoft. It is the 11th installment of the Ghost Recon series and a sequel to Ghost Recon Wildlands. In Ghost Recon Breakpoint, you play as a Ghost, an elite US Special Operations soldier, who is stranded on a fictional island called Auroa. You have to survive and fight against your former brothers in arms, the Wolves, who have taken control of Auroa and its advanced drone technology.
-
If you are looking for a way to get Ghost Recon Breakpoint PC key for cheap, you have come to the right place. In this article, we will show you some of the best ways to save money and get the best deal on Ghost Recon Breakpoint PC key. Here are some of the options that you can try:
Buy from G2A: G2A is a global marketplace that sells digital products, such as game keys, gift cards, software, and more. You can find Ghost Recon Breakpoint PC key for a very low price on G2A, as low as $12.35. G2A offers instant delivery, secure payment methods, and customer support. However, you should be careful when buying from G2A, as some sellers may sell fraudulent or region-locked keys. You should always check the seller's rating, feedback, and product description before making a purchase.
-
Buy from CDKeys: CDKeys is another online platform that sells digital products at discounted prices. You can find Ghost Recon Breakpoint PC key for $10.19 on CDKeys, which is 86% off the original price. CDKeys also offers instant delivery, secure payment methods, and customer support. However, you should note that the key is only valid in the Europe and UK regions, so make sure your account and location are eligible for activation there before buying.
-
Buy from Eneba: Eneba is a relatively new online store that sells digital products at competitive prices. You can find Ghost Recon Breakpoint PC key for $13.99 on Eneba, which is 81% off the original price. Eneba also offers instant delivery, secure payment methods, and customer support. However, you should note that the key is only valid in EMEA regions (Europe, Middle East, Africa), so make sure your account and location are eligible for activation there before buying.
-
-
These are some of the best ways to get Ghost Recon Breakpoint PC key for cheap. However, there are many more options available in the market that might suit your needs better. You can check out our list of the best online stores to buy game keys for more suggestions.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Illustrator CS5 V15.0.2 Lite Portable Free Download ((BETTER)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Illustrator CS5 V15.0.2 Lite Portable Free Download ((BETTER)).md
deleted file mode 100644
index 18dbee2d25849ccb068f9ab758aae59700767200..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Illustrator CS5 V15.0.2 Lite Portable Free Download ((BETTER)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Illustrator CS5 v15.0.2 Lite Portable free download
-
-Autocad 2015 portable free download What can I ... Adobe Illustrator CS5 V15.0.2 Lite Portable Keygen.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Catch Battle and Trade Pokmon in the Real World with Pokmon GO.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Catch Battle and Trade Pokmon in the Real World with Pokmon GO.md
deleted file mode 100644
index 087dcb22f4bba00382258d278f4b48cd3b5e8025..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Catch Battle and Trade Pokmon in the Real World with Pokmon GO.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
How to Download and Play Pokémon GO on Your iPhone or iPad
-
Do you want to catch your favorite Pokémon in augmented reality as you explore the world around you? Do you want to join millions of other trainers in epic battles, raids, and events? Do you want to have fun and exercise at the same time? If you answered yes to any of these questions, then you should try Pokémon GO, the global gaming sensation that has taken the world by storm.
Pokémon GO is an immersive open-world experience that enables you to live the Pokémon adventure in augmented reality. You can find and catch hundreds of different Pokémon as you walk, bike, or drive around your neighborhood, city, or country. You can also battle other players online in PvP mode, team up with other trainers to catch powerful Pokémon in raid battles, trade and transfer Pokémon with your friends, and much more.
-
Pokémon GO is free-to-play and offers in-game purchases. It is optimized for smartphones, not tablets. It requires an internet connection and GPS capabilities. It is compatible with iPhone 5s or later devices with iOS 9 or later installed. For more information, visit the official website at [5](https://pokemongolive.com).
-
How to download Pokémon GO from the App Store
-
Downloading and installing Pokémon GO on your iPhone or iPad is very easy. Just follow these steps:
-
-
Open the App Store on your device.
-
Search for "Pokémon GO" or tap on this link: [1](https://apps.apple.com/us/app/pokémon-go/id1094591345/).
-
Tap on "Get" and then "Install" to download the game.
-
Wait for the game to finish downloading and then tap on "Open" to launch it.
-
Allow the game to access your location, camera, motion, and health data when prompted.
-
-
How to change your region settings if the game is not available in your country
-
If you live in a country where Pokémon GO is not officially released yet, you can still download and play it by changing your region settings. Here's how:
-
-
Go to Settings on your device.
-
Tap on "Apple ID" and then "iTunes & App Store".
-
Tap on your Apple ID at the top and then "View Apple ID".
-
Tap on "Country/Region" and then "Change Country or Region".
-
Select a country where Pokémon GO is available, such as the United States or Australia.
-
Agree to the terms and conditions and enter a valid payment method for that country (you can use a gift card or a prepaid card).
-
Go back to the App Store and download Pokémon GO as described above.
-
-
How to start playing Pokémon GO
-
Once you have downloaded and installed Pokémon GO on your device, you are ready to start your Pokémon journey. Here are the basics of playing the game:
-
The basics of creating your account, choosing your starter Pokémon, and catching Pokémon in the real world
-
When you launch the game for the first time, you will be greeted by Professor Willow, who will guide you through the process of creating your account and choosing your avatar. You can sign in with your Google account, Facebook account, or Pokémon Trainer Club account. You can also customize your avatar's appearance, name, and clothing.
-
After that, you will be asked to choose your starter Pokémon from three options: Bulbasaur, Charmander, or Squirtle. You can also catch Pikachu as your starter if you walk away from the other three a few times. To catch a Pokémon, you need to tap on it on the map and then flick a Poké Ball at it on the capture screen. You can also use berries and different types of Poké Balls to increase your chances of catching a Pokémon.
-
pokemon go app store download
-pokemon go ios apk download
-pokemon go iphone app install
-pokemon go ipad apk free
-pokemon go apple watch app
-pokemon go ar mode ios apk
-pokemon go adventure sync apple health
-pokemon go app store update
-pokemon go ios apk hack
-pokemon go iphone app not working
-pokemon go ipad apk mod
-pokemon go apple watch not syncing
-pokemon go ar mode not working ios
-pokemon go adventure sync not working apple
-pokemon go app store link
-pokemon go ios apk 2021
-pokemon go iphone app crashing
-pokemon go ipad apk latest version
-pokemon go apple watch features
-pokemon go ar+ mode ios apk
-pokemon go adventure sync apple watch
-pokemon go app store rating
-pokemon go ios apk spoofing
-pokemon go iphone app size
-pokemon go ipad apk no jailbreak
-pokemon go apple watch discontinued
-pokemon go ar core ios apk
-pokemon go adventure sync apple health kit
-pokemon go app store reviews
-pokemon go ios apk reddit
-pokemon go iphone app permissions
-pokemon go ipad apk without tutuapp
-pokemon go apple watch battery drain
-pokemon go ar scan ios apk
-pokemon go adventure sync apple fitness+
-pokemon go app store country change
-pokemon go ios apk ipa
-pokemon go iphone app settings
-pokemon go ipad apk with joystick
-pokemon go apple watch eggs
-pokemon go ar mapping ios apk
-pokemon go adventure sync apple motion and fitness
-pokemon go app store revenue
-pokemon go ios apk tutuapp
-pokemon go iphone app icon
-pokemon go ipad apk cydia impactor
-pokemon go apple watch steps
-pokemon go ar photography ios apk
-
Pokémon GO uses your location and GPS to show you nearby Pokémon on the map. You can see their silhouettes on the bottom right corner of the screen and tap on them to track them. You can also use items called Incense and Lure Modules to attract more Pokémon to your location. You can find these items in PokéStops, which are landmarks such as monuments, statues, or buildings that you can spin to get rewards.
-
How to use the AR mode and the Poké Ball
-
Pokémon GO has an optional feature called AR mode, which stands for augmented reality. This means that you can see the Pokémon as if they were in the real world, using your device's camera. To enable or disable AR mode, you can toggle the switch on the top right corner of the capture screen. AR mode can make catching Pokémon more fun and immersive, but it can also drain your battery faster and make it harder to aim your Poké Ball.
-
The Poké Ball is the main tool for catching Pokémon. You can flick it with your finger to throw it at a Pokémon. You need to aim carefully and time your throw well to hit the Pokémon inside the colored circle that appears around it. The smaller the circle, the higher the chance of catching the Pokémon. You can also curve your throw by spinning the Poké Ball before releasing it, which gives you extra XP and increases your catch rate.
-
How to level up, evolve, and power up your Pokémon
-
As you catch more Pokémon, you will earn XP (experience points) and level up as a trainer. Leveling up will unlock new features and rewards, such as more items, stronger Poké Balls, and access to higher-level raids. You can also earn XP by completing tasks such as spinning PokéStops, hatching eggs, battling in Gyms and Raids, and completing research tasks.
-
You can also improve your Pokémon by evolving them or powering them up. Evolving a Pokémon will change its appearance and increase its stats, but it will also require a certain amount of candies that are specific to each Pokémon species. You can get candies by catching or transferring Pokémon of the same species, or by walking with a Pokémon as your buddy. Powering up a Pokémon will increase its CP (combat power) and HP (hit points), but it will also require candies and stardust. Stardust is a resource that you can get by catching any Pokémon, hatching eggs, or participating in battles.
-
How to join a team and battle in Gyms and Raids
-
When you reach level 5 as a trainer, you will be able to join one of three teams: Instinct (yellow), Mystic (blue), or Valor (red). Your team will determine which Gyms you can control and which players you can cooperate with. Gyms are locations where you can battle other trainers' Pokémon and earn rewards such as coins and items. To battle in a Gym, you need to tap on it on the map and then select a team of six Pokémon to fight with. You can also leave one of your Pokémon in a friendly Gym to defend it from enemy attacks.
-
Raids are special events where you can team up with other players to fight against a powerful Pokémon called a Raid Boss. To participate in a Raid, you need to have a Raid Pass, which you can get for free once per day by spinning a Gym's photo disc. You can also buy Premium Raid Passes or Remote Raid Passes with coins in the shop. Raids have different levels of difficulty, ranging from one star to five stars. The higher the level, the stronger the Raid Boss and the more players you need to defeat it. If you manage to beat the Raid Boss within the time limit, you will have a chance to catch it and earn special rewards. The game also runs regular events: limited-time occasions that can bring increased Pokémon spawns, special items or features, and exclusive research tasks or raids. Some examples of events are Halloween, Christmas, Lunar New Year, Earth Day, Pokémon GO Fest, etc.
-
Challenges are goals that the game sets for the players to achieve within a certain time frame. They usually involve catching, battling, or hatching a certain number or type of Pokémon, or completing a certain number of research tasks or raids. If the players succeed in meeting the challenge, they are rewarded with global bonuses such as increased spawns, reduced hatch distance, or extended lure duration. Some examples of challenges are Global Catch Challenge, Legendary Week, Safari Zone, etc.
-
You can find out about the current and upcoming events and challenges by checking the in-game news section, the official website [5](https://pokemongolive.com/en/events/), or the official social media accounts [6](https://twitter.com/PokemonGoApp) [7](https://www.facebook.com/PokemonGO/) [8](https://www.instagram.com/pokemongoapp/).
-
How to stay safe and respectful while playing Pokémon GO
-
Pokémon GO is a game that encourages you to explore the real world and interact with other players. However, it is also important to be aware of your surroundings and respect the rules and regulations of the places you visit. Here are some tips to stay safe and respectful while playing Pokémon GO:
-
-
Do not trespass on private property or restricted areas.
-
Do not play while driving or crossing the street.
-
Do not enter dangerous or hazardous areas.
-
Do not play in inappropriate or disrespectful places such as cemeteries, memorials, or places of worship.
-
Do not litter or damage the environment.
-
Do not disturb or harass other people or animals.
-
Do not cheat or use third-party software or devices.
-
Do not share your personal information or account details with anyone.
-
Do follow the local laws and regulations regarding COVID-19 and social distancing.
-
Do have fun and be friendly with other players and members of the community.
-
-
Conclusion
-
Pokémon GO is a game that can bring you joy, adventure, and excitement. It can also help you stay fit, make friends, and learn more about the world. Whether you are a casual player or a hardcore fan, there is something for everyone in Pokémon GO. So what are you waiting for? Grab your iPhone or iPad, download Pokémon GO from the App Store, and start catching them all!
-
Frequently Asked Questions
-
-
How do I get more Poké Balls and other items?
-
You can get more Poké Balls and other items by spinning PokéStops, opening gifts from your friends, completing research tasks, participating in raids, leveling up, or buying them with coins in the shop.
-
How do I get more coins?
-
You can get more coins by leaving your Pokémon in Gyms and earning up to 50 coins per day, or by buying them with real money in the shop.
-
How do I get more stardust?
-
You can get more stardust by catching any Pokémon, hatching eggs, participating in battles, feeding berries to Pokémon in Gyms, using star pieces, or completing research tasks.
-
How do I get more candies?
-
You can get more candies by catching or transferring Pokémon of the same species, walking with a Pokémon as your buddy, using pinap berries, trading Pokémon with other players, or using rare candies.
-
How do I get more XP?
-
You can get more XP by catching Pokémon, spinning PokéStops, hatching eggs, evolving Pokémon, battling in Gyms and Raids, completing research tasks, using lucky eggs, or adding new Pokédex entries.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/91m Bin Sh 1 Apk Not Found.md b/spaces/1phancelerku/anime-remove-background/91m Bin Sh 1 Apk Not Found.md
deleted file mode 100644
index cf92667df27a4b2d2b592924bce89f0b8c55c99d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/91m Bin Sh 1 Apk Not Found.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
How to Fix the Error "91m/bin/sh 1 apk not found" When Building a Docker Image
-
If you are using Docker to create and run applications using containers, you may encounter an error like this when building a Docker image:
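The message itself typically looks something like the line below; the "91m" fragment in the title is just a leftover ANSI color code from the terminal output:
/bin/sh: 1: apk: not found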
This error can be frustrating and confusing, especially if you are new to Docker or Linux. In this article, we will explain what this error means, what causes it, and how to fix it.
-
What is Docker and Why Use It?
-
Docker is a tool that allows you to create, run, and share applications using containers. Containers are isolated environments that contain everything an application needs to run, such as code, libraries, dependencies, and configuration. Containers are portable, meaning they can run on any machine that has Docker installed, regardless of the operating system or hardware. Containers are also scalable, meaning they can be easily replicated, distributed, and managed across multiple machines. Containers are also efficient, meaning they use less resources than traditional virtual machines.
-
Docker offers many benefits for developers and users of applications, such as:
-
-
Portability: You can build an application once and run it anywhere with Docker. You don't have to worry about compatibility issues or dependencies.
-
Scalability: You can scale up or down your application by adding or removing containers as needed. You can also use orchestration tools like Kubernetes or Swarm to automate and manage your containers across multiple machines.
-
Isolation: You can isolate your application from other applications and from the host machine. This improves security and reliability, as well as simplifies testing and debugging.
-
Efficiency: You can use less resources than traditional virtual machines with Docker. Containers share the same kernel and only use the resources they need.
-
-
What is the Error "91m/bin/sh 1 apk not found" and What Causes It?
-
The error "91m/bin/sh 1 apk not found" occurs when you try to use the apk command in a Dockerfile that is based on a non-Alpine Linux distribution. The apk command is the package manager for Alpine Linux, which is a lightweight and secure Linux distribution that is often used for Docker images. The apk command allows you to install, update, and remove packages from Alpine repositories.
-
The error means that the apk command is not found in the base image that you are using for your Dockerfile. The base image is the image that you specify in the FROM instruction of your Dockerfile. The base image provides the foundation for your Docker image and defines the operating system and the packages that are available. For example, if your Dockerfile looks like this:
This means that you are using the python:3.8 image as your base image, which is based on Debian Buster, a Debian-based Linux distribution. Debian Buster does not support the apk command, so when you try to run it, you get the error "91m/bin/sh 1 apk not found".
-
How to Fix the Error "91m/bin/sh 1 apk not found" When Building a Docker Image?
-
There are two main ways to fix the error "91m/bin/sh 1 apk not found" when building a Docker image: changing the base image or changing the package manager.
-
Changing the base image
-
You can change the base image to an Alpine Linux image that supports the apk command. Alpine Linux is a lightweight and secure Linux distribution that is often used for Docker images. Alpine Linux images are smaller and faster than most other Linux images, which can improve your Docker performance and reduce your storage and bandwidth costs.
-
You can find the official Alpine Linux images on Docker Hub or use the python:3.8-alpine image as an example. The python:3.8-alpine image is based on Alpine Linux 3.13 and includes Python 3.8 and pip. To use this image as your base image, you can change your Dockerfile to look like this:
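Here is a minimal sketch with the same placeholder packages and entry point as before:
FROM python:3.8-alpine
# apk works here because the base image is Alpine Linux
RUN apk add --no-cache gcc musl-dev
COPY . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]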
This should fix the error "91m/bin/sh 1 apk not found" and allow you to build your Docker image successfully.
-
Changing the package manager
-
You can also change the package manager to apt or apt-get, which are supported by most Debian-based Linux distributions. apt and apt-get are tools that allow you to install, update, and remove packages from Debian repositories.
-
You can find the official Debian-based images on Docker Hub or use the python:3.8-slim image as an example. The python:3.8-slim image is based on Debian Buster and includes Python 3.8 and pip. To use this image as your base image, you can change your Dockerfile to look like this:
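Here is a minimal sketch of the apt-get variant, again with placeholder packages and entry point; note that libc-dev replaces musl-dev:
FROM python:3.8-slim
# apt-get is the Debian package manager; removing the package lists keeps the image small
RUN apt-get update && apt-get install -y --no-install-recommends gcc libc-dev && rm -rf /var/lib/apt/lists/*
COPY . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]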
Note that you may also need to change the package names to match the ones available in the Debian repositories. For example, musl-dev is not available in Debian, so you need to use libc-dev instead.
-
This should also fix the error "91m/bin/sh 1 apk not found" and allow you to build your Docker image successfully.
-
Conclusion
-
In this article, we have explained what the error "91m/bin/sh 1 apk not found" means, what causes it, and how to fix it when building a Docker image. We have shown two main ways to fix the error: changing the base image or changing the package manager. We have also provided some examples of Dockerfiles that use different base images and package managers.
-
We hope that this article has helped you solve your problem and improve your Docker experience. If you have any questions or feedback, please feel free to leave a comment below.
-
Frequently Asked Questions (FAQs)
-
What is a Dockerfile?
-
A Dockerfile is a text file that contains instructions for building a Docker image. A Docker image is a snapshot of an application and its dependencies that can be run as a container using Docker.
-
What is a container?
-
A container is an isolated environment that contains everything an application needs to run, such as code, libraries, dependencies, and configuration. Containers are portable, scalable, isolated, and efficient.
-
What is Alpine Linux?
Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox. Alpine Linux is designed to be small, simple, and secure, making it ideal for Docker images. Alpine Linux uses a technique called position-independent executables to randomize the location of programs in memory, which makes it difficult for an attacker to exploit quirks in the memory and take over a machine. The distro is also minimalist in its configuration, using OpenRC as the init system and apk as the package manager. Alpine Linux has a reputation for being fast, stable, and reliable.
What is Debian?
-
Debian is a free and open-source Linux distribution that is known for its stability, security, and versatility. Debian is one of the oldest and most popular Linux distributions, with a large and active community of developers and users. Debian supports a wide range of architectures, devices, and software packages, making it suitable for various purposes and environments. Debian uses a technique called debconf to configure the system according to the user's preferences, which makes it easy to customize and maintain. The distro uses dpkg as the low-level package manager and apt or apt-get as the high-level package manager. Debian has a reputation for being robust, reliable, and flexible.
-
How do I choose the best base image for my Dockerfile?
-
There is no definitive answer to this question, as different base images may have different advantages and disadvantages depending on your needs and preferences. However, some general factors that you may want to consider when choosing a base image are:
-
-
Size: Smaller images are faster to build, pull, push, and run, and use less storage and bandwidth. However, smaller images may also have fewer features and packages than larger images.
-
Security: More secure images are less vulnerable to attacks and breaches, and may have better updates and patches. However, more secure images may also have more restrictions and limitations than less secure images.
-
Compatibility: More compatible images are easier to work with and integrate with other tools and platforms. However, more compatible images may also have more dependencies and conflicts than less compatible images.
-
Performance: Faster and more efficient images are better for your application's speed and resource consumption. However, faster and more efficient images may also have lower quality or stability than slower and less efficient images.
-
Maintainability: Easier to maintain images are simpler to update, modify, and troubleshoot. However, easier to maintain images may also have less functionality or customization than harder to maintain images.
-
-
You may also want to check the documentation, reviews, ratings, and statistics of the base images that you are considering to get more information and feedback from other users.
-
How do I test if my Docker image works correctly?
-
One way to test if your Docker image works correctly is to run it as a container using the docker run command. The docker run command allows you to create and start a container from an image, optionally with various options and arguments. For example, if you want to run your image in interactive mode with a terminal attached, you can use this command:
-docker run -it --rm your_image_name
-
This will create a container from your image, attach a terminal to it, and remove it when you exit. You can then test your application inside the container by running commands or scripts as you would normally do.
-
How do I share my Docker image with others?
-
One way to share your Docker image with others is to push it to a registry such as Docker Hub or GitHub Packages. A registry is a service that stores and distributes Docker images. You can create an account on a registry service, create a repository for your image, tag your image with the repository name, and push your image to the repository using the docker push command. For example, if you want to push your image to Docker Hub, you can use these commands:
-docker tag your_image_name your_username/your_repository_name
-docker push your_username/your_repository_name
-
This will upload your image to your repository on Docker Hub. You can then share the repository URL with others who can pull your image using the docker pull command.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Bleach VS Naruto Ultimate Edition and Experience the Ultimate Anime Crossover Game on PC and Android.md b/spaces/1phancelerku/anime-remove-background/Download Bleach VS Naruto Ultimate Edition and Experience the Ultimate Anime Crossover Game on PC and Android.md
deleted file mode 100644
index 65ce5b37c7320b717d76219b709b457b1618de60..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Bleach VS Naruto Ultimate Edition and Experience the Ultimate Anime Crossover Game on PC and Android.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Download Bleach vs Naruto Ultimate Edition: A Guide for Anime Fans
-
If you are a fan of anime and fighting games, you might have heard of Bleach vs Naruto, a free online 2D flash game developed by the Chinese company 5Dplay. It is a crossover anime fighting game featuring characters from both Bleach and Naruto Shippuden with guest characters from other series such as Rurouni Kenshin, One Piece, Fairy Tail, and more. But did you know that there is a special modification for this game that adds more characters, stages, modes, and features? It is called Bleach vs Naruto Ultimate Edition and it is available for both PC and Android devices. In this article, we will tell you everything you need to know about this amazing mod-pack and how to download it.
-
What is Bleach vs Naruto Ultimate Edition?
-
Bleach vs Naruto Ultimate Edition is a special modification for Bleach vs Naruto game made by Yuxi in collaboration with original BVN author Jian, 5Dplay. It is not an official update or sequel to the original game, but rather a fan-made project that enhances the game with various new elements. Some of the features of this mod-pack are:
It has more than 370 characters and 89 assists in the PC version, and 308 characters and 76 assists in the Android version. The characters come from various anime series such as Bleach, Naruto, One Piece, Dragon Ball, Hunter x Hunter, My Hero Academia, Demon Slayer, Attack on Titan, and more. You can also find some original characters created by the modders.
-
It has 102 stages from different anime worlds and locations. You can fight in Soul Society, Konoha Village, Marineford, Namek, Dark Continent, UA High School, Mugen Train, Shiganshina District, and more.
-
It has two exclusive modes from the latest bvn version 3.6: Watch Mode and Musou Mode. Watch Mode allows you to watch the computer-controlled characters fight each other in various scenarios. Musou Mode allows you to play as one character against multiple enemies in a Dynasty Warriors style.
-
-
Available on PC and Android
-
-
The mod-pack is compatible with both PC and Android devices. You can download it from various links provided by the author or other sources. The PC version is about 3.42 GB and the Android version is about 1.99 GB.
-
The mod-pack is also compatible with Android 12, the latest version of the operating system. You can enjoy the game on your new devices without any issues.
-
The mod-pack also has a complete remake of the user interface, new game effects, new game sounds, general game optimization, and many other improvements.
-
-
Why should you download Bleach vs Naruto Ultimate Edition?
-
If you are still not convinced that this mod-pack is worth downloading, here are some reasons why you should give it a try:
-
Enjoy the crossover anime fighting game featuring characters from Bleach, Naruto, and other series
-
-
If you love anime and fighting games, this mod-pack is perfect for you. You can play as your favorite characters from different anime series and see how they match up against each other. You can also create your own team of characters and fight against other teams in various modes.
If you run into problems downloading or installing the game, check the following: Make sure you have enough storage space on your device and a stable internet connection.
-
Make sure you have the correct password to extract the files.
-
Make sure you have the latest version of the game and update it if necessary.
-
Make sure you have the compatible device and operating system for the game.
-
Make sure you have the proper software or app to run the game such as WinRAR, 7-Zip, or ZArchiver for extracting files, and Flash Player, Adobe AIR, or GameLoop for running the game.
-
If you have any questions or feedback about the game, you can contact the author Yuxi on his YouTube channel or his Discord server. You can also join the Bleach vs Naruto community on Facebook, Reddit, or other platforms to interact with other players and fans.
-
-
Conclusion
-
Bleach vs Naruto Ultimate Edition is a special modification for Bleach vs Naruto game that adds more characters, stages, modes, and features to the original game. It is a crossover anime fighting game featuring characters from Bleach, Naruto, and other series. It is available for both PC and Android devices and it is compatible with Android 12. It is a fan-made project that is not affiliated with the official game or the anime series. It is a free online game that you can download from various links provided by the author or other sources. You will need a password to extract the files and a keyboard or a controller to play the game. You can customize your own team and fight against other players online or offline. You can also enjoy the exclusive features from the latest bvn version 3.6 such as Watch Mode and Musou Mode. If you are a fan of anime and fighting games, you should definitely try this mod-pack and have fun.
-
FAQs
-
What is the difference between Bleach vs Naruto and Bleach vs Naruto Ultimate Edition?
-
Bleach vs Naruto is the original game developed by 5Dplay that features characters from Bleach and Naruto series. Bleach vs Naruto Ultimate Edition is a special modification for Bleach vs Naruto game made by Yuxi that adds more characters, stages, modes, and features from other anime series such as One Piece, Dragon Ball, Demon Slayer, Attack on Titan, My Hero Academia, and more.
-
How many characters are there in Bleach vs Naruto Ultimate Edition?
-
There are more than 370 characters and 89 assists on PC version, and 308 characters and 76 assists on Android version. The characters are from various anime series such as Bleach, Naruto, One Piece, Dragon Ball, Hunter x Hunter, My Hero Academia, Demon Slayer, Attack on Titan, and more. You can also find some original characters created by the modders.
-
download bleach vs naruto ultimate edition pc
-download bleach vs naruto ultimate edition android
-download bleach vs naruto ultimate edition 370+ characters
-download bleach vs naruto ultimate edition mediafire
-download bleach vs naruto ultimate edition google drive
-download bleach vs naruto ultimate edition mega
-download bleach vs naruto ultimate edition mod apk
-download bleach vs naruto ultimate edition offline
-download bleach vs naruto ultimate edition latest version
-download bleach vs naruto ultimate edition youtube
-download bleach vs naruto ultimate edition kizuma gaming
-download bleach vs naruto ultimate edition yuxi
-download bleach vs naruto ultimate edition 5dplay
-download bleach vs naruto ultimate edition watch mode
-download bleach vs naruto ultimate edition musou mode
-download bleach vs naruto ultimate edition password
-download bleach vs naruto ultimate edition tutorial
-download bleach vs naruto ultimate edition free
-download bleach vs naruto ultimate edition full game
-download bleach vs naruto ultimate edition zip file
-download bleach vs naruto ultimate edition for windows 10
-download bleach vs naruto ultimate edition for mac
-download bleach vs naruto ultimate edition for ios
-download bleach vs naruto ultimate edition for linux
-download bleach vs naruto ultimate edition for chromebook
-download bleach vs naruto ultimate edition no ads
-download bleach vs naruto ultimate edition no virus
-download bleach vs naruto ultimate edition no survey
-download bleach vs naruto ultimate edition no root
-download bleach vs naruto ultimate edition no emulator
-download bleach vs naruto ultimate edition with all characters unlocked
-download bleach vs naruto ultimate edition with new maps and assists
-download bleach vs naruto ultimate edition with new effects and sounds
-download bleach vs naruto ultimate edition with new user interface and loading screen
-download bleach vs naruto ultimate edition with compatibility with android 12
-how to download bleach vs naruto ultimate edition on pc
-how to download bleach vs naruto ultimate edition on android
-how to download bleach vs naruto ultimate edition on phone
-how to download bleach vs naruto ultimate edition on tablet
-how to download bleach vs naruto ultimate edition on laptop
-where to download bleach vs naruto ultimate edition safely and securely
-where to download bleach vs naruto ultimate edition from original author's link
-where to find the password for downloading bleach vs naruto ultimate edition
-where to get the latest updates for downloading bleach vs naruto ultimate edition
-where to report bugs or errors for downloading bleach vs naruto ultimate edition
-why you should download bleach vs naruto ultimate edition game
-why you should not miss the opportunity to play the best anime crossover game ever
-why you should join the discord community of the fans of the game
-why you should support the creators of the game by donating or subscribing
-
How can I play Bleach vs Naruto Ultimate Edition online with other players?
-
You can play online with other players using the multiplayer mode. You can join or create a room with up to four players and choose the game mode, stage, time limit, and other settings. You can also chat with other players using the chat box.
-
What are Watch Mode and Musou Mode in Bleach vs Naruto Ultimate Edition?
-
Watch Mode and Musou Mode are two exclusive modes from the latest bvn version 3.6. Watch Mode allows you to watch the computer-controlled characters fight each other in various scenarios. Musou Mode allows you to play as one character against multiple enemies in a Dynasty Warriors style.
-
Where can I find more information about Bleach vs Naruto Ultimate Edition?
-
You can find more information about Bleach vs Naruto Ultimate Edition on the author's YouTube channel or his Discord server. You can also join the Bleach vs Naruto community on Facebook, Reddit, or other platforms to interact with other players and fans.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2020 Mod APK Now and Get Unlimited Coins for Free.md b/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2020 Mod APK Now and Get Unlimited Coins for Free.md
deleted file mode 100644
index 5bba4dd8fd39a05be0668f1c80f239cc639b240d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2020 Mod APK Now and Get Unlimited Coins for Free.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
Download Dream League Soccer 2020 Mod APK Unlimited Coins
-
If you are a fan of soccer games, you must have heard of Dream League Soccer 2020, one of the most popular and realistic soccer games on Android. But what if you want to enjoy the game without any limitations or restrictions? Well, you can do that by downloading Dream League Soccer 2020 mod apk unlimited coins. In this article, we will tell you what is Dream League Soccer 2020, why you should download the mod apk version, and how to do it easily and safely.
-
What is Dream League Soccer 2020?
-
Dream League Soccer 2020 is a soccer simulation game developed by First Touch Games, a studio that specializes in creating high-quality soccer games for mobile devices. The game lets you create your own dream team, compete in various leagues and tournaments, and customize your stadium and kits. You can also play online with other players from around the world, or offline with friends using local multiplayer mode.
-
download dream league soccer 2020 mod apk unlimited coins
Dream League Soccer 2020 has many features that make it one of the best soccer games on Android. Here are some of them:
-
Build your own team
-
You can choose from over 4,000 licensed players from different clubs and countries, and create your own squad with your favorite stars. You can also train your players to improve their skills and abilities, and manage their transfers and contracts.
-
Play in different modes
-
You can play in various modes such as Career Mode, where you start from the bottom and work your way up to the top division; Season Mode, where you compete in a single season with different objectives; Online Mode, where you challenge other players from around the world; and Friendly Mode, where you play against your friends using local multiplayer.
-
Customize your stadium and kits
-
You can design your own stadium and upgrade it with different facilities and features. You can also customize your kits and logos with various colors and styles.
-
Enjoy realistic graphics and sound effects
-
The game has stunning graphics and animations that make the gameplay more immersive and realistic. You can also enjoy the authentic sound effects and commentary from professional commentators.
-
Why download Dream League Soccer 2020 mod apk unlimited coins?
-
While Dream League Soccer 2020 is a free game, it also has some in-app purchases that require real money. For example, you need coins to buy players, items, upgrades, and more. You can earn coins by playing the game, but it can be slow and tedious. That's why many players prefer to download Dream League Soccer 2020 mod apk unlimited coins, which gives them access to unlimited resources and features. Here are some benefits of downloading the mod apk version:
-
download dls 2020 mod apk unlimited money and gold
-how to get unlimited coins in dream league soccer 2020 mod
-dream league soccer 2020 hack mod apk download free
-dls 2020 mod apk unlimited levels and characters
-download dream league soccer 2020 mod apk latest version
-dream league soccer 2020 mod apk unlimited gems and coins
-dls 2020 mod apk offline with unlimited money
-download dream league soccer 2020 mod apk for android
-dream league soccer 2020 cheats mod apk unlimited coins
-dls 2020 mod apk online with unlimited players
-download dream league soccer 2020 mod apk obb file
-dream league soccer 2020 mod apk unlimited kits and logos
-dls 2020 mod apk unlimited stamina and energy
-download dream league soccer 2020 mod apk revdl
-dream league soccer 2020 mod apk unlimited skills and abilities
-dls 2020 mod apk unlimited transfers and signings
-download dream league soccer 2020 mod apk rexdl
-dream league soccer 2020 mod apk unlimited trophies and medals
-dls 2020 mod apk unlimited coins no root
-download dream league soccer 2020 mod apk hack version
-dream league soccer 2020 mod apk unlimited diamonds and coins
-dls 2020 mod apk unlimited coins and keys
-download dream league soccer 2020 mod apk data file host
-dream league soccer 2020 mod apk unlimited everything unlocked
-dls 2020 mod apk unlimited coins and tickets
-download dream league soccer 2020 mod apk for ios
-dream league soccer 2020 mod apk unlimited coins and stars
-dls 2020 mod apk unlimited coins and vip points
-download dream league soccer 2020 mod apk for pc
-dream league soccer 2020 mod apk unlimited coins and all players unlocked
-
Get unlimited coins and money
-
With the mod apk version, you don't have to worry about running out of coins or money. You can use them to buy anything you want in the game, such as players, items, upgrades, etc. You can also use them to skip ads and speed up the loading time.
-
Unlock all players and items
-
With the mod apk version, you don't have to wait for unlocking players or items. You can get them all for free without any restrictions or limitations. You can also upgrade them to the maximum level and make them more powerful and efficient.
-
Remove ads and enjoy faster loading
-
With the mod apk version, you don't have to deal with annoying ads that interrupt your gameplay and waste your time. You can also enjoy faster loading and smoother performance without any lags or glitches.
-
How to download Dream League Soccer 2020 mod apk unlimited coins?
-
Downloading Dream League Soccer 2020 mod apk unlimited coins is not difficult, but you need to follow some steps carefully to avoid any errors or problems. Here are the steps you need to follow:
-
Step 1: Download the mod apk file from a trusted source
-
The first thing you need to do is to find a reliable and safe website that offers the mod apk file for Dream League Soccer 2020. You can search on Google or use the link we have provided below. Make sure you download the latest version of the mod apk file that is compatible with your device.
-
Step 2: Enable unknown sources on your device settings
-
The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on. You may see a warning message, but don't worry, it's safe.
-
Step 3: Install the mod apk file and launch the game
-
The final thing you need to do is to install the mod apk file and launch the game. To do this, go to your file manager, find the downloaded mod apk file, and tap on it. You may see a confirmation message, just tap on install and wait for a few seconds. Once the installation is done, you can open the game and enjoy it with unlimited coins and features.
-
Conclusion
-
Dream League Soccer 2020 is a great game for soccer lovers, but it can be even better with the mod apk version that gives you unlimited coins and features. You can download Dream League Soccer 2020 mod apk unlimited coins easily and safely by following the steps we have explained above. So what are you waiting for? Download it now and have fun!
-
FAQs
-
Here are some frequently asked questions about Dream League Soccer 2020 mod apk unlimited coins:
-
-
Is Dream League Soccer 2020 mod apk unlimited coins safe?
-
Yes, it is safe as long as you download it from a trusted source and enable unknown sources on your device settings. However, we recommend that you use it at your own risk and discretion, as we are not responsible for any damages or issues that may occur.
-
Is Dream League Soccer 2020 mod apk unlimited coins legal?
-
No, it is not legal, as it violates the terms and conditions of the original game. It may also result in a ban or suspension from the online mode or other features of the game. Therefore, we advise that you use it only for personal and educational purposes, and not for commercial or malicious purposes.
-
Does Dream League Soccer 2020 mod apk unlimited coins require root access?
-
No, it does not require root access, as it works on both rooted and non-rooted devices. However, some features may work better on rooted devices than on non-rooted devices.
-
Does Dream League Soccer 2020 mod apk unlimited coins work offline?
-
Yes, it works offline, as you can play the game without an internet connection. However, some features may require an internet connection, such as online mode, updates, etc.
-
Can I update Dream League Soccer 2020 mod apk unlimited coins?
-
No, you cannot update Dream League Soccer 2020 mod apk unlimited coins, as it may cause errors or problems with the game. You need to uninstall the mod apk version and install the original version from the Google Play Store if you want to update the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/52Hz/SUNet_AWGN_denoising/main_test_SUNet.py b/spaces/52Hz/SUNet_AWGN_denoising/main_test_SUNet.py
deleted file mode 100644
index b7e667157f3fd99e25037a8a7a29ebc7cfd0e385..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SUNet_AWGN_denoising/main_test_SUNet.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import argparse
-import cv2
-import glob
-import numpy as np
-from collections import OrderedDict
-from skimage import img_as_ubyte
-import os
-import shutil
-import torch
-import requests
-from PIL import Image
-import math
-import yaml
-import torchvision.transforms.functional as TF
-import torch.nn.functional as F
-from natsort import natsorted
-from model.SUNet import SUNet_model
-
-with open('training.yaml', 'r') as config:
- opt = yaml.safe_load(config)
-
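-# Delete every file, symlink, and sub-directory inside the given folder (used to empty the input folder after processing).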
-def clean_folder(folder):
- for filename in os.listdir(folder):
- file_path = os.path.join(folder, filename)
- try:
- if os.path.isfile(file_path) or os.path.islink(file_path):
- os.unlink(file_path)
- elif os.path.isdir(file_path):
- shutil.rmtree(file_path)
- except Exception as e:
- print('Failed to delete %s. Reason: %s' % (file_path, e))
-
-def main():
- parser = argparse.ArgumentParser(description='Demo Image Restoration')
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
- parser.add_argument('--window_size', default=8, type=int, help='window size')
- parser.add_argument('--size', default=256, type=int, help='model image patch size')
- parser.add_argument('--stride', default=128, type=int, help='reconstruction stride')
- parser.add_argument('--result_dir', default='result/', type=str, help='Directory for results')
- parser.add_argument('--weights',
- default='experiments/pretrained_models/AWGN_denoising_SUNet.pth', type=str,
- help='Path to weights')
-
- args = parser.parse_args()
-
- inp_dir = args.input_dir
- out_dir = args.result_dir
-
- os.makedirs(out_dir, exist_ok=True)
-
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
-
- if len(files) == 0:
- raise Exception(f"No files found at {inp_dir}")
-
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- # Load corresponding models architecture and weights
- model = SUNet_model(opt)
- model = model.to(device)
- model.eval()
- load_checkpoint(model, args.weights)
- stride = args.stride
- model_img = args.size
-
- for file_ in files:
- img = Image.open(file_).convert('RGB')
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
- with torch.no_grad():
- # pad to multiple of 256
- square_input_, mask, max_wh = overlapped_square(input_.to(device), kernel=model_img, stride=stride)
- output_patch = torch.zeros(square_input_[0].shape).type_as(square_input_[0])
- for i, data in enumerate(square_input_):
- restored = model(square_input_[i])
- if i == 0:
- output_patch += restored
- else:
- output_patch = torch.cat([output_patch, restored], dim=0)
-
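-            # Reassemble the per-patch outputs with F.fold (overlap-add), then divide by an
-            # identically folded all-ones weight mask so overlapping regions are averaged.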
- B, C, PH, PW = output_patch.shape
-            weight = torch.ones(B, C, PH, PW).type_as(output_patch)  # weight mask for overlap averaging
-
- patch = output_patch.contiguous().view(B, C, -1, model_img*model_img)
- patch = patch.permute(2, 1, 3, 0) # B, C, K*K, #patches
- patch = patch.contiguous().view(1, C*model_img*model_img, -1)
-
- weight_mask = weight.contiguous().view(B, C, -1, model_img * model_img)
- weight_mask = weight_mask.permute(2, 1, 3, 0) # B, C, K*K, #patches
- weight_mask = weight_mask.contiguous().view(1, C * model_img * model_img, -1)
-
- restored = F.fold(patch, output_size=(max_wh, max_wh), kernel_size=model_img, stride=stride)
- we_mk = F.fold(weight_mask, output_size=(max_wh, max_wh), kernel_size=model_img, stride=stride)
- restored /= we_mk
-
- restored = torch.masked_select(restored, mask.bool()).reshape(input_.shape)
- restored = torch.clamp(restored, 0, 1)
-
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
- restored = img_as_ubyte(restored[0])
-
- f = os.path.splitext(os.path.split(file_)[-1])[0]
- save_img((os.path.join(out_dir, f + '.png')), restored)
-    # remove the processed inputs once all files have been restored
-    clean_folder(inp_dir)
-
-def save_img(filepath, img):
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
-
-
-def load_checkpoint(model, weights):
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
- try:
- model.load_state_dict(checkpoint["state_dict"])
- except:
- state_dict = checkpoint["state_dict"]
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- name = k[7:] # remove `module.`
- new_state_dict[name] = v
- model.load_state_dict(new_state_dict)
-
-def overlapped_square(timg, kernel=256, stride=128):
- patch_images = []
- b, c, h, w = timg.size()
-    # pad to a square canvas whose side is the next multiple of `kernel`
-    # (e.g. a 321x481 input is padded to 512x512)
- X = int(math.ceil(max(h, w) / float(kernel)) * kernel)
- img = torch.zeros(1, 3, X, X).type_as(timg) # 3, h, w
- mask = torch.zeros(1, 1, X, X).type_as(timg)
-
- img[:, :, ((X - h) // 2):((X - h) // 2 + h), ((X - w) // 2):((X - w) // 2 + w)] = timg
- mask[:, :, ((X - h) // 2):((X - h) // 2 + h), ((X - w) // 2):((X - w) // 2 + w)].fill_(1.0)
-
- patch = img.unfold(3, kernel, stride).unfold(2, kernel, stride)
- patch = patch.contiguous().view(b, c, -1, kernel, kernel) # B, C, #patches, K, K
- patch = patch.permute(2, 0, 1, 4, 3) # patches, B, C, K, K
-
- for each in range(len(patch)):
- patch_images.append(patch[each])
-
- return patch_images, mask, X
-
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/7hao/bingo/src/app/loading.css b/spaces/7hao/bingo/src/app/loading.css
deleted file mode 100644
index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/app/loading.css
+++ /dev/null
@@ -1,68 +0,0 @@
-::-webkit-scrollbar {
- width: 10px;
- height: 10px;
- display: none;
-}
-
-::-webkit-scrollbar-button:start:decrement,
-::-webkit-scrollbar-button:end:increment {
- height: 30px;
- background-color: transparent;
-}
-
-::-webkit-scrollbar-track-piece {
- background-color: #3b3b3b;
- -webkit-border-radius: 16px;
-}
-
-::-webkit-scrollbar-thumb:vertical {
- height: 50px;
- background-color: #666;
- border: 1px solid #eee;
- -webkit-border-radius: 6px;
-}
-
-/* loading start */
-.loading-spinner {
- display: flex;
- justify-content: center;
- align-items: center;
- height: 100vh;
- opacity: 1;
- transition: opacity .8s ease-out;
-}
-
-.loading-spinner.hidden {
- opacity: 0;
-}
-
-.loading-spinner>div {
- width: 30px;
- height: 30px;
- background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%);
-
- border-radius: 100%;
- display: inline-block;
- animation: sk-bouncedelay 1.4s infinite ease-in-out both;
-}
-
-.loading-spinner .bounce1 {
- animation-delay: -0.32s;
-}
-
-.loading-spinner .bounce2 {
- animation-delay: -0.16s;
-}
-
-@keyframes sk-bouncedelay {
-
- 0%,
- 80%,
- 100% {
- transform: scale(0);
- }
-
- 40% {
- transform: scale(1.0);
- }
-}
diff --git a/spaces/AIDHD/GrammarCorrector/app.py b/spaces/AIDHD/GrammarCorrector/app.py
deleted file mode 100644
index b25b352931c5dbc31c7254c5b3b191a764138b03..0000000000000000000000000000000000000000
--- a/spaces/AIDHD/GrammarCorrector/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import streamlit as st
-
-
-st.title("Correct Grammar with Transformers 🦄")
-st.write("")
-st.write("Input your text here!")
-
-default_value = "Mike and Anna is skiing"
-sent = st.text_area("Text", default_value, height = 50)
-num_return_sequences = st.sidebar.number_input('Number of Return Sequences', min_value=1, max_value=3, value=1, step=1)
-
-### Run Model
-from transformers import T5ForConditionalGeneration, T5Tokenizer
-import torch
-torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
-tokenizer = T5Tokenizer.from_pretrained('deep-learning-analytics/GrammarCorrector')
-model = T5ForConditionalGeneration.from_pretrained('deep-learning-analytics/GrammarCorrector').to(torch_device)
-
-def correct_grammar(input_text,num_return_sequences=num_return_sequences):
- batch = tokenizer([input_text],truncation=True,padding='max_length',max_length=len(input_text), return_tensors="pt").to(torch_device)
- results = model.generate(**batch,max_length=len(input_text),num_beams=2, num_return_sequences=num_return_sequences, temperature=1.5)
- #answer = tokenizer.batch_decode(results[0], skip_special_tokens=True)
- return results
-
-##Prompts
-results = correct_grammar(sent, num_return_sequences)
-
-generated_sequences = []
-for generated_sequence_idx, generated_sequence in enumerate(results):
- # Decode text
- text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True, skip_special_tokens=True)
- generated_sequences.append(text)
-
-st.write(generated_sequences)
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/logger.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/logger.py
deleted file mode 100644
index ac4634970fae6aacde2b7b808355dbd50c90ce73..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/logger.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import logging
-
-
-def setup_logging(log_file, level, include_host=False):
- if include_host:
- import socket
-
- hostname = socket.gethostname()
- formatter = logging.Formatter(
- f"%(asctime)s | {hostname} | %(levelname)s | %(message)s",
- datefmt="%Y-%m-%d,%H:%M:%S",
- )
- else:
- formatter = logging.Formatter(
- "%(asctime)s | %(levelname)s | %(message)s", datefmt="%Y-%m-%d,%H:%M:%S"
- )
-
- logging.root.setLevel(level)
- loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
- for logger in loggers:
- logger.setLevel(level)
-
- stream_handler = logging.StreamHandler()
- stream_handler.setFormatter(formatter)
- logging.root.addHandler(stream_handler)
-
- if log_file:
- file_handler = logging.FileHandler(filename=log_file)
- file_handler.setFormatter(formatter)
- logging.root.addHandler(file_handler)
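-
-
-# Usage sketch (illustrative):
-#   setup_logging(log_file="train.log", level=logging.INFO, include_host=True)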
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/vocoder/hifigan.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/vocoder/hifigan.py
deleted file mode 100644
index 4fbb5037c6fa48178d51e5e685cf6191b150d214..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/vocoder/hifigan.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import torch.nn.functional as F
-from torch import nn
-
-from text_to_speech.modules.vocoder.hifigan.hifigan import HifiGanGenerator, MultiPeriodDiscriminator, MultiScaleDiscriminator, \
- generator_loss, feature_loss, discriminator_loss
-from text_to_speech.modules.vocoder.hifigan.mel_utils import mel_spectrogram
-from text_to_speech.modules.vocoder.hifigan.stft_loss import MultiResolutionSTFTLoss
-from tasks.vocoder.vocoder_base import VocoderBaseTask
-from text_to_speech.utils.commons.hparams import hparams
-from text_to_speech.utils.nn.model_utils import print_arch
-
-
-class HifiGanTask(VocoderBaseTask):
- def build_model(self):
- self.model_gen = HifiGanGenerator(hparams)
- self.model_disc = nn.ModuleDict()
- self.model_disc['mpd'] = MultiPeriodDiscriminator()
- self.model_disc['msd'] = MultiScaleDiscriminator()
- self.stft_loss = MultiResolutionSTFTLoss()
- print_arch(self.model_gen)
- if hparams['load_ckpt'] != '':
- self.load_ckpt(hparams['load_ckpt'], 'model_gen', 'model_gen', force=True, strict=True)
- self.load_ckpt(hparams['load_ckpt'], 'model_disc', 'model_disc', force=True, strict=True)
- return self.model_gen
-
- def _training_step(self, sample, batch_idx, optimizer_idx):
- mel = sample['mels']
- y = sample['wavs']
- f0 = sample['f0']
- loss_output = {}
- if optimizer_idx == 0:
- #######################
- # Generator #
- #######################
- y_ = self.model_gen(mel, f0)
- y_mel = mel_spectrogram(y.squeeze(1), hparams).transpose(1, 2)
- y_hat_mel = mel_spectrogram(y_.squeeze(1), hparams).transpose(1, 2)
- loss_output['mel'] = F.l1_loss(y_hat_mel, y_mel) * hparams['lambda_mel']
- _, y_p_hat_g, fmap_f_r, fmap_f_g = self.model_disc['mpd'](y, y_, mel)
- _, y_s_hat_g, fmap_s_r, fmap_s_g = self.model_disc['msd'](y, y_, mel)
- loss_output['a_p'] = generator_loss(y_p_hat_g) * hparams['lambda_adv']
- loss_output['a_s'] = generator_loss(y_s_hat_g) * hparams['lambda_adv']
- if hparams['use_fm_loss']:
- loss_output['fm_f'] = feature_loss(fmap_f_r, fmap_f_g)
- loss_output['fm_s'] = feature_loss(fmap_s_r, fmap_s_g)
- if hparams['use_ms_stft']:
- loss_output['sc'], loss_output['mag'] = self.stft_loss(y.squeeze(1), y_.squeeze(1))
- self.y_ = y_.detach()
- self.y_mel = y_mel.detach()
- self.y_hat_mel = y_hat_mel.detach()
- else:
- #######################
- # Discriminator #
- #######################
- y_ = self.y_
- # MPD
- y_p_hat_r, y_p_hat_g, _, _ = self.model_disc['mpd'](y, y_.detach(), mel)
- loss_output['r_p'], loss_output['f_p'] = discriminator_loss(y_p_hat_r, y_p_hat_g)
- # MSD
- y_s_hat_r, y_s_hat_g, _, _ = self.model_disc['msd'](y, y_.detach(), mel)
- loss_output['r_s'], loss_output['f_s'] = discriminator_loss(y_s_hat_r, y_s_hat_g)
- total_loss = sum(loss_output.values())
- return total_loss, loss_output
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/nn/seq_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/nn/seq_utils.py
deleted file mode 100644
index 695a478212bc0384e59ce0e08d0f06be01eca370..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/nn/seq_utils.py
+++ /dev/null
@@ -1,311 +0,0 @@
-from collections import defaultdict
-import torch
-import torch.nn.functional as F
-
-
-def make_positions(tensor, padding_idx):
- """Replace non-padding symbols with their position numbers.
-
- Position numbers begin at padding_idx+1. Padding symbols are ignored.
- """
- # The series of casts and type-conversions here are carefully
- # balanced to both work with ONNX export and XLA. In particular XLA
- # prefers ints, cumsum defaults to output longs, and ONNX doesn't know
- # how to handle the dtype kwarg in cumsum.
- mask = tensor.ne(padding_idx).int()
- return (
- torch.cumsum(mask, dim=1).type_as(mask) * mask
- ).long() + padding_idx
-
-
-def softmax(x, dim):
- return F.softmax(x, dim=dim, dtype=torch.float32)
-
-
-def sequence_mask(lengths, maxlen, dtype=torch.bool):
- if maxlen is None:
- maxlen = lengths.max()
- mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t()
-    mask = mask.type(dtype)
-    return mask
-
-
-def weights_nonzero_speech(target):
- # target : B x T x mel
- # Assign weight 1.0 to all labels except for padding (id=0).
- dim = target.size(-1)
- return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim)
-
-
-INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0)
-
-
-def _get_full_incremental_state_key(module_instance, key):
- module_name = module_instance.__class__.__name__
-
- # assign a unique ID to each module instance, so that incremental state is
- # not shared across module instances
- if not hasattr(module_instance, '_instance_id'):
- INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1
- module_instance._instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name]
-
- return '{}.{}.{}'.format(module_name, module_instance._instance_id, key)
-
-
-def get_incremental_state(module, incremental_state, key):
- """Helper for getting incremental state for an nn.Module."""
- full_key = _get_full_incremental_state_key(module, key)
- if incremental_state is None or full_key not in incremental_state:
- return None
- return incremental_state[full_key]
-
-
-def set_incremental_state(module, incremental_state, key, value):
- """Helper for setting incremental state for an nn.Module."""
- if incremental_state is not None:
- full_key = _get_full_incremental_state_key(module, key)
- incremental_state[full_key] = value
-
-
-def fill_with_neg_inf(t):
- """FP16-compatible function that fills a tensor with -inf."""
- return t.float().fill_(float('-inf')).type_as(t)
-
-
-def fill_with_neg_inf2(t):
- """FP16-compatible function that fills a tensor with -inf."""
- return t.float().fill_(-1e8).type_as(t)
-
-
-def select_attn(attn_logits, type='best'):
- """
-
- :param attn_logits: [n_layers, B, n_head, T_sp, T_txt]
- :return:
- """
- encdec_attn = torch.stack(attn_logits, 0).transpose(1, 2)
- # [n_layers * n_head, B, T_sp, T_txt]
- encdec_attn = (encdec_attn.reshape([-1, *encdec_attn.shape[2:]])).softmax(-1)
- if type == 'best':
- indices = encdec_attn.max(-1).values.sum(-1).argmax(0)
- encdec_attn = encdec_attn.gather(
- 0, indices[None, :, None, None].repeat(1, 1, encdec_attn.size(-2), encdec_attn.size(-1)))[0]
- return encdec_attn
- elif type == 'mean':
- return encdec_attn.mean(0)
-
-
-def make_pad_mask(lengths, xs=None, length_dim=-1):
- """Make mask tensor containing indices of padded part.
- Args:
- lengths (LongTensor or List): Batch of lengths (B,).
- xs (Tensor, optional): The reference tensor.
- If set, masks will be the same shape as this tensor.
- length_dim (int, optional): Dimension indicator of the above tensor.
- See the example.
- Returns:
- Tensor: Mask tensor containing indices of padded part.
- dtype=torch.uint8 in PyTorch 1.2-
- dtype=torch.bool in PyTorch 1.2+ (including 1.2)
- Examples:
- With only lengths.
- >>> lengths = [5, 3, 2]
-        >>> make_pad_mask(lengths)
- masks = [[0, 0, 0, 0 ,0],
- [0, 0, 0, 1, 1],
- [0, 0, 1, 1, 1]]
- With the reference tensor.
- >>> xs = torch.zeros((3, 2, 4))
- >>> make_pad_mask(lengths, xs)
- tensor([[[0, 0, 0, 0],
- [0, 0, 0, 0]],
- [[0, 0, 0, 1],
- [0, 0, 0, 1]],
- [[0, 0, 1, 1],
- [0, 0, 1, 1]]], dtype=torch.uint8)
- >>> xs = torch.zeros((3, 2, 6))
- >>> make_pad_mask(lengths, xs)
- tensor([[[0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1]],
- [[0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1]],
- [[0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
- With the reference tensor and dimension indicator.
- >>> xs = torch.zeros((3, 6, 6))
- >>> make_pad_mask(lengths, xs, 1)
- tensor([[[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1]],
- [[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1]],
- [[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8)
- >>> make_pad_mask(lengths, xs, 2)
- tensor([[[0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 1]],
- [[0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1],
- [0, 0, 0, 1, 1, 1]],
- [[0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1],
- [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
- """
- if length_dim == 0:
- raise ValueError("length_dim cannot be 0: {}".format(length_dim))
-
- if not isinstance(lengths, list):
- lengths = lengths.tolist()
- bs = int(len(lengths))
- if xs is None:
- maxlen = int(max(lengths))
- else:
- maxlen = xs.size(length_dim)
-
- seq_range = torch.arange(0, maxlen, dtype=torch.int64)
- seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen)
- seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1)
- mask = seq_range_expand >= seq_length_expand
-
- if xs is not None:
- assert xs.size(0) == bs, (xs.size(0), bs)
-
- if length_dim < 0:
- length_dim = xs.dim() + length_dim
- # ind = (:, None, ..., None, :, , None, ..., None)
- ind = tuple(
- slice(None) if i in (0, length_dim) else None for i in range(xs.dim())
- )
- mask = mask[ind].expand_as(xs).to(xs.device)
- return mask
-
-
-def make_non_pad_mask(lengths, xs=None, length_dim=-1):
- """Make mask tensor containing indices of non-padded part.
- Args:
- lengths (LongTensor or List): Batch of lengths (B,).
- xs (Tensor, optional): The reference tensor.
- If set, masks will be the same shape as this tensor.
- length_dim (int, optional): Dimension indicator of the above tensor.
- See the example.
- Returns:
- ByteTensor: mask tensor containing indices of padded part.
- dtype=torch.uint8 in PyTorch 1.2-
- dtype=torch.bool in PyTorch 1.2+ (including 1.2)
- Examples:
- With only lengths.
- >>> lengths = [5, 3, 2]
- >>> make_non_pad_mask(lengths)
- masks = [[1, 1, 1, 1 ,1],
- [1, 1, 1, 0, 0],
- [1, 1, 0, 0, 0]]
- With the reference tensor.
- >>> xs = torch.zeros((3, 2, 4))
- >>> make_non_pad_mask(lengths, xs)
- tensor([[[1, 1, 1, 1],
- [1, 1, 1, 1]],
- [[1, 1, 1, 0],
- [1, 1, 1, 0]],
- [[1, 1, 0, 0],
- [1, 1, 0, 0]]], dtype=torch.uint8)
- >>> xs = torch.zeros((3, 2, 6))
- >>> make_non_pad_mask(lengths, xs)
- tensor([[[1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0]],
- [[1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0]],
- [[1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
- With the reference tensor and dimension indicator.
- >>> xs = torch.zeros((3, 6, 6))
- >>> make_non_pad_mask(lengths, xs, 1)
- tensor([[[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0]],
- [[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]],
- [[1, 1, 1, 1, 1, 1],
- [1, 1, 1, 1, 1, 1],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8)
- >>> make_non_pad_mask(lengths, xs, 2)
- tensor([[[1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0],
- [1, 1, 1, 1, 1, 0]],
- [[1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 0, 0, 0]],
- [[1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0],
- [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
- """
- return ~make_pad_mask(lengths, xs, length_dim)
-
-
-def get_mask_from_lengths(lengths):
- max_len = torch.max(lengths).item()
- ids = torch.arange(0, max_len).to(lengths.device)
- mask = (ids < lengths.unsqueeze(1)).bool()
- return mask
-
-
-def group_hidden_by_segs(h, seg_ids, max_len):
- """
-
- :param h: [B, T, H]
- :param seg_ids: [B, T]
- :return: h_ph: [B, T_ph, H]
- """
- B, T, H = h.shape
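-    # Segment ids are assumed to start at 1 (index 0 collects padding); scatter_add_
-    # sums hidden states per segment, and the sums are then averaged by per-segment counts.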
- h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h)
- all_ones = h.new_ones(h.shape[:2])
- cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous()
- h_gby_segs = h_gby_segs[:, 1:]
- cnt_gby_segs = cnt_gby_segs[:, 1:]
- h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1)
- return h_gby_segs, cnt_gby_segs
-
-def expand_word2ph(word_encoding, ph2word):
- word_encoding = F.pad(word_encoding,[0,0,1,0])
- ph2word_ = ph2word[:, :, None].repeat([1, 1, word_encoding.shape[-1]])
- out = torch.gather(word_encoding, 1, ph2word_) # [B, T, H]
- return out
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/app.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/app.py
deleted file mode 100644
index a5e8b40e962276dc9d3154a52df5d7f73d8b3f45..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import os
-import numpy as np
-import torch
-from torch import no_grad, LongTensor
-import argparse
-import commons
-from mel_processing import spectrogram_torch
-import utils
-from models import SynthesizerTrn
-import gradio as gr
-import librosa
-import webbrowser
-
-from text import text_to_sequence, _clean_text
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-language_marks = {
- "English": "[EN]",
- "Japanese": "",
- "日本語": "[JA]",
- "简体中文": "[ZH]",
- "Mix": "",
-}
-lang = ['English','日本語', '简体中文','Mix']
-def get_text(text, hps, is_symbol):
- text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-def create_tts_fn(model, hps, speaker_ids):
- def tts_fn(text, speaker, language, speed):
- if language is not None:
- text = language_marks[language] + text + language_marks[language]
- speaker_id = speaker_ids[speaker]
- stn_tst = get_text(text, hps, False)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8,
- length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return tts_fn
-
-def create_vc_fn(model, hps, speaker_ids):
- def vc_fn(original_speaker, target_speaker, record_audio, upload_audio):
- input_audio = record_audio if record_audio is not None else upload_audio
- if input_audio is None:
- return "You need to record or upload an audio", None
- sampling_rate, audio = input_audio
- original_speaker_id = speaker_ids[original_speaker]
- target_speaker_id = speaker_ids[target_speaker]
-
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != hps.data.sampling_rate:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate)
- with no_grad():
- y = torch.FloatTensor(audio)
- y = y / max(-y.min(), y.max()) / 0.99
- y = y.to(device)
- y = y.unsqueeze(0)
- spec = spectrogram_torch(y, hps.data.filter_length,
- hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length,
- center=False).to(device)
- spec_lengths = LongTensor([spec.size(-1)]).to(device)
- sid_src = LongTensor([original_speaker_id]).to(device)
- sid_tgt = LongTensor([target_speaker_id]).to(device)
- audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][
- 0, 0].data.cpu().float().numpy()
- del y, spec, spec_lengths, sid_src, sid_tgt
- return "Success", (hps.data.sampling_rate, audio)
-
- return vc_fn
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_dir", default="./inference/G_latest.pth", help="directory to your fine-tuned model")
- parser.add_argument("--config_dir", default="./inference/finetune_speaker.json", help="directory to your model config file")
- parser.add_argument("--share", default=False, help="make link public (used in colab)")
-
- args = parser.parse_args()
- hps = utils.get_hparams_from_file(args.config_dir)
-
-
- net_g = SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).to(device)
- _ = net_g.eval()
-
- _ = utils.load_checkpoint(args.model_dir, net_g, None)
- speaker_ids = hps.speakers
- speakers = list(hps.speakers.keys())
- tts_fn = create_tts_fn(net_g, hps, speaker_ids)
- vc_fn = create_vc_fn(net_g, hps, speaker_ids)
- app = gr.Blocks()
-
- with app:
-
- gr.Markdown(
- """# League of Legends Yuumi Text to Speech Demo 魔法猫咪 悠米 TTS
-
-League of Legends Yuumi Text to Speech model trained with Yuumi's English in-game audio.
-
-魔法猫咪 悠米 TTS模型训练数据为游戏内英文语音
-
-## 👍Give original author stars & likes if you liked the project 如果喜欢给原作者一个星星和赞吧!
-
-https://github.com/Plachtaa/VITS-fast-fine-tuning
-
-https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer
-
-https://huggingface.co/spaces/zomehwh/vits-uma-genshin-honkai
-
-## ❓How to fine-tune your own model 如何调试自己的模型
-
-Follow the directions in this repo: https://github.com/Plachtaa/VITS-fast-fine-tuning
-
-按照 https://github.com/Plachtaa/VITS-fast-fine-tuning 操作
-
-## ⚠
-Use of the model should respect https://www.riotgames.com/en/legal
-用该模型请遵守 https://www.riotgames.com/en/legal
-
-Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.
-请不要生成会对个人以及组织造成侵害的内容
-
-⚠Disclaimer: Not legally responsible for anything the model generates
-⚠免责声明: 不对该模型任何输出负责"""
-
- )
-
- with gr.Tab("Text-to-Speech"):
- with gr.Row():
- with gr.Column():
- textbox = gr.TextArea(label="Text",
- placeholder="Type your sentence here",
- value="Hello...... I am Yuumi... Please play me next game! Thank you!", elem_id=f"tts-input")
- # select character
- char_dropdown = gr.Dropdown(choices=speakers, value=speakers[0], label='character')
- language_dropdown = gr.Dropdown(choices=lang, value=lang[0], label='language')
- duration_slider = gr.Slider(minimum=0.1, maximum=5, value=1, step=0.1,
- label='速度 Speed')
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio")
- btn = gr.Button("Generate!")
- btn.click(tts_fn,
- inputs=[textbox, char_dropdown, language_dropdown, duration_slider,],
- outputs=[text_output, audio_output])
-
- app.queue(concurrency_count=1, api_open=False).launch(share=args.share)
-
-
diff --git a/spaces/AlexKorGKLT/webui-cpua/app.py b/spaces/AlexKorGKLT/webui-cpua/app.py
deleted file mode 100644
index 97ae384e8328400efae790b1ac23c035e070d7db..0000000000000000000000000000000000000000
--- a/spaces/AlexKorGKLT/webui-cpua/app.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import os
-from sys import executable as pyexecutable
-import subprocess
-import pathlib
-import gc
-
-def Gitclone(URI:str,ClonePath:str = "") -> int :
- if(ClonePath == "") :
- while True:
- i=subprocess.run([r"git",r"clone",URI])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
- else:
- while True:
- i=subprocess.run([r"git",r"clone",URI,ClonePath])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-def DownLoad(URI:str,DownloadPath:str,DownLoadFileName:str ) -> int:
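-    # aria2c flags: -c resume partial downloads, -x/-s 16 connections/splits, -k 1M min split size,
-    # -m 0 retry indefinitely, -d target directory, -o output file name.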
- while (True):
- i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",DownloadPath,r"-o",DownLoadFileName,URI]);
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",str(user_home / r"stable-diffusion-webui"))
-os.chdir(str(user_home / r"stable-diffusion-webui"))
-os.system("git reset --hard 89f9faa63388756314e8a1d96cf86bf5e0663045")
-#
-
-#install extensions
-print("installing extensions")
-Gitclone(r"https://huggingface.co/embed/negative",str(user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative"))
-Gitclone(r"https://huggingface.co/embed/lora",str(user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive"))
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",str(user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN") ,r"4x-UltraSharp.pth")
-while True:
- if(subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]).returncode == 0):
- break
-Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" ))
-Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",str(user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser"))
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface"))
-Gitclone(r"https://github.com/camenduru/sd-civitai-browser",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser"))
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks"))
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet"))
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor"))
-Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib"))
-Gitclone(r"https://github.com/hnmr293/posex",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"posex"))
-Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor"))
-#For the Chinese localization extension, uncomment the next line
-#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN"))
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete"))
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels"))
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui"))
-Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin"))
-
-#Gitclone(r"https://github.com/KohakuBueleaf/a1111-sd-webui-locon",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-locon" ))
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg"))
-Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot"))
-Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo"))
-
-os.chdir(user_home / r"stable-diffusion-webui")
-
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-for i in range(0,len(dList)): DownLoad(dList[i],str(user_home / "stable-diffusion-webui" / "extensions" / "sd-webui-controlnet" / "models"),pathlib.Path(dList[i]).name)
-del dList
-
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.5-pruned.ckpt")
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.0.vae.pt")
-DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"Counterfeit-V3.0_fp16.safetensors")
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"AOM3A1B_orangemixs.safetensors")
-DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_WithoutVAE.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/9474",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"chilloutmix_NiPrunedFp16.safetensors")
-
-#My additional custom models
-DownLoad(r"https://civitai.com/api/download/models/105674?", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"realisticVisionV30_v30VAE.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/94640", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"majicmixRealistic_v6.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/109123", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamshaper_7.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/27392", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"openjourney_V4.ckpt")
-DownLoad(r"https://civitai.com/api/download/models/95489", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anyloraCheckpoint_bakedvaeBlessedFp16.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/90854", str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"AnythingV5Ink_ink.safetensors")
-
-#LoRA models
-DownLoad(r"https://civitai.com/api/download/models/39885",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"Better_light.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39164",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"backlighting.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/62833",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"add_detail.safetensors")
-
-#start webui
-
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-while True:
- ret=subprocess.run([r"python3" ,r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")])
- if(ret.returncode == 0 ):
- del ret
- gc.collect()
- else :
- del ret
-
-del os, user_home, pyexecutable, subprocess
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/fetch_data/sampler.py b/spaces/AlexWang/lama/fetch_data/sampler.py
deleted file mode 100644
index b25fa1fefc20f7f4eea7dbb69e54a8075570a1d1..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/fetch_data/sampler.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import os
-import random
-
-test_files_path = os.path.abspath('.') + '/places_standard_dataset/original/test/'
-test_files = [test_files_path + image for image in os.listdir(test_files_path)]
-print(f'found {len(test_files)} images in {test_files_path}')
-
-random.shuffle(test_files)
-test_files_random = test_files[0:2000]
-#print(test_files_random[0:10])
-
-list_of_random_test_files = os.path.abspath('.') \
-+ '/places_standard_dataset/original/test_random_files.txt'
-
-print(f'writing {len(test_files_random)} random test image paths to {list_of_random_test_files}')
-with open(list_of_random_test_files, 'w') as fw:
- for filename in test_files_random:
- fw.write(filename+'\n')
-print('...done')
-
-# ----------------------------------------------------------------------------------
-
-
-val_files_path = os.path.abspath('.') + '/places_standard_dataset/original/val/'
-val_files = [val_files_path + image for image in os.listdir(val_files_path)]
-print(f'found {len(val_files)} images in {val_files_path}')
-
-random.shuffle(val_files)
-val_files_random = val_files[0:100]
-
-list_of_random_val_files = os.path.abspath('.') \
-+ '/places_standard_dataset/original/val_random_files.txt'
-
-print(f'writing {len(val_files_random)} random val image paths to {list_of_random_val_files}')
-with open(list_of_random_val_files, 'w') as fw:
- for filename in val_files_random:
- fw.write(filename+'\n')
-print('...done')
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md
deleted file mode 100644
index 0cc40bde47a049fe8895322ead3580cd097b6fb2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Consistency Model Multistep Scheduler
-
-## Overview
-
-A multistep and onestep scheduler (Algorithm 1) introduced alongside consistency models in the paper [Consistency Models](https://arxiv.org/abs/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
-It is based on the [original consistency models implementation](https://github.com/openai/consistency_models) and should generate good samples from [`ConsistencyModelPipeline`] in one or a small number of steps.
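-
-As a rough usage sketch (the checkpoint name and the exact call arguments below are illustrative assumptions, not taken from this page), onestep and multistep sampling with this scheduler could look like:
-
-```py
-import torch
-from diffusers import ConsistencyModelPipeline
-
-# Assumed consistency-distilled checkpoint; substitute any compatible one.
-pipe = ConsistencyModelPipeline.from_pretrained(
-    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
-).to("cuda")
-
-# Onestep sampling
-image = pipe(num_inference_steps=1).images[0]
-
-# Multistep sampling with explicitly chosen timesteps
-image = pipe(num_inference_steps=None, timesteps=[22, 0]).images[0]
-image.save("consistency_sample.png")
-```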
-
-## CMStochasticIterativeScheduler
-[[autodoc]] CMStochasticIterativeScheduler
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/text_to_image/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/text_to_image/README.md
deleted file mode 100644
index cd9397939ac2399ac161f19623430636a4c3c9ad..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/text_to_image/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Stable Diffusion text-to-image fine-tuning
-
-The `train_text_to_image.py` script shows how to fine-tune the Stable Diffusion model on your own dataset.
-
-___Note___:
-
-___This script is experimental. The script fine-tunes the whole model, and often the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best result on your dataset.___
-
-
-## Running locally with PyTorch
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install .
-```
-
-Then cd into the example folder and run
-```bash
-pip install -r requirements.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-### Pokemon example
-
-You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree.
-
-You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
-
-Run the following command to authenticate your token
-
-```bash
-huggingface-cli login
-```
-
-If you have already cloned the repo, then you won't need to go through these steps.
-
-
-
-## Use ONNXRuntime to accelerate training
-To leverage onnxruntime to accelerate training, use `train_text_to_image.py`.
-
-The command to train a DDPM UNetCondition model on the Pokemon dataset with onnxruntime:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export dataset_name="lambdalabs/pokemon-blip-captions"
-accelerate launch --mixed_precision="fp16" train_text_to_image.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --dataset_name=$dataset_name \
- --use_ema \
- --resolution=512 --center_crop --random_flip \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --gradient_checkpointing \
- --max_train_steps=15000 \
- --learning_rate=1e-05 \
- --max_grad_norm=1 \
- --lr_scheduler="constant" --lr_warmup_steps=0 \
- --output_dir="sd-pokemon-model"
-```
-
-Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on GitHub with any questions.
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
deleted file mode 100644
index fb0c07321e53268be1a9372eef3de7a0c918a318..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
+++ /dev/null
@@ -1,1016 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import numpy as np
-import PIL.Image
-import torch
-from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
-
-from ...image_processor import VaeImageProcessor
-from ...loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...models.attention_processor import (
- AttnProcessor2_0,
- LoRAAttnProcessor2_0,
- LoRAXFormersAttnProcessor,
- XFormersAttnProcessor,
-)
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- is_invisible_watermark_available,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-from . import StableDiffusionXLPipelineOutput
-
-
-if is_invisible_watermark_available():
- from .watermark import StableDiffusionXLWatermarker
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import StableDiffusionXLImg2ImgPipeline
- >>> from diffusers.utils import load_image
-
- >>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
- ... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
- ... )
- >>> pipe = pipe.to("cuda")
- >>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
-
- >>> init_image = load_image(url).convert("RGB")
- >>> prompt = "a photo of an astronaut riding a horse on mars"
- >>> image = pipe(prompt, image=init_image).images[0]
- ```
-"""
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
-def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
- """
- Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
- Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
- """
- std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
- std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
- # rescale the results from guidance (fixes overexposure)
- noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
- # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
- noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
- return noise_cfg
-
-
-class StableDiffusionXLImg2ImgPipeline(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin):
- r"""
-    Pipeline for image-to-image generation using Stable Diffusion XL.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- In addition the pipeline inherits the following loading methods:
- - *Textual-Inversion*: [`loaders.TextualInversionLoaderMixin.load_textual_inversion`]
- - *LoRA*: [`loaders.LoraLoaderMixin.load_lora_weights`]
- - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
-
- as well as the following saving methods:
- - *LoRA*: [`loaders.LoraLoaderMixin.save_lora_weights`]
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion XL uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
-        text_encoder_2 ([`CLIPTextModelWithProjection`]):
- Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
- specifically the
- [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
- variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- tokenizer_2 (`CLIPTokenizer`):
- Second Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
- _optional_components = ["tokenizer", "text_encoder"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- text_encoder_2: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- tokenizer_2: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- requires_aesthetics_score: bool = False,
- force_zeros_for_empty_prompt: bool = True,
- add_watermarker: Optional[bool] = None,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer,
- tokenizer_2=tokenizer_2,
- unet=unet,
- scheduler=scheduler,
- )
- self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
- self.register_to_config(requires_aesthetics_score=requires_aesthetics_score)
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
-
- add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
-
- if add_watermarker:
- self.watermark = StableDiffusionXLWatermarker()
- else:
- self.watermark = None
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
- processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- model_sequence = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
- model_sequence.extend([self.unet, self.vae])
-
- hook = None
- for cpu_offloaded_model in model_sequence:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.encode_prompt
- def encode_prompt(
- self,
- prompt: str,
- prompt_2: Optional[str] = None,
- device: Optional[torch.device] = None,
- num_images_per_prompt: int = 1,
- do_classifier_free_guidance: bool = True,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- device = device or self._execution_device
-
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- # Define tokenizers and text encoders
- tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
- text_encoders = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
-
- if prompt_embeds is None:
- prompt_2 = prompt_2 or prompt
-            # textual inversion: process multi-vector tokens if necessary
- prompt_embeds_list = []
- prompts = [prompt, prompt_2]
- for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, tokenizer)
-
- text_inputs = tokenizer(
- prompt,
- padding="max_length",
- max_length=tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- text_input_ids = text_inputs.input_ids
-                untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
-
-                # We are only interested in the pooled output of the final text encoder
- pooled_prompt_embeds = prompt_embeds[0]
- prompt_embeds = prompt_embeds.hidden_states[-2]
-
- prompt_embeds_list.append(prompt_embeds)
-
- prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
-
- # get unconditional embeddings for classifier free guidance
- zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
- if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
- negative_prompt_embeds = torch.zeros_like(prompt_embeds)
- negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
- elif do_classifier_free_guidance and negative_prompt_embeds is None:
- negative_prompt = negative_prompt or ""
- negative_prompt_2 = negative_prompt_2 or negative_prompt
-
- uncond_tokens: List[str]
- if prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt, negative_prompt_2]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = [negative_prompt, negative_prompt_2]
-
- negative_prompt_embeds_list = []
- for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = tokenizer(
- negative_prompt,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- negative_prompt_embeds = text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
-                # We are only interested in the pooled output of the final text encoder
- negative_pooled_prompt_embeds = negative_prompt_embeds[0]
- negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
-
- negative_prompt_embeds_list.append(negative_prompt_embeds)
-
- negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
- if do_classifier_free_guidance:
- negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
-
- return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
-
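-    # Hedged sketch of how the four tensors returned above are consumed later in this file;
-    # the `pipe` instance and the variable names are illustrative only:
-    #     pe, npe, ppe, nppe = pipe.encode_prompt(prompt="a photo", do_classifier_free_guidance=True)
-    #     # pe / npe   -> per-token embeddings passed to the UNet as `encoder_hidden_states`
-    #     # ppe / nppe -> pooled embeddings used as the `add_text_embeds` micro-conditioning
-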
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
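-    # Illustrative result of the signature inspection above: for a DDIM-style scheduler the
-    # returned dict ends up as {"eta": eta, "generator": generator}; schedulers whose `step()`
-    # accepts neither argument simply get an empty dict.
-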
- def check_inputs(
- self,
- prompt,
- prompt_2,
- strength,
- num_inference_steps,
- callback_steps,
- negative_prompt=None,
- negative_prompt_2=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- ):
- if strength < 0 or strength > 1:
-            raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
- if num_inference_steps is None:
- raise ValueError("`num_inference_steps` cannot be None.")
- elif not isinstance(num_inference_steps, int) or num_inference_steps <= 0:
- raise ValueError(
- f"`num_inference_steps` has to be a positive integer but is {num_inference_steps} of type"
- f" {type(num_inference_steps)}."
- )
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt_2 is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
- elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
- raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
- elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- def get_timesteps(self, num_inference_steps, strength, device, denoising_start=None):
- # get the original timestep using init_timestep
- if denoising_start is None:
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
- t_start = max(num_inference_steps - init_timestep, 0)
- else:
- t_start = 0
-
- timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :]
-
- # Strength is irrelevant if we directly request a timestep to start at;
- # that is, strength is determined by the denoising_start instead.
- if denoising_start is not None:
- discrete_timestep_cutoff = int(
- round(
- self.scheduler.config.num_train_timesteps
- - (denoising_start * self.scheduler.config.num_train_timesteps)
- )
- )
- timesteps = list(filter(lambda ts: ts < discrete_timestep_cutoff, timesteps))
- return torch.tensor(timesteps), len(timesteps)
-
- return timesteps, num_inference_steps - t_start
-
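-    # Worked example for the strength-based branch above (illustrative numbers):
-    #     num_inference_steps=50, strength=0.3, denoising_start=None
-    #     init_timestep = min(int(50 * 0.3), 50) = 15
-    #     t_start       = max(50 - 15, 0)        = 35
-    #     -> only the final 15 scheduler timesteps are run, starting from a moderately
-    #        noised version of the input image.
-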
- def prepare_latents(
- self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None, add_noise=True
- ):
- if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
- )
-
- # Offload text encoder if `enable_model_cpu_offload` was enabled
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.text_encoder_2.to("cpu")
- torch.cuda.empty_cache()
-
- image = image.to(device=device, dtype=dtype)
-
- batch_size = batch_size * num_images_per_prompt
-
- if image.shape[1] == 4:
- init_latents = image
-
- else:
- # make sure the VAE is in float32 mode, as it overflows in float16
- if self.vae.config.force_upcast:
- image = image.float()
- self.vae.to(dtype=torch.float32)
-
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- elif isinstance(generator, list):
- init_latents = [
- self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
- ]
- init_latents = torch.cat(init_latents, dim=0)
- else:
- init_latents = self.vae.encode(image).latent_dist.sample(generator)
-
- if self.vae.config.force_upcast:
- self.vae.to(dtype)
-
- init_latents = init_latents.to(dtype)
- init_latents = self.vae.config.scaling_factor * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0:
- # expand init_latents for batch_size
- additional_image_per_prompt = batch_size // init_latents.shape[0]
- init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0)
- elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = torch.cat([init_latents], dim=0)
-
- if add_noise:
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
-
- latents = init_latents
-
- return latents
-
- def _get_add_time_ids(
- self, original_size, crops_coords_top_left, target_size, aesthetic_score, negative_aesthetic_score, dtype
- ):
- if self.config.requires_aesthetics_score:
- add_time_ids = list(original_size + crops_coords_top_left + (aesthetic_score,))
- add_neg_time_ids = list(original_size + crops_coords_top_left + (negative_aesthetic_score,))
- else:
- add_time_ids = list(original_size + crops_coords_top_left + target_size)
- add_neg_time_ids = list(original_size + crops_coords_top_left + target_size)
-
- passed_add_embed_dim = (
- self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
- )
- expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
-
- if (
- expected_add_embed_dim > passed_add_embed_dim
- and (expected_add_embed_dim - passed_add_embed_dim) == self.unet.config.addition_time_embed_dim
- ):
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. Please make sure to enable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=True)` to make sure `aesthetic_score` {aesthetic_score} and `negative_aesthetic_score` {negative_aesthetic_score} is correctly used by the model."
- )
- elif (
- expected_add_embed_dim < passed_add_embed_dim
- and (passed_add_embed_dim - expected_add_embed_dim) == self.unet.config.addition_time_embed_dim
- ):
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. Please make sure to disable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` {target_size} is correctly used by the model."
- )
- elif expected_add_embed_dim != passed_add_embed_dim:
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
- )
-
- add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
- add_neg_time_ids = torch.tensor([add_neg_time_ids], dtype=dtype)
-
- return add_time_ids, add_neg_time_ids
-
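-    # Illustrative layout of the tensors built above (values are placeholders):
-    #     requires_aesthetics_score=True  -> add_time_ids     = [[orig_h, orig_w, crop_top, crop_left, aesthetic_score]]
-    #                                        add_neg_time_ids = [[orig_h, orig_w, crop_top, crop_left, negative_aesthetic_score]]
-    #     requires_aesthetics_score=False -> both tensors are [[orig_h, orig_w, crop_top, crop_left, target_h, target_w]]
-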
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
- def upcast_vae(self):
- dtype = self.vae.dtype
- self.vae.to(dtype=torch.float32)
- use_torch_2_0_or_xformers = isinstance(
- self.vae.decoder.mid_block.attentions[0].processor,
- (
- AttnProcessor2_0,
- XFormersAttnProcessor,
- LoRAXFormersAttnProcessor,
- LoRAAttnProcessor2_0,
- ),
- )
- # if xformers or torch_2_0 is used attention block does not need
- # to be in float32 which can save lots of memory
- if use_torch_2_0_or_xformers:
- self.vae.post_quant_conv.to(dtype)
- self.vae.decoder.conv_in.to(dtype)
- self.vae.decoder.mid_block.to(dtype)
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- prompt_2: Optional[Union[str, List[str]]] = None,
- image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- np.ndarray,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- List[np.ndarray],
- ] = None,
- strength: float = 0.3,
- num_inference_steps: int = 50,
- denoising_start: Optional[float] = None,
- denoising_end: Optional[float] = None,
- guidance_scale: float = 5.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- negative_prompt_2: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- guidance_rescale: float = 0.0,
- original_size: Tuple[int, int] = None,
- crops_coords_top_left: Tuple[int, int] = (0, 0),
- target_size: Tuple[int, int] = None,
- aesthetic_score: float = 6.0,
- negative_aesthetic_score: float = 2.5,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- instead.
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- image (`torch.FloatTensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.FloatTensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`):
- The image(s) to modify with the pipeline.
- strength (`float`, *optional*, defaults to 0.3):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
-                `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. Note that when
-                `denoising_start` is specified, the value of `strength` will be ignored.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- denoising_start (`float`, *optional*):
- When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
- bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
- it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
- strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
- is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
- Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- denoising_end (`float`, *optional*):
- When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
- completed before it is intentionally prematurely terminated. As a result, the returned sample will
- still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
- denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
- final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
- forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
- Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
-            guidance_scale (`float`, *optional*, defaults to 5.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-            guidance_rescale (`float`, *optional*, defaults to 0.0):
-                Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
-                Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16. of
-                [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
- Guidance rescale factor should fix overexposure when using zero terminal SNR.
- original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
-                `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
- explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
- `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
- `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
- `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- For most cases, `target_size` should be set to the desired height and width of the generated image. If
-                not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
- section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- aesthetic_score (`float`, *optional*, defaults to 6.0):
- Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
- Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- negative_aesthetic_score (`float`, *optional*, defaults to 2.5):
- Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
- simulate an aesthetic score of the generated image by influencing the negative text condition.
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
-            `tuple`. When returning a tuple, the first element is a list with the generated images.
- """
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- prompt_2,
- strength,
- num_inference_steps,
- callback_steps,
- negative_prompt,
- negative_prompt_2,
- prompt_embeds,
- negative_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- ) = self.encode_prompt(
- prompt=prompt,
- prompt_2=prompt_2,
- device=device,
- num_images_per_prompt=num_images_per_prompt,
- do_classifier_free_guidance=do_classifier_free_guidance,
- negative_prompt=negative_prompt,
- negative_prompt_2=negative_prompt_2,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- pooled_prompt_embeds=pooled_prompt_embeds,
- negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
- # 4. Preprocess image
- image = self.image_processor.preprocess(image)
-
- # 5. Prepare timesteps
- def denoising_value_valid(dnv):
-            return isinstance(dnv, float) and 0 < dnv < 1
-
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(
-            num_inference_steps, strength, device, denoising_start=denoising_start if denoising_value_valid(denoising_start) else None
- )
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
-
- add_noise = True if denoising_start is None else False
- # 6. Prepare latent variables
- latents = self.prepare_latents(
- image,
- latent_timestep,
- batch_size,
- num_images_per_prompt,
- prompt_embeds.dtype,
- device,
- generator,
- add_noise,
- )
- # 7. Prepare extra step kwargs.
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- height, width = latents.shape[-2:]
- height = height * self.vae_scale_factor
- width = width * self.vae_scale_factor
-
- original_size = original_size or (height, width)
- target_size = target_size or (height, width)
-
- # 8. Prepare added time ids & embeddings
- add_text_embeds = pooled_prompt_embeds
- add_time_ids, add_neg_time_ids = self._get_add_time_ids(
- original_size,
- crops_coords_top_left,
- target_size,
- aesthetic_score,
- negative_aesthetic_score,
- dtype=prompt_embeds.dtype,
- )
- add_time_ids = add_time_ids.repeat(batch_size * num_images_per_prompt, 1)
-
- if do_classifier_free_guidance:
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
- add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
- add_neg_time_ids = add_neg_time_ids.repeat(batch_size * num_images_per_prompt, 1)
- add_time_ids = torch.cat([add_neg_time_ids, add_time_ids], dim=0)
-
- prompt_embeds = prompt_embeds.to(device)
- add_text_embeds = add_text_embeds.to(device)
- add_time_ids = add_time_ids.to(device)
-
- # 9. Denoising loop
- num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
-
- # 9.1 Apply denoising_end
- if (
- denoising_end is not None
- and denoising_start is not None
- and denoising_value_valid(denoising_end)
- and denoising_value_valid(denoising_start)
- and denoising_start >= denoising_end
- ):
- raise ValueError(
- f"`denoising_start`: {denoising_start} cannot be larger than or equal to `denoising_end`: "
- + f" {denoising_end} when using type float."
- )
- elif denoising_end is not None and denoising_value_valid(denoising_end):
- discrete_timestep_cutoff = int(
- round(
- self.scheduler.config.num_train_timesteps
- - (denoising_end * self.scheduler.config.num_train_timesteps)
- )
- )
- num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
- timesteps = timesteps[:num_inference_steps]
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- if do_classifier_free_guidance and guidance_rescale > 0.0:
- # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
- noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # make sure the VAE is in float32 mode, as it overflows in float16
- if self.vae.dtype == torch.float16 and self.vae.config.force_upcast:
- self.upcast_vae()
- latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
-
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- else:
- image = latents
- return StableDiffusionXLPipelineOutput(images=image)
-
- # apply watermark if available
- if self.watermark is not None:
- image = self.watermark.apply_watermark(image)
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image,)
-
- return StableDiffusionXLPipelineOutput(images=image)
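-
-
-# Hedged end-to-end sketch of the "Mixture of Denoisers" setup described in the
-# `denoising_start` / `denoising_end` docstrings above. The checkpoint ids and the use of
-# `DiffusionPipeline` are assumptions for illustration, not definitions from this file:
-#     base = DiffusionPipeline.from_pretrained(
-#         "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
-#     ).to("cuda")
-#     refiner = DiffusionPipeline.from_pretrained(
-#         "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
-#     ).to("cuda")
-#     latents = base(prompt="a majestic lion", denoising_end=0.8, output_type="latent").images
-#     image = refiner(prompt="a majestic lion", image=latents, denoising_start=0.8).images[0]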
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py
deleted file mode 100644
index 464aef787de3c932dc3244a93e62cc3df83002ec..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py
deleted file mode 100644
index 63c8717182f2284ff1062be31bae43b4360c6887..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/README.md
deleted file mode 100644
index 655a845c6ae177c5e18445754f2b4daf823c5c4b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Dual Attention Network for Scene Segmentation
-
-## Introduction
-
-
-
-```latex
-@article{fu2018dual,
- title={Dual Attention Network for Scene Segmentation},
-  author={Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu},
- booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- year={2019}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------- | ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DANet | R-50-D8 | 512x1024 | 40000 | 7.4 | 2.66 | 78.74 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x1024_40k_cityscapes/danet_r50-d8_512x1024_40k_cityscapes_20200605_191324-c0dbfa5f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x1024_40k_cityscapes/danet_r50-d8_512x1024_40k_cityscapes_20200605_191324.log.json) |
-| DANet | R-101-D8 | 512x1024 | 40000 | 10.9 | 1.99 | 80.52 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x1024_40k_cityscapes/danet_r101-d8_512x1024_40k_cityscapes_20200605_200831-c57a7157.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x1024_40k_cityscapes/danet_r101-d8_512x1024_40k_cityscapes_20200605_200831.log.json) |
-| DANet | R-50-D8 | 769x769 | 40000 | 8.8 | 1.56 | 78.88 | 80.62 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_769x769_40k_cityscapes/danet_r50-d8_769x769_40k_cityscapes_20200530_025703-76681c60.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_769x769_40k_cityscapes/danet_r50-d8_769x769_40k_cityscapes_20200530_025703.log.json) |
-| DANet | R-101-D8 | 769x769 | 40000 | 12.8 | 1.07 | 79.88 | 81.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_769x769_40k_cityscapes/danet_r101-d8_769x769_40k_cityscapes_20200530_025717-dcb7fd4e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_769x769_40k_cityscapes/danet_r101-d8_769x769_40k_cityscapes_20200530_025717.log.json) |
-| DANet | R-50-D8 | 512x1024 | 80000 | - | - | 79.34 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x1024_80k_cityscapes/danet_r50-d8_512x1024_80k_cityscapes_20200607_133029-2bfa2293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x1024_80k_cityscapes/danet_r50-d8_512x1024_80k_cityscapes_20200607_133029.log.json) |
-| DANet | R-101-D8 | 512x1024 | 80000 | - | - | 80.41 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x1024_80k_cityscapes/danet_r101-d8_512x1024_80k_cityscapes_20200607_132918-955e6350.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x1024_80k_cityscapes/danet_r101-d8_512x1024_80k_cityscapes_20200607_132918.log.json) |
-| DANet | R-50-D8 | 769x769 | 80000 | - | - | 79.27 | 80.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_769x769_80k_cityscapes/danet_r50-d8_769x769_80k_cityscapes_20200607_132954-495689b4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_769x769_80k_cityscapes/danet_r50-d8_769x769_80k_cityscapes_20200607_132954.log.json) |
-| DANet | R-101-D8 | 769x769 | 80000 | - | - | 80.47 | 82.02 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_769x769_80k_cityscapes/danet_r101-d8_769x769_80k_cityscapes_20200607_132918-f3a929e7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_769x769_80k_cityscapes/danet_r101-d8_769x769_80k_cityscapes_20200607_132918.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| DANet | R-50-D8 | 512x512 | 80000 | 11.5 | 21.20 | 41.66 | 42.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_80k_ade20k/danet_r50-d8_512x512_80k_ade20k_20200615_015125-edb18e08.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_80k_ade20k/danet_r50-d8_512x512_80k_ade20k_20200615_015125.log.json) |
-| DANet | R-101-D8 | 512x512 | 80000 | 15 | 14.18 | 43.64 | 45.19 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_80k_ade20k/danet_r101-d8_512x512_80k_ade20k_20200615_015126-d0357c73.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_80k_ade20k/danet_r101-d8_512x512_80k_ade20k_20200615_015126.log.json) |
-| DANet | R-50-D8 | 512x512 | 160000 | - | - | 42.45 | 43.25 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_160k_ade20k/danet_r50-d8_512x512_160k_ade20k_20200616_082340-9cb35dcd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_160k_ade20k/danet_r50-d8_512x512_160k_ade20k_20200616_082340.log.json) |
-| DANet | R-101-D8 | 512x512 | 160000 | - | - | 44.17 | 45.02 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_160k_ade20k/danet_r101-d8_512x512_160k_ade20k_20200616_082348-23bf12f9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_160k_ade20k/danet_r101-d8_512x512_160k_ade20k_20200616_082348.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DANet | R-50-D8 | 512x512 | 20000 | 6.5 | 20.94 | 74.45 | 75.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_20k_voc12aug/danet_r50-d8_512x512_20k_voc12aug_20200618_070026-9e9e3ab3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_20k_voc12aug/danet_r50-d8_512x512_20k_voc12aug_20200618_070026.log.json) |
-| DANet | R-101-D8 | 512x512 | 20000 | 9.9 | 13.76 | 76.02 | 77.23 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_20k_voc12aug/danet_r101-d8_512x512_20k_voc12aug_20200618_070026-d48d23b2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_20k_voc12aug/danet_r101-d8_512x512_20k_voc12aug_20200618_070026.log.json) |
-| DANet | R-50-D8 | 512x512 | 40000 | - | - | 76.37 | 77.29 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_40k_voc12aug/danet_r50-d8_512x512_40k_voc12aug_20200613_235526-426e3a64.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r50-d8_512x512_40k_voc12aug/danet_r50-d8_512x512_40k_voc12aug_20200613_235526.log.json) |
-| DANet | R-101-D8 | 512x512 | 40000 | - | - | 76.51 | 77.32 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_40k_voc12aug/danet_r101-d8_512x512_40k_voc12aug_20200613_223031-788e232a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/danet/danet_r101-d8_512x512_40k_voc12aug/danet_r101-d8_512x512_40k_voc12aug_20200613_223031.log.json) |
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index c7237ae03c601204dc7c03018ca17ed363090569..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py',
- '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(align_corners=True),
- auxiliary_head=dict(align_corners=True),
- test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/__init__.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/visualization/optflow.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/visualization/optflow.py
deleted file mode 100644
index c3870c700f7c946177ee5d536ce3f6c814a77ce7..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/visualization/optflow.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from __future__ import division
-
-import numpy as np
-
-from annotator.uniformer.mmcv.image import rgb2bgr
-from annotator.uniformer.mmcv.video import flowread
-from .image import imshow
-
-
-def flowshow(flow, win_name='', wait_time=0):
- """Show optical flow.
-
- Args:
- flow (ndarray or str): The optical flow to be displayed.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- """
- flow = flowread(flow)
- flow_img = flow2rgb(flow)
- imshow(rgb2bgr(flow_img), win_name, wait_time)
-
-
-def flow2rgb(flow, color_wheel=None, unknown_thr=1e6):
- """Convert flow map to RGB image.
-
- Args:
- flow (ndarray): Array of optical flow.
- color_wheel (ndarray or None): Color wheel used to map flow field to
- RGB colorspace. Default color wheel will be used if not specified.
- unknown_thr (str): Values above this threshold will be marked as
- unknown and thus ignored.
-
- Returns:
- ndarray: RGB image that can be visualized.
- """
- assert flow.ndim == 3 and flow.shape[-1] == 2
- if color_wheel is None:
- color_wheel = make_color_wheel()
- assert color_wheel.ndim == 2 and color_wheel.shape[1] == 3
- num_bins = color_wheel.shape[0]
-
- dx = flow[:, :, 0].copy()
- dy = flow[:, :, 1].copy()
-
- ignore_inds = (
- np.isnan(dx) | np.isnan(dy) | (np.abs(dx) > unknown_thr) |
- (np.abs(dy) > unknown_thr))
- dx[ignore_inds] = 0
- dy[ignore_inds] = 0
-
- rad = np.sqrt(dx**2 + dy**2)
- if np.any(rad > np.finfo(float).eps):
- max_rad = np.max(rad)
- dx /= max_rad
- dy /= max_rad
-
- rad = np.sqrt(dx**2 + dy**2)
- angle = np.arctan2(-dy, -dx) / np.pi
-
- bin_real = (angle + 1) / 2 * (num_bins - 1)
- bin_left = np.floor(bin_real).astype(int)
- bin_right = (bin_left + 1) % num_bins
- w = (bin_real - bin_left.astype(np.float32))[..., None]
- flow_img = (1 -
- w) * color_wheel[bin_left, :] + w * color_wheel[bin_right, :]
- small_ind = rad <= 1
- flow_img[small_ind] = 1 - rad[small_ind, None] * (1 - flow_img[small_ind])
- flow_img[np.logical_not(small_ind)] *= 0.75
-
- flow_img[ignore_inds, :] = 0
-
- return flow_img
-
-
-def make_color_wheel(bins=None):
- """Build a color wheel.
-
- Args:
- bins(list or tuple, optional): Specify the number of bins for each
- color range, corresponding to six ranges: red -> yellow,
- yellow -> green, green -> cyan, cyan -> blue, blue -> magenta,
- magenta -> red. [15, 6, 4, 11, 13, 6] is used for default
- (see Middlebury).
-
- Returns:
- ndarray: Color wheel of shape (total_bins, 3).
- """
- if bins is None:
- bins = [15, 6, 4, 11, 13, 6]
- assert len(bins) == 6
-
- RY, YG, GC, CB, BM, MR = tuple(bins)
-
- ry = [1, np.arange(RY) / RY, 0]
- yg = [1 - np.arange(YG) / YG, 1, 0]
- gc = [0, 1, np.arange(GC) / GC]
- cb = [0, 1 - np.arange(CB) / CB, 1]
- bm = [np.arange(BM) / BM, 0, 1]
- mr = [1, 0, 1 - np.arange(MR) / MR]
-
- num_bins = RY + YG + GC + CB + BM + MR
-
- color_wheel = np.zeros((3, num_bins), dtype=np.float32)
-
- col = 0
- for i, color in enumerate([ry, yg, gc, cb, bm, mr]):
- for j in range(3):
- color_wheel[j, col:col + bins[i]] = color[j]
- col += bins[i]
-
- return color_wheel.T
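-
-
-# Minimal usage sketch for the helpers above (illustrative only, not part of the original module):
-#     flow = np.random.randn(240, 320, 2).astype(np.float32)   # (H, W, 2) displacement field
-#     rgb = flow2rgb(flow)        # float RGB image in [0, 1] with shape (240, 320, 3)
-#     flowshow(flow)              # or pass a path to a .flo file instead of an ndarray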
diff --git a/spaces/ArkanDash/rvc-models/infer_pack/modules.py b/spaces/ArkanDash/rvc-models/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
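-# Shape sketch (illustrative): this LayerNorm normalizes over the channel axis of
-# (batch, channels, time) tensors, e.g.
-#     ln = LayerNorm(192)
-#     y = ln(torch.randn(1, 192, 400))   # output keeps the shape (1, 192, 400)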
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
-        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
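
For context on the flow modules removed above, here is a minimal sketch of exercising ResidualCouplingLayer in both directions. It assumes the imports at the top of the deleted file (torch, torch.nn, and the commons helpers used by WN) are available; the shapes and hyperparameters are illustrative only.

    import torch

    # toy configuration; real checkpoints use much larger sizes
    channels, hidden, t = 4, 8, 16
    x = torch.randn(2, channels, t)         # [batch, channels, frames]
    x_mask = torch.ones(2, 1, t)            # all frames valid in this toy batch

    flow = ResidualCouplingLayer(channels, hidden, kernel_size=5,
                                 dilation_rate=1, n_layers=2, mean_only=True)
    z, logdet = flow(x, x_mask)             # forward pass returns (z, log-determinant)
    x_back = flow(z, x_mask, reverse=True)  # reverse pass inverts the coupling
    assert torch.allclose(x, x_back, atol=1e-5)
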
diff --git a/spaces/Axolotlily/Interpolate/app.py b/spaces/Axolotlily/Interpolate/app.py
deleted file mode 100644
index 12f631804aada9c827ce6fce50edde7680f35ecb..0000000000000000000000000000000000000000
--- a/spaces/Axolotlily/Interpolate/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import os
-os.system("git clone https://github.com/google-research/frame-interpolation")
-import sys
-sys.path.append("frame-interpolation")
-import numpy as np
-import tensorflow as tf
-import mediapy
-from PIL import Image
-from eval import interpolator, util
-import gradio as gr
-
-from huggingface_hub import snapshot_download
-
-from image_tools.sizes import resize_and_crop
-
-
-model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style")
-
-interpolator = interpolator.Interpolator(model, None)
-
-ffmpeg_path = util.get_ffmpeg_path()
-mediapy.set_ffmpeg(ffmpeg_path)
-
-def resize(width,img):
- basewidth = width
- img = Image.open(img)
- wpercent = (basewidth/float(img.size[0]))
- hsize = int((float(img.size[1])*float(wpercent)))
- img = img.resize((basewidth,hsize), Image.ANTIALIAS)
- return img
-
-
-def resize_img(img1,img2):
- img_target_size = Image.open(img1)
- img_to_resize = resize_and_crop(
- img2,
- (img_target_size.size[0],img_target_size.size[1]), #set width and height to match img1
- crop_origin="middle"
- )
- img_to_resize.save('resized_img2.png')
-
-def predict(frame1, frame2, times_to_interpolate):
-
- frame1 = resize(512,frame1)
- frame2 = resize(512,frame2)
-
- frame1.save("test1.png")
- frame2.save("test2.png")
-
- resize_img("test1.png","test2.png")
- input_frames = ["test1.png", "resized_img2.png"]
-
- frames = list(
- util.interpolate_recursively_from_files(
- input_frames, times_to_interpolate, interpolator))
-
- mediapy.write_video("out.mp4", frames, fps=15)
- return "out.mp4"
-article=""
-description="Using AI to guess the frames between two separate images."
-title="Frame Interpolation"
-examples=[['cat3.jpeg','cat4.jpeg',2]]
-gr.Interface(predict,[gr.inputs.Image(type='filepath'),gr.inputs.Image(type='filepath'),gr.inputs.Slider(minimum=2,maximum=8,step=1)],"playable_video",title=title,description=description,article=article,examples=examples).launch(enable_queue=True)
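
A quick way to smoke-test the predict() function above without launching the Gradio UI, assuming the example images referenced in `examples` (cat3.jpeg, cat4.jpeg) sit next to app.py:

    # run the interpolation directly; predict() always writes and returns "out.mp4"
    video_path = predict("cat3.jpeg", "cat4.jpeg", times_to_interpolate=2)
    print(video_path)
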
diff --git a/spaces/BRICS/README/README.md b/spaces/BRICS/README/README.md
deleted file mode 100644
index 30ce4a8413c28adf14b12b63c33452d716f98d1c..0000000000000000000000000000000000000000
--- a/spaces/BRICS/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 💻
-colorFrom: yellow
-colorTo: blue
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card.
diff --git a/spaces/BasToTheMax/tensor/README.md b/spaces/BasToTheMax/tensor/README.md
deleted file mode 100644
index b041bb7f5ed69097a757244c8bb4d1eb09f7307c..0000000000000000000000000000000000000000
--- a/spaces/BasToTheMax/tensor/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Tensor
-emoji: 🐨
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Coche De Playa Carreras Ruedas Calientes Apk.md b/spaces/Benson/text-generation/Examples/Coche De Playa Carreras Ruedas Calientes Apk.md
deleted file mode 100644
index 31661d51a3881bde23d604d0493349ac402a548a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Coche De Playa Carreras Ruedas Calientes Apk.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-Beach Buggy Racing Hot Wheels APK Download: A Fun and Fast-Paced Kart Racing Game
-
-If you are looking for a fun, fast-paced kart racing game that you can play on your Android device, you should check out Beach Buggy Racing Hot Wheels APK. This is a free game that lets you drive through an action-packed, surprise-filled world of off-road kart-racing mayhem. You can race against a field of rival drivers, each with a unique personality and special ability. You can build a collection of crazy powerups, such as Dodgeball Frenzy, Fireball, and Oil Slick. You can also unlock and upgrade a variety of cars, from dune buggies to monster trucks to lunar rovers. You can test your skills in 6 different game modes on 15 imaginative 3D race tracks, against a pack of tropical-loving rivals with a serious case of road rage. This is the official sequel to Beach Buggy Blitz, the free driving game with over 30 million players worldwide. Fast, furious, fun, and free, Beach Buggy Racing Hot Wheels APK is an island kart-racing adventure for all ages.
-
-Features of Beach Buggy Racing Hot Wheels APK
-
-Beach Buggy Racing Hot Wheels APK has many features that make it an exciting and enjoyable game. Here are some of them:
-
-Thrilling kart-racing action with creative powerups and physics-based gameplay: You can use your driving skills and a collection of creative powerups to fight your way to the finish line. It is not just a great 3D racing game, it is an epic battle with spectacular physics-based gameplay.
-
-Cool cars to customize, from dune buggies to monster trucks to lunar rovers: You can use your winnings to collect and upgrade a garage full of unique cars, from monster trucks to muscle cars to lunar rovers.
-
-15 spectacular race tracks, from dinosaur-infested jungles to lava-spewing volcanoes to beautiful beaches: You can explore jungles, volcanoes, beaches, and more in an action-packed world full of surprises. You will meet giant crabs, angry yetis, flying dragons, and more.
-
-Assemble a team of racers, each with a unique special power: You can recruit a team of drivers to play with, each with a unique special power such as teleportation, flaming fire trails, and confusion spells.
-
-Split-screen multiplayer for up to 4 friends: You can challenge your friends in exciting split-screen multiplayer races, with up to 4 friends on a single device.
-
-Google Play game services integration for leaderboards, achievements, cloud save, and sync: You can compete with your friends on leaderboards, earn achievements, back up your game to the cloud, and keep multiple devices in sync with your Google account.
-
-Play the way you want with tilt steering, touch-screen, or gamepad controls: You can choose from several control options and customize the 3D graphics settings to optimize your gaming experience.
-
-Customize the 3D graphics settings to optimize your gaming experience: You can adjust the graphics quality and performance settings to suit your device and preferences.
-
-How to Download and Install Beach Buggy Racing Hot Wheels APK
-
-If you want to download and install Beach Buggy Racing Hot Wheels APK on your Android device, you can follow these simple steps:
-
-Go to the official Beach Buggy Racing Hot Wheels APK website and click the download button: This starts downloading the APK file to your device. Make sure you have enough storage space and a stable internet connection.
-
-If you see a warning about installing apps from unknown sources, go to your device settings and enable the option to allow it: Some devices block the installation of apps from sources other than the Google Play Store for security reasons. If you see such a message, go to your device settings, find the security or privacy section, and enable the option to allow installing apps from unknown sources. This lets you install Beach Buggy Racing Hot Wheels APK without any problems.
-
-Follow the on-screen instructions to install the game and enjoy: After enabling installation from unknown sources, follow the on-screen instructions to install Beach Buggy Racing Hot Wheels APK on your device. It takes a few seconds or minutes depending on the speed of your device. Once the installation finishes, you can launch the game and enjoy it.
-
-Tips and Tricks for Playing Beach Buggy Racing Hot Wheels APK
-
-Beach Buggy Racing Hot Wheels APK is a fun and addictive game that will keep you entertained for hours. If you want to improve your skills and performance, you can use these tips and tricks:
-
-Use your powerups wisely and strategically to gain an edge over your opponents: Powerups are one of the most important aspects of Beach Buggy Racing Hot Wheels APK. They can help you speed up, slow down, attack, or defend against other racers. For example, you can fire a fireball at an opponent ahead of you, or drop an oil slick to send the racer behind you skidding. You can also use a shield to block incoming attacks, or a turbo to blast past everyone. You need to know when and how to use each powerup effectively.
-
-Try different game modes and race tracks to challenge yourself and have fun: Beach Buggy Racing Hot Wheels APK has six game modes and 15 imaginative 3D race tracks. You can try Championship mode, where you compete in a series of races to become the ultimate champion; Quick Race mode, where you pick any track and any car and race for fun; the Daily Challenge mode, where you earn extra rewards by completing a specific task each day; the Special Events mode, where you take part in limited-time events with special rules and prizes; the Hot Wheels mode, where you race with Hot Wheels cars and tracks; and the Boss Battle mode, where you face the racing bosses and their special powers. Each game mode and race track has its own challenges, surprises, and fun.
-
-Recruit new racers and use their special powers to your advantage: Beach Buggy Racing Hot Wheels APK has a roster of 12 racers, each with a unique special power. You can recruit them by winning races or buying them with gems, and you can switch between them before each race. Each racer has a different personality, style, and power that can help you in different situations. For example, Rez has the power of teleportation, which lets him warp ahead of the pack. Beach Bro has the power of flaming fire trails, which sets the ground behind him on fire. Tiki has the power of confusion spells, which makes other racers lose control of their cars. Experiment with different racers and their powers to find the best combination for each race.
-
-Conclusion
-
-Beach Buggy Racing Hot Wheels APK is a fun and fast-paced kart racing game that you can download for free and enjoy on your Android device. It has many features that make it exciting and enjoyable, such as creative powerups, cool cars, spectacular race tracks, unique racers, and different game modes. It also has a simple installation process, as well as tips and tricks to help you improve your skills and performance. Its split-screen multiplayer and online modes let you play with your friends and have even more fun.
-
-If you are looking for an island kart-racing adventure for all ages, download Beach Buggy Racing Hot Wheels APK today and join the racing mayhem.
-
-FAQ
-
-Here are some frequently asked questions about Beach Buggy Racing Hot Wheels APK:
-
-Is Beach Buggy Racing Hot Wheels APK safe to download and install?: Yes, it is safe to download and install on your Android device. It is developed by Vector Unit, a reputable game developer that has created many popular games such as Riptide GP, Shine Runner, and MouseBot. It is also verified by Google Play Protect, which scans apps for malware and other threats before they are installed on your device.
-
-Is Beach Buggy Racing Hot Wheels APK free to play?: Yes, it is free to play on your Android device. However, it contains optional in-app purchases that can enhance your gaming experience. For example, you can buy gems to unlock new cars and racers faster, or buy coins to upgrade your cars more easily. You can also watch ads to earn extra coins and gems for free.
-
-What are the minimum system requirements for Beach Buggy Racing Hot Wheels APK?: It requires Android 4.1 or higher to run smoothly on your device, along with at least 1 GB of RAM and a decent processor to handle the 3D graphics and physics-based gameplay.
-
-How can I contact the developer of Beach Buggy Racing Hot Wheels APK?: If you have any questions, feedback, or issues, you can contact the game's developer by email at support@vectorunit.com. You can also visit their website or follow them on Facebook or Twitter for more information and updates about the game.
-
-Links: https://beach-buggy-racing-hotwheels.en.uptodown.com/android | https://www.vectorunit.com/ | https://www.facebook.com/VectorUnit | https://twitter.com/VectorUnit
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_headers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_headers.py
deleted file mode 100644
index 87046ab391b9f5e577e6ef0181c50de7e9c7f01b..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_headers.py
+++ /dev/null
@@ -1,45 +0,0 @@
-"""distutils.command.install_headers
-
-Implements the Distutils 'install_headers' command, to install C/C++ header
-files to the Python include directory."""
-
-from distutils.core import Command
-
-
-# XXX force is never used
-class install_headers(Command):
-
- description = "install C/C++ header files"
-
- user_options = [
- ('install-dir=', 'd', "directory to install header files to"),
- ('force', 'f', "force installation (overwrite existing files)"),
- ]
-
- boolean_options = ['force']
-
- def initialize_options(self):
- self.install_dir = None
- self.force = 0
- self.outfiles = []
-
- def finalize_options(self):
- self.set_undefined_options(
- 'install', ('install_headers', 'install_dir'), ('force', 'force')
- )
-
- def run(self):
- headers = self.distribution.headers
- if not headers:
- return
-
- self.mkpath(self.install_dir)
- for header in headers:
- (out, _) = self.copy_file(header, self.install_dir)
- self.outfiles.append(out)
-
- def get_inputs(self):
- return self.distribution.headers or []
-
- def get_outputs(self):
- return self.outfiles
diff --git a/spaces/BilalSardar/karlo-cpu-api/README.md b/spaces/BilalSardar/karlo-cpu-api/README.md
deleted file mode 100644
index fa46049a248c52683309f1cebcb0dc8d658ebea5..0000000000000000000000000000000000000000
--- a/spaces/BilalSardar/karlo-cpu-api/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Karlo Cpu Api
-emoji: 🦀
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py
deleted file mode 100644
index e765cf625c9c5c1524cd70edccb2b8f823fcd6de..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py
+++ /dev/null
@@ -1,498 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import torch
-from fvcore.nn import smooth_l1_loss
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Linear, ShapeSpec, batched_nms, cat
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.structures import Boxes, Instances
-from detectron2.utils.events import get_event_storage
-
-logger = logging.getLogger(__name__)
-
-"""
-Shape shorthand in this module:
-
- N: number of images in the minibatch
- R: number of ROIs, combined over all images, in the minibatch
- Ri: number of ROIs in image i
-    K: number of foreground classes. E.g., there are 80 foreground classes in COCO.
-
-Naming convention:
-
- deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box
- transform (see :class:`box_regression.Box2BoxTransform`).
-
- pred_class_logits: predicted class scores in [-inf, +inf]; use
- softmax(pred_class_logits) to estimate P(class).
-
- gt_classes: ground-truth classification labels in [0, K], where [0, K) represent
- foreground object classes and K represents the background class.
-
- pred_proposal_deltas: predicted box2box transform deltas for transforming proposals
- to detection box predictions.
-
- gt_proposal_deltas: ground-truth box2box transform deltas
-"""
-
-
-def fast_rcnn_inference(boxes, scores, image_shapes, score_thresh, nms_thresh, topk_per_image):
- """
- Call `fast_rcnn_inference_single_image` for all images.
-
- Args:
- boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic
- boxes for each image. Element i has shape (Ri, K * 4) if doing
- class-specific regression, or (Ri, 4) if doing class-agnostic
- regression, where Ri is the number of predicted objects for image i.
- This is compatible with the output of :meth:`FastRCNNOutputs.predict_boxes`.
- scores (list[Tensor]): A list of Tensors of predicted class scores for each image.
- Element i has shape (Ri, K + 1), where Ri is the number of predicted objects
- for image i. Compatible with the output of :meth:`FastRCNNOutputs.predict_probs`.
- image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch.
- score_thresh (float): Only return detections with a confidence score exceeding this
- threshold.
- nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1].
- topk_per_image (int): The number of top scoring detections to return. Set < 0 to return
- all detections.
-
- Returns:
- instances: (list[Instances]): A list of N instances, one for each image in the batch,
- that stores the topk most confidence detections.
- kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates
- the corresponding boxes/scores index in [0, Ri) from the input, for image i.
- """
- result_per_image = [
- fast_rcnn_inference_single_image(
- boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image
- )
- for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes)
- ]
- return [x[0] for x in result_per_image], [x[1] for x in result_per_image]
-
-
-def fast_rcnn_inference_single_image(
- boxes, scores, image_shape, score_thresh, nms_thresh, topk_per_image
-):
- """
- Single-image inference. Return bounding-box detection results by thresholding
- on scores and applying non-maximum suppression (NMS).
-
- Args:
- Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes
- per image.
-
- Returns:
- Same as `fast_rcnn_inference`, but for only one image.
- """
- valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
- if not valid_mask.all():
- boxes = boxes[valid_mask]
- scores = scores[valid_mask]
-
- scores = scores[:, :-1]
- num_bbox_reg_classes = boxes.shape[1] // 4
- # Convert to Boxes to use the `clip` function ...
- boxes = Boxes(boxes.reshape(-1, 4))
- boxes.clip(image_shape)
- boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4) # R x C x 4
-
- # Filter results based on detection scores
- filter_mask = scores > score_thresh # R x K
- # R' x 2. First column contains indices of the R predictions;
- # Second column contains indices of classes.
- filter_inds = filter_mask.nonzero()
- if num_bbox_reg_classes == 1:
- boxes = boxes[filter_inds[:, 0], 0]
- else:
- boxes = boxes[filter_mask]
- scores = scores[filter_mask]
-
- # Apply per-class NMS
- keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh)
- if topk_per_image >= 0:
- keep = keep[:topk_per_image]
- boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]
-
- result = Instances(image_shape)
- result.pred_boxes = Boxes(boxes)
- result.scores = scores
- result.pred_classes = filter_inds[:, 1]
- return result, filter_inds[:, 0]
-
-
-class FastRCNNOutputs(object):
- """
- A class that stores information about outputs of a Fast R-CNN head.
- It provides methods that are used to decode the outputs of a Fast R-CNN head.
- """
-
- def __init__(
- self,
- box2box_transform,
- pred_class_logits,
- pred_proposal_deltas,
- proposals,
- smooth_l1_beta=0,
- ):
- """
- Args:
- box2box_transform (Box2BoxTransform/Box2BoxTransformRotated):
- box2box transform instance for proposal-to-detection transformations.
- pred_class_logits (Tensor): A tensor of shape (R, K + 1) storing the predicted class
- logits for all R predicted object instances.
- Each row corresponds to a predicted object instance.
- pred_proposal_deltas (Tensor): A tensor of shape (R, K * B) or (R, B) for
- class-specific or class-agnostic regression. It stores the predicted deltas that
- transform proposals into final box detections.
- B is the box dimension (4 or 5).
- When B is 4, each row is [dx, dy, dw, dh (, ....)].
- When B is 5, each row is [dx, dy, dw, dh, da (, ....)].
- proposals (list[Instances]): A list of N Instances, where Instances i stores the
- proposals for image i, in the field "proposal_boxes".
- When training, each Instances must have ground-truth labels
- stored in the field "gt_classes" and "gt_boxes".
- The total number of all instances must be equal to R.
- smooth_l1_beta (float): The transition point between L1 and L2 loss in
- the smooth L1 loss function. When set to 0, the loss becomes L1. When
- set to +inf, the loss becomes constant 0.
- """
- self.box2box_transform = box2box_transform
- self.num_preds_per_image = [len(p) for p in proposals]
- self.pred_class_logits = pred_class_logits
- self.pred_proposal_deltas = pred_proposal_deltas
- self.smooth_l1_beta = smooth_l1_beta
- self.image_shapes = [x.image_size for x in proposals]
-
- if len(proposals):
- box_type = type(proposals[0].proposal_boxes)
- # cat(..., dim=0) concatenates over all images in the batch
- self.proposals = box_type.cat([p.proposal_boxes for p in proposals])
- assert (
- not self.proposals.tensor.requires_grad
- ), "Proposals should not require gradients!"
-
- # The following fields should exist only when training.
- if proposals[0].has("gt_boxes"):
- self.gt_boxes = box_type.cat([p.gt_boxes for p in proposals])
- assert proposals[0].has("gt_classes")
- self.gt_classes = cat([p.gt_classes for p in proposals], dim=0)
- else:
- self.proposals = Boxes(torch.zeros(0, 4, device=self.pred_proposal_deltas.device))
- self._no_instances = len(proposals) == 0 # no instances found
-
- def _log_accuracy(self):
- """
- Log the accuracy metrics to EventStorage.
- """
- num_instances = self.gt_classes.numel()
- pred_classes = self.pred_class_logits.argmax(dim=1)
- bg_class_ind = self.pred_class_logits.shape[1] - 1
-
- fg_inds = (self.gt_classes >= 0) & (self.gt_classes < bg_class_ind)
- num_fg = fg_inds.nonzero().numel()
- fg_gt_classes = self.gt_classes[fg_inds]
- fg_pred_classes = pred_classes[fg_inds]
-
- num_false_negative = (fg_pred_classes == bg_class_ind).nonzero().numel()
- num_accurate = (pred_classes == self.gt_classes).nonzero().numel()
- fg_num_accurate = (fg_pred_classes == fg_gt_classes).nonzero().numel()
-
- storage = get_event_storage()
- if num_instances > 0:
- storage.put_scalar("fast_rcnn/cls_accuracy", num_accurate / num_instances)
- if num_fg > 0:
- storage.put_scalar("fast_rcnn/fg_cls_accuracy", fg_num_accurate / num_fg)
- storage.put_scalar("fast_rcnn/false_negative", num_false_negative / num_fg)
-
- def softmax_cross_entropy_loss(self):
- """
- Compute the softmax cross entropy loss for box classification.
-
- Returns:
- scalar Tensor
- """
- if self._no_instances:
- return 0.0 * F.cross_entropy(
- self.pred_class_logits,
- torch.zeros(0, dtype=torch.long, device=self.pred_class_logits.device),
- reduction="sum",
- )
- else:
- self._log_accuracy()
- return F.cross_entropy(self.pred_class_logits, self.gt_classes, reduction="mean")
-
- def smooth_l1_loss(self):
- """
- Compute the smooth L1 loss for box regression.
-
- Returns:
- scalar Tensor
- """
- if self._no_instances:
- return 0.0 * smooth_l1_loss(
- self.pred_proposal_deltas,
- torch.zeros_like(self.pred_proposal_deltas),
- 0.0,
- reduction="sum",
- )
- gt_proposal_deltas = self.box2box_transform.get_deltas(
- self.proposals.tensor, self.gt_boxes.tensor
- )
- box_dim = gt_proposal_deltas.size(1) # 4 or 5
- cls_agnostic_bbox_reg = self.pred_proposal_deltas.size(1) == box_dim
- device = self.pred_proposal_deltas.device
-
- bg_class_ind = self.pred_class_logits.shape[1] - 1
-
- # Box delta loss is only computed between the prediction for the gt class k
- # (if 0 <= k < bg_class_ind) and the target; there is no loss defined on predictions
- # for non-gt classes and background.
- # Empty fg_inds produces a valid loss of zero as long as the size_average
- # arg to smooth_l1_loss is False (otherwise it uses torch.mean internally
- # and would produce a nan loss).
- fg_inds = torch.nonzero((self.gt_classes >= 0) & (self.gt_classes < bg_class_ind)).squeeze(
- 1
- )
- if cls_agnostic_bbox_reg:
- # pred_proposal_deltas only corresponds to foreground class for agnostic
- gt_class_cols = torch.arange(box_dim, device=device)
- else:
- fg_gt_classes = self.gt_classes[fg_inds]
- # pred_proposal_deltas for class k are located in columns [b * k : b * k + b],
- # where b is the dimension of box representation (4 or 5)
- # Note that compared to Detectron1,
- # we do not perform bounding box regression for background classes.
- gt_class_cols = box_dim * fg_gt_classes[:, None] + torch.arange(box_dim, device=device)
-
- loss_box_reg = smooth_l1_loss(
- self.pred_proposal_deltas[fg_inds[:, None], gt_class_cols],
- gt_proposal_deltas[fg_inds],
- self.smooth_l1_beta,
- reduction="sum",
- )
- # The loss is normalized using the total number of regions (R), not the number
- # of foreground regions even though the box regression loss is only defined on
- # foreground regions. Why? Because doing so gives equal training influence to
- # each foreground example. To see how, consider two different minibatches:
- # (1) Contains a single foreground region
- # (2) Contains 100 foreground regions
- # If we normalize by the number of foreground regions, the single example in
- # minibatch (1) will be given 100 times as much influence as each foreground
- # example in minibatch (2). Normalizing by the total number of regions, R,
- # means that the single example in minibatch (1) and each of the 100 examples
- # in minibatch (2) are given equal influence.
- loss_box_reg = loss_box_reg / self.gt_classes.numel()
- return loss_box_reg
-
- def _predict_boxes(self):
- """
- Returns:
- Tensor: A Tensors of predicted class-specific or class-agnostic boxes
- for all images in a batch. Element i has shape (Ri, K * B) or (Ri, B), where Ri is
- the number of predicted objects for image i and B is the box dimension (4 or 5)
- """
- num_pred = len(self.proposals)
- B = self.proposals.tensor.shape[1]
- K = self.pred_proposal_deltas.shape[1] // B
- boxes = self.box2box_transform.apply_deltas(
- self.pred_proposal_deltas.view(num_pred * K, B),
- self.proposals.tensor.unsqueeze(1).expand(num_pred, K, B).reshape(-1, B),
- )
- return boxes.view(num_pred, K * B)
-
- """
- A subclass is expected to have the following methods because
-    they are used to query information about the head predictions.
- """
-
- def losses(self):
- """
- Compute the default losses for box head in Fast(er) R-CNN,
- with softmax cross entropy loss and smooth L1 loss.
-
- Returns:
- A dict of losses (scalar tensors) containing keys "loss_cls" and "loss_box_reg".
- """
- return {
- "loss_cls": self.softmax_cross_entropy_loss(),
- "loss_box_reg": self.smooth_l1_loss(),
- }
-
- def predict_boxes(self):
- """
- Returns:
- list[Tensor]: A list of Tensors of predicted class-specific or class-agnostic boxes
- for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is
- the number of predicted objects for image i and B is the box dimension (4 or 5)
- """
- return self._predict_boxes().split(self.num_preds_per_image, dim=0)
-
- def predict_boxes_for_gt_classes(self):
- """
- Returns:
- list[Tensor]: A list of Tensors of predicted boxes for GT classes in case of
- class-specific box head. Element i of the list has shape (Ri, B), where Ri is
- the number of predicted objects for image i and B is the box dimension (4 or 5)
- """
- predicted_boxes = self._predict_boxes()
- B = self.proposals.tensor.shape[1]
- # If the box head is class-agnostic, then the method is equivalent to `predicted_boxes`.
- if predicted_boxes.shape[1] > B:
- num_pred = len(self.proposals)
- num_classes = predicted_boxes.shape[1] // B
- # Some proposals are ignored or have a background class. Their gt_classes
- # cannot be used as index.
- gt_classes = torch.clamp(self.gt_classes, 0, num_classes - 1)
- predicted_boxes = predicted_boxes.view(num_pred, num_classes, B)[
- torch.arange(num_pred, dtype=torch.long, device=predicted_boxes.device), gt_classes
- ]
- return predicted_boxes.split(self.num_preds_per_image, dim=0)
-
- def predict_probs(self):
- """
- Returns:
- list[Tensor]: A list of Tensors of predicted class probabilities for each image.
- Element i has shape (Ri, K + 1), where Ri is the number of predicted objects
- for image i.
- """
- probs = F.softmax(self.pred_class_logits, dim=-1)
- return probs.split(self.num_preds_per_image, dim=0)
-
- def inference(self, score_thresh, nms_thresh, topk_per_image):
- """
- Args:
- score_thresh (float): same as fast_rcnn_inference.
- nms_thresh (float): same as fast_rcnn_inference.
- topk_per_image (int): same as fast_rcnn_inference.
- Returns:
- list[Instances]: same as fast_rcnn_inference.
- list[Tensor]: same as fast_rcnn_inference.
- """
- boxes = self.predict_boxes()
- scores = self.predict_probs()
- image_shapes = self.image_shapes
-
- return fast_rcnn_inference(
- boxes, scores, image_shapes, score_thresh, nms_thresh, topk_per_image
- )
-
-
-class FastRCNNOutputLayers(nn.Module):
- """
- Two linear layers for predicting Fast R-CNN outputs:
- (1) proposal-to-detection box regression deltas
- (2) classification scores
- """
-
- @configurable
- def __init__(
- self,
- input_shape,
- box2box_transform,
- num_classes,
- cls_agnostic_bbox_reg=False,
- smooth_l1_beta=0.0,
- test_score_thresh=0.0,
- test_nms_thresh=0.5,
- test_topk_per_image=100,
- ):
- """
- Args:
- input_shape (ShapeSpec): shape of the input feature to this module
- box2box_transform (Box2BoxTransform or Box2BoxTransformRotated):
- num_classes (int): number of foreground classes
- cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression
- smooth_l1_beta (float): transition point from L1 to L2 loss.
- test_score_thresh (float): threshold to filter predictions results.
- test_nms_thresh (float): NMS threshold for prediction results.
- test_topk_per_image (int): number of top predictions to produce per image.
- """
- super().__init__()
-        if isinstance(input_shape, int):  # some backward compatibility
- input_shape = ShapeSpec(channels=input_shape)
- input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1)
- # The prediction layer for num_classes foreground classes and one background class
- # (hence + 1)
- self.cls_score = Linear(input_size, num_classes + 1)
- num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes
- box_dim = len(box2box_transform.weights)
- self.bbox_pred = Linear(input_size, num_bbox_reg_classes * box_dim)
-
- nn.init.normal_(self.cls_score.weight, std=0.01)
- nn.init.normal_(self.bbox_pred.weight, std=0.001)
- for l in [self.cls_score, self.bbox_pred]:
- nn.init.constant_(l.bias, 0)
-
- self.box2box_transform = box2box_transform
- self.smooth_l1_beta = smooth_l1_beta
- self.test_score_thresh = test_score_thresh
- self.test_nms_thresh = test_nms_thresh
- self.test_topk_per_image = test_topk_per_image
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- return {
- "input_shape": input_shape,
- "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS),
- # fmt: off
- "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES,
- "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG,
- "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA,
- "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST,
- "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
- "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE
- # fmt: on
- }
-
- def forward(self, x):
- """
- Returns:
- Tensor: Nx(K+1) scores for each box
- Tensor: Nx4 or Nx(Kx4) bounding box regression deltas.
- """
- if x.dim() > 2:
- x = torch.flatten(x, start_dim=1)
- scores = self.cls_score(x)
- proposal_deltas = self.bbox_pred(x)
- return scores, proposal_deltas
-
- # TODO: move the implementation to this class.
- def losses(self, predictions, proposals):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features
- that were used to compute predictions.
- """
- scores, proposal_deltas = predictions
- return FastRCNNOutputs(
- self.box2box_transform, scores, proposal_deltas, proposals, self.smooth_l1_beta
- ).losses()
-
- def inference(self, predictions, proposals):
- scores, proposal_deltas = predictions
- return FastRCNNOutputs(
- self.box2box_transform, scores, proposal_deltas, proposals, self.smooth_l1_beta
- ).inference(self.test_score_thresh, self.test_nms_thresh, self.test_topk_per_image)
-
- def predict_boxes_for_gt_classes(self, predictions, proposals):
- scores, proposal_deltas = predictions
- return FastRCNNOutputs(
- self.box2box_transform, scores, proposal_deltas, proposals, self.smooth_l1_beta
- ).predict_boxes_for_gt_classes()
-
- def predict_boxes(self, predictions, proposals):
- scores, proposal_deltas = predictions
- return FastRCNNOutputs(
- self.box2box_transform, scores, proposal_deltas, proposals, self.smooth_l1_beta
- ).predict_boxes()
-
- def predict_probs(self, predictions, proposals):
- scores, proposal_deltas = predictions
- return FastRCNNOutputs(
- self.box2box_transform, scores, proposal_deltas, proposals, self.smooth_l1_beta
- ).predict_probs()
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_config.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_config.py
deleted file mode 100644
index 650bdf2c42107c7031709653783cb2f3043e1bdf..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_config.py
+++ /dev/null
@@ -1,240 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-
-import os
-import tempfile
-import unittest
-import torch
-
-from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config
-from detectron2.layers import ShapeSpec
-
-_V0_CFG = """
-MODEL:
- RPN_HEAD:
- NAME: "TEST"
-VERSION: 0
-"""
-
-_V1_CFG = """
-MODEL:
- WEIGHT: "/path/to/weight"
-"""
-
-
-class TestConfigVersioning(unittest.TestCase):
- def test_upgrade_downgrade_consistency(self):
- cfg = get_cfg()
- # check that custom is preserved
- cfg.USER_CUSTOM = 1
-
- down = downgrade_config(cfg, to_version=0)
- up = upgrade_config(down)
- self.assertTrue(up == cfg)
-
- def _merge_cfg_str(self, cfg, merge_str):
- f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False)
- try:
- f.write(merge_str)
- f.close()
- cfg.merge_from_file(f.name)
- finally:
- os.remove(f.name)
- return cfg
-
- def test_auto_upgrade(self):
- cfg = get_cfg()
- latest_ver = cfg.VERSION
- cfg.USER_CUSTOM = 1
-
- self._merge_cfg_str(cfg, _V0_CFG)
-
- self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST")
- self.assertEqual(cfg.VERSION, latest_ver)
-
- def test_guess_v1(self):
- cfg = get_cfg()
- latest_ver = cfg.VERSION
- self._merge_cfg_str(cfg, _V1_CFG)
- self.assertEqual(cfg.VERSION, latest_ver)
-
-
-class _TestClassA(torch.nn.Module):
- @configurable
- def __init__(self, arg1, arg2, arg3=3):
- super().__init__()
- self.arg1 = arg1
- self.arg2 = arg2
- self.arg3 = arg3
- assert arg1 == 1
- assert arg2 == 2
- assert arg3 == 3
-
- @classmethod
- def from_config(cls, cfg):
- args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2}
- return args
-
-
-class _TestClassB(_TestClassA):
- @configurable
- def __init__(self, input_shape, arg1, arg2, arg3=3):
- """
- Doc of _TestClassB
- """
- assert input_shape == "shape"
- super().__init__(arg1, arg2, arg3)
-
- @classmethod
- def from_config(cls, cfg, input_shape): # test extra positional arg in from_config
- args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2}
- args["input_shape"] = input_shape
- return args
-
-
-class _LegacySubClass(_TestClassB):
- # an old subclass written in cfg style
- def __init__(self, cfg, input_shape, arg4=4):
- super().__init__(cfg, input_shape)
- assert self.arg1 == 1
- assert self.arg2 == 2
- assert self.arg3 == 3
-
-
-class _NewSubClassNewInit(_TestClassB):
- # test new subclass with a new __init__
- @configurable
- def __init__(self, input_shape, arg4=4, **kwargs):
- super().__init__(input_shape, **kwargs)
- assert self.arg1 == 1
- assert self.arg2 == 2
- assert self.arg3 == 3
-
-
-class _LegacySubClassNotCfg(_TestClassB):
- # an old subclass written in cfg style, but argument is not called "cfg"
- def __init__(self, config, input_shape):
- super().__init__(config, input_shape)
- assert self.arg1 == 1
- assert self.arg2 == 2
- assert self.arg3 == 3
-
-
-class _TestClassC(_TestClassB):
- @classmethod
- def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite
- args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2}
- args["input_shape"] = input_shape
- args.update(kwargs)
- return args
-
-
-class _TestClassD(_TestClassA):
- @configurable
- def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3):
- assert input_shape == "shape"
- super().__init__(arg1, arg2, arg3)
-
- # _TestClassA.from_config does not have input_shape args.
- # Test whether input_shape will be forwarded to __init__
-
-
-class TestConfigurable(unittest.TestCase):
- def testInitWithArgs(self):
- _ = _TestClassA(arg1=1, arg2=2, arg3=3)
- _ = _TestClassB("shape", arg1=1, arg2=2)
- _ = _TestClassC("shape", arg1=1, arg2=2)
- _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3)
-
- def testPatchedAttr(self):
- self.assertTrue("Doc" in _TestClassB.__init__.__doc__)
- self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int)
-
- def testInitWithCfg(self):
- cfg = get_cfg()
- cfg.ARG1 = 1
- cfg.ARG2 = 2
- cfg.ARG3 = 3
- _ = _TestClassA(cfg)
- _ = _TestClassB(cfg, input_shape="shape")
- _ = _TestClassC(cfg, input_shape="shape")
- _ = _TestClassD(cfg, input_shape="shape")
- _ = _LegacySubClass(cfg, input_shape="shape")
- _ = _NewSubClassNewInit(cfg, input_shape="shape")
- _ = _LegacySubClassNotCfg(cfg, input_shape="shape")
- with self.assertRaises(TypeError):
- # disallow forwarding positional args to __init__ since it's prone to errors
- _ = _TestClassD(cfg, "shape")
-
- # call with kwargs instead
- _ = _TestClassA(cfg=cfg)
- _ = _TestClassB(cfg=cfg, input_shape="shape")
- _ = _TestClassC(cfg=cfg, input_shape="shape")
- _ = _TestClassD(cfg=cfg, input_shape="shape")
- _ = _LegacySubClass(cfg=cfg, input_shape="shape")
- _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape")
- _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape")
-
- def testInitWithCfgOverwrite(self):
- cfg = get_cfg()
- cfg.ARG1 = 1
- cfg.ARG2 = 999 # wrong config
- with self.assertRaises(AssertionError):
- _ = _TestClassA(cfg, arg3=3)
-
- # overwrite arg2 with correct config later:
- _ = _TestClassA(cfg, arg2=2, arg3=3)
- _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3)
- _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3)
- _ = _TestClassD(cfg, input_shape="shape", arg2=2, arg3=3)
-
- # call with kwargs cfg=cfg instead
- _ = _TestClassA(cfg=cfg, arg2=2, arg3=3)
- _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3)
- _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3)
- _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3)
-
- def testInitWithCfgWrongArgs(self):
- cfg = get_cfg()
- cfg.ARG1 = 1
- cfg.ARG2 = 2
- with self.assertRaises(TypeError):
- _ = _TestClassB(cfg, "shape", not_exist=1)
- with self.assertRaises(TypeError):
- _ = _TestClassC(cfg, "shape", not_exist=1)
- with self.assertRaises(TypeError):
- _ = _TestClassD(cfg, "shape", not_exist=1)
-
- def testBadClass(self):
- class _BadClass1:
- @configurable
- def __init__(self, a=1, b=2):
- pass
-
- class _BadClass2:
- @configurable
- def __init__(self, a=1, b=2):
- pass
-
- def from_config(self, cfg): # noqa
- pass
-
- class _BadClass3:
- @configurable
- def __init__(self, a=1, b=2):
- pass
-
- # bad name: must be cfg
- @classmethod
- def from_config(cls, config): # noqa
- pass
-
- with self.assertRaises(AttributeError):
- _ = _BadClass1(a=1)
-
- with self.assertRaises(TypeError):
- _ = _BadClass2(a=1)
-
- with self.assertRaises(TypeError):
- _ = _BadClass3(get_cfg())
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.cpp
deleted file mode 100644
index f3c2eaafa9918baf38483725cd52c48aa6ecb8af..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.cpp
+++ /dev/null
@@ -1,226 +0,0 @@
-/*
- tests/test_operator_overloading.cpp -- operator overloading
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-#include "constructor_stats.h"
-#include <pybind11/operators.h>
-#include <functional>
-
-class Vector2 {
-public:
- Vector2(float x, float y) : x(x), y(y) { print_created(this, toString()); }
- Vector2(const Vector2 &v) : x(v.x), y(v.y) { print_copy_created(this); }
- Vector2(Vector2 &&v) : x(v.x), y(v.y) { print_move_created(this); v.x = v.y = 0; }
- Vector2 &operator=(const Vector2 &v) { x = v.x; y = v.y; print_copy_assigned(this); return *this; }
- Vector2 &operator=(Vector2 &&v) { x = v.x; y = v.y; v.x = v.y = 0; print_move_assigned(this); return *this; }
- ~Vector2() { print_destroyed(this); }
-
- std::string toString() const { return "[" + std::to_string(x) + ", " + std::to_string(y) + "]"; }
-
- Vector2 operator-() const { return Vector2(-x, -y); }
- Vector2 operator+(const Vector2 &v) const { return Vector2(x + v.x, y + v.y); }
- Vector2 operator-(const Vector2 &v) const { return Vector2(x - v.x, y - v.y); }
- Vector2 operator-(float value) const { return Vector2(x - value, y - value); }
- Vector2 operator+(float value) const { return Vector2(x + value, y + value); }
- Vector2 operator*(float value) const { return Vector2(x * value, y * value); }
- Vector2 operator/(float value) const { return Vector2(x / value, y / value); }
- Vector2 operator*(const Vector2 &v) const { return Vector2(x * v.x, y * v.y); }
- Vector2 operator/(const Vector2 &v) const { return Vector2(x / v.x, y / v.y); }
- Vector2& operator+=(const Vector2 &v) { x += v.x; y += v.y; return *this; }
- Vector2& operator-=(const Vector2 &v) { x -= v.x; y -= v.y; return *this; }
- Vector2& operator*=(float v) { x *= v; y *= v; return *this; }
- Vector2& operator/=(float v) { x /= v; y /= v; return *this; }
- Vector2& operator*=(const Vector2 &v) { x *= v.x; y *= v.y; return *this; }
- Vector2& operator/=(const Vector2 &v) { x /= v.x; y /= v.y; return *this; }
-
- friend Vector2 operator+(float f, const Vector2 &v) { return Vector2(f + v.x, f + v.y); }
- friend Vector2 operator-(float f, const Vector2 &v) { return Vector2(f - v.x, f - v.y); }
- friend Vector2 operator*(float f, const Vector2 &v) { return Vector2(f * v.x, f * v.y); }
- friend Vector2 operator/(float f, const Vector2 &v) { return Vector2(f / v.x, f / v.y); }
-
- bool operator==(const Vector2 &v) const {
- return x == v.x && y == v.y;
- }
- bool operator!=(const Vector2 &v) const {
- return x != v.x || y != v.y;
- }
-private:
- float x, y;
-};
-
-class C1 { };
-class C2 { };
-
-int operator+(const C1 &, const C1 &) { return 11; }
-int operator+(const C2 &, const C2 &) { return 22; }
-int operator+(const C2 &, const C1 &) { return 21; }
-int operator+(const C1 &, const C2 &) { return 12; }
-
-// Note: Specializing explicit within `namespace std { ... }` is done due to a
-// bug in GCC<7. If you are supporting compilers later than this, consider
-// specializing `using template<> struct std::hash<...>` in the global
-// namespace instead, per this recommendation:
-// https://en.cppreference.com/w/cpp/language/extending_std#Adding_template_specializations
-namespace std {
- template<>
-    struct hash<Vector2> {
- // Not a good hash function, but easy to test
- size_t operator()(const Vector2 &) { return 4; }
- };
-}
-
-// Not a good abs function, but easy to test.
-std::string abs(const Vector2&) {
- return "abs(Vector2)";
-}
-
-// MSVC warns about unknown pragmas, and warnings are errors.
-#ifndef _MSC_VER
- #pragma GCC diagnostic push
- // clang 7.0.0 and Apple LLVM 10.0.1 introduce `-Wself-assign-overloaded` to
- // `-Wall`, which is used here for overloading (e.g. `py::self += py::self `).
- // Here, we suppress the warning using `#pragma diagnostic`.
- // Taken from: https://github.com/RobotLocomotion/drake/commit/aaf84b46
- // TODO(eric): This could be resolved using a function / functor (e.g. `py::self()`).
- #if (__APPLE__) && (__clang__)
- #if (__clang_major__ >= 10) && (__clang_minor__ >= 0) && (__clang_patchlevel__ >= 1)
- #pragma GCC diagnostic ignored "-Wself-assign-overloaded"
- #endif
- #elif (__clang__)
- #if (__clang_major__ >= 7)
- #pragma GCC diagnostic ignored "-Wself-assign-overloaded"
- #endif
- #endif
-#endif
-
-TEST_SUBMODULE(operators, m) {
-
- // test_operator_overloading
-    py::class_<Vector2>(m, "Vector2")
-        .def(py::init<float, float>())
- .def(py::self + py::self)
- .def(py::self + float())
- .def(py::self - py::self)
- .def(py::self - float())
- .def(py::self * float())
- .def(py::self / float())
- .def(py::self * py::self)
- .def(py::self / py::self)
- .def(py::self += py::self)
- .def(py::self -= py::self)
- .def(py::self *= float())
- .def(py::self /= float())
- .def(py::self *= py::self)
- .def(py::self /= py::self)
- .def(float() + py::self)
- .def(float() - py::self)
- .def(float() * py::self)
- .def(float() / py::self)
- .def(-py::self)
- .def("__str__", &Vector2::toString)
- .def("__repr__", &Vector2::toString)
- .def(py::self == py::self)
- .def(py::self != py::self)
- .def(py::hash(py::self))
- // N.B. See warning about usage of `py::detail::abs(py::self)` in
- // `operators.h`.
- .def("__abs__", [](const Vector2& v) { return abs(v); })
- ;
-
- m.attr("Vector") = m.attr("Vector2");
-
- // test_operators_notimplemented
- // #393: need to return NotSupported to ensure correct arithmetic operator behavior
-    py::class_<C1>(m, "C1")
- .def(py::init<>())
- .def(py::self + py::self);
-
-    py::class_<C2>(m, "C2")
- .def(py::init<>())
- .def(py::self + py::self)
- .def("__add__", [](const C2& c2, const C1& c1) { return c2 + c1; })
- .def("__radd__", [](const C2& c2, const C1& c1) { return c1 + c2; });
-
- // test_nested
- // #328: first member in a class can't be used in operators
- struct NestABase { int value = -2; };
-    py::class_<NestABase>(m, "NestABase")
- .def(py::init<>())
- .def_readwrite("value", &NestABase::value);
-
- struct NestA : NestABase {
- int value = 3;
- NestA& operator+=(int i) { value += i; return *this; }
- };
-    py::class_<NestA>(m, "NestA")
- .def(py::init<>())
- .def(py::self += int())
- .def("as_base", [](NestA &a) -> NestABase& {
- return (NestABase&) a;
- }, py::return_value_policy::reference_internal);
- m.def("get_NestA", [](const NestA &a) { return a.value; });
-
- struct NestB {
- NestA a;
- int value = 4;
- NestB& operator-=(int i) { value -= i; return *this; }
- };
-    py::class_<NestB>(m, "NestB")
- .def(py::init<>())
- .def(py::self -= int())
- .def_readwrite("a", &NestB::a);
- m.def("get_NestB", [](const NestB &b) { return b.value; });
-
- struct NestC {
- NestB b;
- int value = 5;
- NestC& operator*=(int i) { value *= i; return *this; }
- };
-    py::class_<NestC>(m, "NestC")
- .def(py::init<>())
- .def(py::self *= int())
- .def_readwrite("b", &NestC::b);
- m.def("get_NestC", [](const NestC &c) { return c.value; });
-
-
- // test_overriding_eq_reset_hash
- // #2191 Overriding __eq__ should set __hash__ to None
- struct Comparable {
- int value;
- bool operator==(const Comparable& rhs) const {return value == rhs.value;}
- };
-
- struct Hashable : Comparable {
- explicit Hashable(int value): Comparable{value}{};
-        size_t hash() const { return static_cast<size_t>(value); }
- };
-
- struct Hashable2 : Hashable {
- using Hashable::Hashable;
- };
-
-    py::class_<Comparable>(m, "Comparable")
-        .def(py::init<int>())
- .def(py::self == py::self);
-
-    py::class_<Hashable>(m, "Hashable")
-        .def(py::init<int>())
- .def(py::self == py::self)
- .def("__hash__", &Hashable::hash);
-
- // define __hash__ before __eq__
-    py::class_<Hashable2>(m, "Hashable2")
- .def("__hash__", &Hashable::hash)
-        .def(py::init<int>())
- .def(py::self == py::self);
-}
-
-#ifndef _MSC_VER
- #pragma GCC diagnostic pop
-#endif
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/remove.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/remove.h
deleted file mode 100644
index 48de522dfdc5b1f6e0e274eb31f98d352943fccd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/remove.h
+++ /dev/null
@@ -1,202 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file remove.h
- * \brief Sequential implementations of remove functions.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-#include <thrust/detail/function.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
-  ForwardIterator remove_if(sequential::execution_policy<DerivedPolicy> &,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- // advance iterators until wrapped_pred(*first) is true or we reach the end of input
- while(first != last && !wrapped_pred(*first))
- ++first;
-
- if(first == last)
- return first;
-
- // result always trails first
- ForwardIterator result = first;
-
- ++first;
-
- while(first != last)
- {
- if(!wrapped_pred(*first))
- {
- *result = *first;
- ++result;
- }
- ++first;
- }
-
- return result;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-__host__ __device__
-  ForwardIterator remove_if(sequential::execution_policy<DerivedPolicy> &,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- // advance iterators until wrapped_pred(*stencil) is true or we reach the end of input
- while(first != last && !wrapped_pred(*stencil))
- {
- ++first;
- ++stencil;
- }
-
- if(first == last)
- return first;
-
- // result always trails first
- ForwardIterator result = first;
-
- ++first;
- ++stencil;
-
- while(first != last)
- {
- if(!wrapped_pred(*stencil))
- {
- *result = *first;
- ++result;
- }
- ++first;
- ++stencil;
- }
-
- return result;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename Predicate>
-__host__ __device__
-  OutputIterator remove_copy_if(sequential::execution_policy<DerivedPolicy> &,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- while (first != last)
- {
- if (!wrapped_pred(*first))
- {
- *result = *first;
- ++result;
- }
-
- ++first;
- }
-
- return result;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename Predicate>
-__host__ __device__
-  OutputIterator remove_copy_if(sequential::execution_policy<DerivedPolicy> &,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator result,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- while (first != last)
- {
- if (!wrapped_pred(*stencil))
- {
- *result = *first;
- ++result;
- }
-
- ++first;
- ++stencil;
- }
-
- return result;
-}
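-
-// Note on semantics: as with std::remove_if / std::remove_copy_if, the in-place overloads
-// compact the kept elements to the front and return an iterator to the new logical end,
-// while the _copy_ overloads leave the input untouched and return the end of the output.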
-
-
-} // end namespace sequential
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/Cecil8352/vits-models/text/__init__.py b/spaces/Cecil8352/vits-models/text/__init__.py
deleted file mode 100644
index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000
--- a/spaces/Cecil8352/vits-models/text/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, symbols, cleaner_names):
-  '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-    Args:
-      text: string to convert to a sequence
-      symbols: list of valid symbols; characters not in this list are skipped
-      cleaner_names: names of the cleaner functions to run the text through
-    Returns:
-      Tuple of (sequence, clean_text): the list of symbol IDs and the cleaned text
-  '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence, clean_text
-
-
-def cleaned_text_to_sequence(cleaned_text):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
-      cleaned_text: string of already-cleaned text to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
diff --git a/spaces/Chomkwoy/Nilkessye/cpool_new/src/right_pool.cpp b/spaces/Chomkwoy/Nilkessye/cpool_new/src/right_pool.cpp
deleted file mode 100644
index b606c47bb0eb68440851b61dae3e5786675fd486..0000000000000000000000000000000000000000
--- a/spaces/Chomkwoy/Nilkessye/cpool_new/src/right_pool.cpp
+++ /dev/null
@@ -1,91 +0,0 @@
-// #include <torch/torch.h>
-#include <torch/extension.h>
-
-#include <vector>
-
-std::vector<torch::Tensor> pool_forward(
- torch::Tensor input
-) {
- // Initialize output
- torch::Tensor output = torch::zeros_like(input);
-
- // Get width
- int64_t width = input.size(3);
-
- // Copy the last column
- torch::Tensor input_temp = input.select(3, 0);
- torch::Tensor output_temp = output.select(3, 0);
- output_temp.copy_(input_temp);
-
- torch::Tensor max_temp;
- for (int64_t ind = 0; ind < width - 1; ++ind) {
- input_temp = input.select(3, ind + 1);
- output_temp = output.select(3, ind);
- max_temp = output.select(3, ind + 1);
-
- torch::max_out(max_temp, input_temp, output_temp);
- }
-
- return {
- output
- };
-}
-
-std::vector<torch::Tensor> pool_backward(
- torch::Tensor input,
- torch::Tensor grad_output
-) {
- torch::Tensor output = torch::zeros_like(input);
-
- int32_t batch = input.size(0);
- int32_t channel = input.size(1);
- int32_t height = input.size(2);
- int32_t width = input.size(3);
-
- // auto max_val = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, height});
- // auto max_ind = torch::zeros(torch::CUDA(torch::kLong), {batch, channel, height});
- auto max_val = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA));
- auto max_ind = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kLong).device(torch::kCUDA));
-
- auto input_temp = input.select(3, 0);
- max_val.copy_(input_temp);
-
- max_ind.fill_(0);
-
- auto output_temp = output.select(3, 0);
- auto grad_output_temp = grad_output.select(3, 0);
- output_temp.copy_(grad_output_temp);
-
- auto un_max_ind = max_ind.unsqueeze(3);
- // auto gt_mask = torch::zeros(torch::CUDA(torch::kByte), {batch, channel, height});
- // auto max_temp = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, height});
- auto gt_mask = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kByte).device(torch::kCUDA));
- auto max_temp = torch::zeros({batch, channel, height}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA));
-
- for (int32_t ind = 0; ind < width - 1; ++ind) {
- input_temp = input.select(3, ind + 1);
- torch::gt_out(gt_mask, input_temp, max_val);
-
- torch::masked_select_out(max_temp, input_temp, gt_mask);
- max_val.masked_scatter_(gt_mask, max_temp);
- max_ind.masked_fill_(gt_mask, ind + 1);
-
- grad_output_temp = grad_output.select(3, ind + 1).unsqueeze(3);
- output.scatter_add_(3, un_max_ind, grad_output_temp);
- }
-
- return {
- output
- };
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def(
- "forward", &pool_forward, "Right Pool Forward",
-        py::call_guard<py::gil_scoped_release>()
- );
- m.def(
- "backward", &pool_backward, "Right Pool Backward",
-        py::call_guard<py::gil_scoped_release>()
- );
-}
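-
-// Illustrative build/usage sketch (module and variable names assumed): this file is
-// typically compiled as a PyTorch C++ extension, e.g. with
-//   from torch.utils.cpp_extension import load
-//   right_pool = load(name="right_pool", sources=["right_pool.cpp"])
-//   out, = right_pool.forward(x)                 # x: (N, C, H, W) tensor
-//   grad_in, = right_pool.backward(x, grad_out)  # backward buffers assume CUDA tensors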
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/GSUIDCore.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/GSUIDCore.js
deleted file mode 100644
index c790d7f5a138c784d8d164d2e9ad03afa2024798..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/GSUIDCore.js
+++ /dev/null
@@ -1,249 +0,0 @@
-import { randomUUID } from "crypto"
-import path from "node:path"
-import fs from "node:fs"
-
-Bot.adapter.push(new class GSUIDCoreAdapter {
- constructor() {
- this.id = "GSUIDCore"
- this.name = "早柚核心"
- this.path = this.id
- }
-
- toStr(data) {
- switch (typeof data) {
- case "string":
- return data
- case "number":
- return String(data)
- case "object":
- if (Buffer.isBuffer(data))
- return Buffer.from(data, "utf8").toString()
- else
- return JSON.stringify(data)
- }
- return data
- }
-
- makeLog(msg) {
- return this.toStr(msg).replace(/base64:\/\/.*?"/g, "base64://...\"")
- }
-
- makeMsg(msg) {
- if (!Array.isArray(msg))
- msg = [msg]
- const msgs = []
- for (let i of msg) {
- if (typeof i != "object")
- i = { type: "text", text: i }
-
- switch (i.type) {
- case "text":
- i = { type: "text", data: i.text }
- break
- case "image":
- i = { type: "image", data: i.file }
- break
- case "record":
- i = { type: "file", data: i.file }
- break
- case "video":
- i = { type: "file", data: i.file }
- break
- case "file":
- i = { type: "file", data: i.file }
- break
- case "at":
- i = { type: "at", data: i.qq }
- break
- case "reply":
- i = { type: "reply", data: i.id }
- break
- case "node": {
- const array = []
- for (const { message } of i.data)
- array.push(...this.makeMsg(message))
- i.data = array
- break
- } default:
- i = { type: "text", data: JSON.stringify(i) }
- }
- msgs.push(i)
- }
- return msgs
- }
-
- sendFriendMsg(data, msg) {
- const content = this.makeMsg(msg)
- logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友消息:${this.makeLog(content)}`)
- data.bot.sendApi({
- bot_id: data.bot.bot_id,
- bot_self_id: data.bot.bot_self_id,
- target_type: "direct",
- target_id: data.user_id,
- content,
- })
- return { message_id: Date.now() }
- }
-
- sendGroupMsg(data, msg) {
- const target = data.group_id.split("-")
- const content = this.makeMsg(msg)
- logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} 发送群消息:${this.makeLog(content)}`)
- data.bot.sendApi({
- bot_id: data.bot.bot_id,
- bot_self_id: data.bot.bot_self_id,
- target_type: target[0],
- target_id: target[1],
- content,
- })
- return { message_id: Date.now() }
- }
-
- pickFriend(id, user_id) {
- const i = {
- ...Bot[id].fl.get(user_id),
- self_id: id,
- bot: Bot[id],
- user_id: user_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendFriendMsg(i, msg),
- }
- }
-
- pickMember(id, group_id, user_id) {
- const i = {
- ...Bot[id].fl.get(user_id),
- self_id: id,
- bot: Bot[id],
- group_id: group_id,
- user_id: user_id,
- }
- return {
- ...this.pickFriend(id, user_id),
- ...i,
- }
- }
-
- pickGroup(id, group_id) {
- const i = {
- ...Bot[id].gl.get(group_id),
- self_id: id,
- bot: Bot[id],
- group_id: group_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendGroupMsg(i, msg),
- pickMember: user_id => this.pickMember(id, group_id, user_id),
- }
- }
-
- makeBot(data, ws) {
- Bot[data.self_id] = {
- adapter: this,
- ws: ws,
- get sendApi() { return this.ws.sendMsg },
- uin: data.self_id,
- bot_id: data.bot_id,
- bot_self_id: data.bot_self_id,
- stat: { start_time: Date.now()/1000 },
- version: {
- id: this.id,
- name: this.name,
- },
- pickFriend: user_id => this.pickFriend(data.self_id, user_id),
- get pickUser() { return this.pickFriend },
- pickMember: (group_id, user_id) => this.pickMember(data.self_id, group_id, user_id),
- pickGroup: group_id => this.pickGroup(data.self_id, group_id),
- fl: new Map,
- gl: new Map,
- gml: new Map,
- }
-
- logger.mark(`${logger.blue(`[${data.self_id}]`)} ${this.name}(${this.id}) 已连接`)
- Bot.em(`connect.${data.self_id}`, data)
- }
-
- message(data, ws) {
- try {
- data = JSON.parse(data)
- } catch (err) {
- return logger.error(`解码数据失败:${logger.red(err)}`)
- }
-
- data.self_id = data.bot_self_id
- if (Bot[data.self_id]) {
- data.bot = Bot[data.self_id]
- data.bot.ws = ws
- } else {
- this.makeBot(data, ws)
- }
-
- data.post_type = "message"
- data.message_id = data.msg_id
- data.user_id = data.user_id
- data.sender = {
- user_id: data.user_id,
- user_pm: data.user_pm,
- }
- if (!data.bot.fl.has(data.user_id))
- data.bot.fl.set(data.user_id, data.sender)
-
- data.message = []
- data.raw_message = ""
- for (const i of data.content) {
- switch (i.type) {
- case "text":
- data.message.push({ type: "text", text: i.data })
- data.raw_message += i.data
- break
- case "image":
- data.message.push({ type: "image", url: i.data })
- data.raw_message += `[图片:${i.data}]`
- break
- case "file":
- data.message.push({ type: "file", url: i.data })
- data.raw_message += `[文件:${i.data}]`
- break
- case "at":
- data.message.push({ type: "at", qq: i.data })
- data.raw_message += `[提及:${i.data}]`
- break
- case "reply":
- data.message.push({ type: "reply", id: i.data })
- data.raw_message += `[回复:${i.data}]`
- break
- case "node":
- data.message.push({ type: "node", data: i.data })
- data.raw_message += `[合并转发:${JSON.stringify(i.data)}]`
- break
- default:
- data.message.push(i)
- data.raw_message += JSON.stringify(i)
- }
- }
-
- if (data.user_type == "direct") {
- data.message_type = "private"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息:[${data.user_id}] ${data.raw_message}`)
- } else {
- data.message_type = "group"
- data.group_id = `${data.user_type}-${data.group_id}`
- if (!data.bot.gl.has(data.group_id))
- data.bot.gl.set(data.group_id, { group_id: data.group_id })
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息:[${data.group_id}, ${data.user_id}] ${data.raw_message}`)
- }
-
- Bot.em(`${data.post_type}.${data.message_type}`, data)
- }
-
- load() {
- if (!Array.isArray(Bot.wsf[this.path]))
- Bot.wsf[this.path] = []
- Bot.wsf[this.path].push((ws, ...args) =>
- ws.on("message", data => this.message(data, ws, ...args))
- )
- }
-})
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/call_110/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/call_110/__init__.py
deleted file mode 100644
index 241f6fcc37209a66dbd767b6630a211065f6780d..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/call_110/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-
-
-def call_110(images: List[BuildImage], texts, args):
- img1 = images[0].convert("RGBA").square().resize((250, 250))
- img0 = images[1].convert("RGBA").square().resize((250, 250))
-
- frame = BuildImage.new("RGB", (900, 500), "white")
- frame.draw_text((0, 0, 900, 200), "遇到困难请拨打", max_fontsize=100)
- frame.paste(img1, (50, 200), alpha=True)
- frame.paste(img1, (325, 200), alpha=True)
- frame.paste(img0, (600, 200), alpha=True)
- return frame.save_jpg()
-
-
-add_meme("call_110", call_110, min_images=2, max_images=2, keywords=["遇到困难请拨打"])
diff --git a/spaces/CognitiveLabs/Research-Assistant/config/singleton.py b/spaces/CognitiveLabs/Research-Assistant/config/singleton.py
deleted file mode 100644
index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/Research-Assistant/config/singleton.py
+++ /dev/null
@@ -1,24 +0,0 @@
-"""The singleton metaclass for ensuring only one instance of a class."""
-import abc
-
-
-class Singleton(abc.ABCMeta, type):
- """
- Singleton metaclass for ensuring only one instance of a class.
- """
-
- _instances = {}
-
- def __call__(cls, *args, **kwargs):
- """Call method for the singleton metaclass."""
- if cls not in cls._instances:
- cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
- return cls._instances[cls]
-
-
-class AbstractSingleton(abc.ABC, metaclass=Singleton):
- """
- Abstract singleton class for ensuring only one instance of a class.
- """
-
- pass
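-
-# Illustrative usage (class name assumed): any class using this metaclass is constructed
-# once, and every later call returns the same instance.
-#
-#   class Config(metaclass=Singleton):
-#       pass
-#
-#   assert Config() is Config()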
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_classify/classification.py b/spaces/Cpp4App/Cpp4App/CDM/detect_classify/classification.py
deleted file mode 100644
index fef30d40c43d601c990418c96de1928cf88e5209..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_classify/classification.py
+++ /dev/null
@@ -1,380 +0,0 @@
-from CDM.detect_merge.Element import Element
-import CDM.detect_compo.lib_ip.ip_preprocessing as pre
-import time
-import cv2
-import torch
-import numpy as np
-from torchvision import models
-from torch import nn
-import pandas as pd
-import re
-import openai
-import random
-import os
-from CDM.detect_merge.merge import reassign_ids
-import CDM.detect_merge.merge as merge
-from os.path import join as pjoin, exists
-
-label_dic ={'72':'Location', '42':'Photos', '77':'Social media', '91':'Voices', '6':'Email', '89':'Social media', '40':'Location', '43':'Phone', '82':'Photos',
- '3':'Contacts', '68':'Contacts', '49':'Profile', '56':'Photos'}
-
-keyword_list = {'Name':['name', 'first name', 'last name', 'full name', 'real name', 'surname', 'family name', 'given name'],
- 'Birthday':['birthday', 'date of birth', 'birth date', 'DOB', 'dob full birthday', 'birth year'],
- 'Address':['mailing address', 'physical address', 'postal address', 'billing address', 'shipping address', 'delivery address', 'residence', 'collect address', 'personal address', 'residential address'],
- 'Phone':['phone', 'phone number', 'mobile', 'mobile phone', 'mobile number', 'telephone', 'telephone number', 'call'],
- 'Email':['email', 'e-mail', 'email address', 'e-mail address'],
- 'Contacts':['contacts', 'phone-book', 'phone book', 'phonebook', 'contact list', 'phone contacts', 'address book'],
- 'Location':['location', 'locate', 'geography', 'geo', 'geo-location', 'precision location', 'nearby'],
- 'Photos':['camera', 'photo', 'scan', 'album', 'picture', 'gallery', 'photo library', 'storage', 'image', 'video', 'scanner', 'photograph'],
- 'Voices':['microphone', 'voice', 'mic', 'speech', 'talk'],
- 'Financial info':['credit card', 'pay', 'payment', 'debit card', 'mastercard', 'wallet'],
- 'IP':['IP', 'Internet Protocol', 'IP address', 'internet protocol address'],
- 'Cookies':['cookies', 'cookie'],
- 'Social media':['facebook', 'twitter', 'socialmedia', 'social media'],
- 'Profile':['profile', 'account'],
- 'Gender':['gender']}
-
-def get_data_type(sentence, keywords, use_gpt=True):
-
- sent_data_type = "others"
-
- if use_gpt:
- openai.api_key = os.environ["OPENAI_API_KEY"]
-
- prompt = f"Is this piece of texts \"{sentence}\" related to any following privacy information data types? Or not relevant to any of them? ONLY answer the data type or \"not relevant\". ONLY use following data type list. Data types and their Description:\n" \
- f"Name: How a user refers to themselves," \
- f" Birthday: A user’s birthday," \
- f" Address: A user’s address," \
- f" Phone: A user’s phone number," \
- f" Email: A user’s email address," \
- f" Contacts: A user’s contact information, or the access to the contact permission," \
- f" Location: A user’s location information, or the access to the location permission," \
- f" Photos: A user’s photos, videos, or the access to the camera permission," \
- f" Voices: A user’s voices, recordings, or the access to the microphone permission," \
- f" Financial Info: Information about a user’s financial accounts, purchases, or transactions," \
- f" Profile: A user’s account information," \
- f"Social Media: A user's social media information, or the access to social media accounts"
-
- response = openai.ChatCompletion.create(
- # engine="text-davinci-002",
- model="gpt-3.5-turbo",
- messages=[
- # {"role": "system", "content": "You are a helpful assistant."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=100,
- n=1,
- stop=None,
- temperature=0,
- )
-
- # response_full_text = response.choices[0].text.strip()
- response_full_text = response.choices[0].message['content']
- for k in keywords.keys():
- if k == "Financial info" or k == "Social media":
- if k.lower() in response_full_text.lower():
- sent_data_type = k
- break
- else:
- words = re.split(r'\W+', response_full_text.lower())
- if k.lower() in words:
- sent_data_type = k
- break
-
- # print("----------------------")
- # print("sentence: ", sentence)
- # print("prompt: ", prompt)
- # print("response: ", response_full_text)
- # print("sent_data_type: ", sent_data_type)
-
- else:
- for k in keywords.keys():
- for w in keywords[k]:
- words = re.split(r'\W+', sentence.lower())
- if w.lower() in words:
- sent_data_type = k
- break
- if sent_data_type != "others":
- break
-
- return sent_data_type
-
-# def get_clf_model(use_resnet18=True, use_gpu=False):
-#
-# device = 'cpu'
-# if use_gpu:
-# device = 'cuda:0'
-#
-# if use_resnet18:
-# model = models.resnet18().to(device)
-# in_feature_num = model.fc.in_features
-# model.fc = nn.Linear(in_feature_num, 99)
-# model.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(5, 5), padding=(3, 3), stride=(2, 2),
-# bias=False)
-#
-# PATH = "./CDM/model/model-99-resnet18.pkl"
-# model.load_state_dict(torch.load(PATH, map_location=torch.device(device)))
-#
-# model.eval()
-# else:
-# # replace with your own model
-# None
-#
-# return model
-
-def get_clf_model(clf_model="ResNet18", use_gpu=False):
-
- device = 'cpu'
- if use_gpu:
- device = 'cuda:0'
-
- if clf_model == "ResNet18":
- model = models.resnet18().to(device)
- in_feature_num = model.fc.in_features
- model.fc = nn.Linear(in_feature_num, 99)
- model.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(5, 5), padding=(3, 3), stride=(2, 2),
- bias=False)
-
- PATH = "./CDM/model/model-99-resnet18.pkl"
- model.load_state_dict(torch.load(PATH, map_location=torch.device(device)))
-
- model.eval()
- elif clf_model == "ViT":
- model = torch.load('./CDM/model/model-99-ViT-entire.pkl', map_location=torch.device(device))
- model = model.to(device)
- model.eval()
-
- else:
- # replace with your own model
- None
-
- return model
-
-def compo_classification(input_img, output_root, segment_root, merge_json, output_data, resize_by_height=800, clf_model="ResNet18"):
- # load text and non-text compo
- ele_id = 0
- compos = []
- texts = []
- elements = []
-
- for compo in merge_json['compos']:
- if compo['class'] == 'Text':
- element = Element(ele_id,
- (compo["position"]['column_min'], compo["position"]['row_min'],
- compo["position"]['column_max'], compo["position"]['row_max']),
- 'Text', text_content=compo['text_content'])
- texts.append(element)
- ele_id += 1
- else:
- element = Element(ele_id,
- (compo["position"]['column_min'], compo["position"]['row_min'],
- compo["position"]['column_max'], compo["position"]['row_max']),
- compo['class'])
- compos.append(element)
- ele_id += 1
-
- org, grey = pre.read_img(input_img, resize_by_height)
-
- grey = grey.astype('float32')
- grey = grey / 255
-
- # grey = (grey - grey.mean()) / grey.std()
-
- # --------- classification ----------
-
- classification_start_time = time.process_time()
-
- for compo in compos:
-
- # comp_grey = grey[compo.row_min:compo.row_max, compo.col_min:compo.col_max]
- #
- # comp_crop = cv2.resize(comp_grey, (32, 32))
- #
- # comp_crop = comp_crop.reshape(1, 1, 32, 32)
- #
- # comp_tensor = torch.tensor(comp_crop)
- # comp_tensor = comp_tensor.permute(0, 1, 3, 2)
- #
- # model = get_clf_model()
- # pred_label = model(comp_tensor)
- #
- # if str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0]) in label_dic.keys():
- # compo.label = label_dic[str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0])]
- # elements.append(compo)
- # else:
- # compo.label = str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0])
-
- if clf_model == "ResNet18":
-
- comp_grey = grey[compo.row_min:compo.row_max, compo.col_min:compo.col_max]
-
- comp_crop = cv2.resize(comp_grey, (32, 32))
-
- comp_crop = comp_crop.reshape(1, 1, 32, 32)
-
- comp_tensor = torch.tensor(comp_crop)
- comp_tensor = comp_tensor.permute(0, 1, 3, 2)
-
- model = get_clf_model(clf_model)
- pred_label = model(comp_tensor)
-
- if str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0]) in label_dic.keys():
- compo.label = label_dic[str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0])]
- elements.append(compo)
- else:
- compo.label = str(np.argmax(pred_label.cpu().data.numpy(), axis=1)[0])
-
- elif clf_model == "ViT":
-
- comp_grey = grey[compo.row_min:compo.row_max, compo.col_min:compo.col_max]
-
- comp_crop = cv2.resize(comp_grey, (224, 224))
-
- # Convert the image to tensor
- comp_tensor = torch.from_numpy(comp_crop)
-
- # Reshape and repeat along the channel dimension to convert to RGB
- comp_tensor = comp_tensor.view(1, 224, 224).repeat(3, 1, 1)
-
- # comp_tensor = comp_tensor.permute(0, 2, 1)
-
- comp_tensor = comp_tensor.unsqueeze(0) # add a batch dimension
-
- model = get_clf_model(clf_model)
- # pred_label = model(comp_tensor)
-
- # Forward pass through the model
- with torch.no_grad():
- output = model(comp_tensor)
-
- # Get the predicted label
- _, predicted = torch.max(output.logits, 1)
-
- # print("predicted_label: ", predicted.cpu().numpy())
-
- if str(predicted.cpu().numpy()[0]) in label_dic.keys():
- compo.label = label_dic[str(predicted.cpu().numpy()[0])]
- elements.append(compo)
- else:
- compo.label = str(predicted.cpu().numpy()[0])
-
- else:
- print("clf_model has to be ResNet18 or ViT")
-
- time_cost_ic = time.process_time() - classification_start_time
- print("time cost for icon classification: %2.2f s" % time_cost_ic)
- # ic_time_cost_all.append(time_cost_ic)
-
- # --------- end classification ----------
-
- text_selection_time = time.process_time()
-
- for this_text in texts:
- # found_flag = 0
- #
- # for key in keyword_list:
- # for w in keyword_list[key]:
- # words = re.split(r'\W+', this_text.text_content.lower())
- # if w.lower() in words:
- # this_text.label = key
- # elements.append(this_text)
- # found_flag = 1
- # break
- #
- # if found_flag == 0:
- # this_text.label = 'others'
-
- retries = 10
- for i in range(retries):
- try:
- text_label = get_data_type(this_text.text_content.lower(), keyword_list, use_gpt=False)
- break
- except openai.error.RateLimitError as e:
- if "overloaded" in str(e):
- # Exponential backoff with jitter
- sleep_time = 2 * (2 ** i) + random.uniform(0, 0.1)
- time.sleep(sleep_time)
- else:
- raise
- except Exception as e:
- raise
-
- this_text.label = text_label
-
- if this_text.label != "others":
- elements.append(this_text)
-
- time_cost_ts = time.process_time() - text_selection_time
- print("time cost for text selection: %2.2f s" % time_cost_ts)
- # ts_time_cost_all.append(time_cost_ts)
-
- # ---------- end -------------------------------
-
- full_size_org, full_size_grey = pre.read_img(input_img)
- ratio = full_size_org.shape[0]/org.shape[0]
-
- show = False
- wait_key = 0
-
- reassign_ids(elements)
- board = merge.show_elements(full_size_org, elements, ratio, show=show, win_name='elements after merging', wait_key=wait_key, line=3)
- board_one_element = merge.show_one_element(full_size_org, elements, ratio, show=show, win_name='elements after merging', wait_key=wait_key, line=3)
-
- classification_root = pjoin(output_root, 'classification')
-
- # save all merged elements, clips and blank background
- name = input_img.replace('\\', '/').split('/')[-1][:-4]
- components = merge.save_elements(pjoin(classification_root, name + '.json'), elements, full_size_org.shape, ratio)
- cv2.imwrite(pjoin(classification_root, name + '.jpg'), board)
-
- print("len(board_one_element): ", len(board_one_element))
-
- for i in range(len(elements)):
- e_name = str(int(elements[i].id) + 1)
- cv2.imwrite(pjoin(classification_root + '/GUI', name + '-' + e_name + '.jpg'), board_one_element[i])
-
- print('[Classification Completed] Input: %s Output: %s' % (input_img, pjoin(classification_root, name + '.jpg')))
-
- # ---------- matching result -----------
-
- index = input_img.split('/')[-1][:-4]
- app_id = str(index).split('-')[0]
-
- index_path = pjoin(segment_root, app_id, 'classified_sentences/keyword_index.txt')
- dict_index = {}
- if exists(index_path):
- with open(index_path, 'r') as g:
- for line in g:
- key, value = line.strip().split(':', 1)
- dict_index[key] = value
-
- for item in elements:
- complete_path = pjoin(segment_root, app_id, 'classified_sentences', item.label + '.txt')
- print("complete_path: ", complete_path)
-
- if exists(complete_path):
-
- with open(complete_path, 'r', encoding='utf-8') as file:
- content = file.read()
-
- # Replace line breaks with spaces and strip any extra whitespace
- this_text = ' '.join(content.splitlines()).strip()
-
- lines = content.splitlines()
- non_empty_lines = [line for line in lines if line.strip() != ""]
- for i in range(len(non_empty_lines)):
- if non_empty_lines[i][0].isalpha():
- non_empty_lines[i] = non_empty_lines[i][0].upper() + non_empty_lines[i][1:]
-
- # output_data = output_data.append({'screenshot': 's' + str(index), 'id': item.id + 1, 'label': item.label, 'index': dict_index[item.label], 'text': this_text, 'sentences': non_empty_lines}, ignore_index=True)
- output_data = pd.concat([output_data, pd.DataFrame([{'screenshot': 's' + str(index), 'id': item.id + 1,
- 'label': item.label, 'index': dict_index[item.label],
- 'text': this_text, 'sentences': non_empty_lines}])])
-
- else:
- # output_data = output_data.append({'screenshot': 's' + str(index), 'id': item.id + 1, 'label': item.label, 'index': "None", 'text': "No information!", 'sentences': "None"},
- # ignore_index=True)
- output_data = pd.concat([output_data, pd.DataFrame([{'screenshot': 's' + str(index), 'id': item.id + 1,
- 'label': item.label, 'index': "None",
- 'text': "No information!", 'sentences': "None"}])])
- return time_cost_ic, time_cost_ts, output_data, board
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/optims.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/optims.py
deleted file mode 100644
index b466e38dc4ceba80ea54759ba608b7281c583bed..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/optims.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import math
-
-from video_llama.common.registry import registry
-
-
-@registry.register_lr_scheduler("linear_warmup_step_lr")
-class LinearWarmupStepLRScheduler:
- def __init__(
- self,
- optimizer,
- max_epoch,
- min_lr,
- init_lr,
- decay_rate=1,
- warmup_start_lr=-1,
- warmup_steps=0,
- **kwargs
- ):
- self.optimizer = optimizer
-
- self.max_epoch = max_epoch
- self.min_lr = min_lr
-
- self.decay_rate = decay_rate
-
- self.init_lr = init_lr
- self.warmup_steps = warmup_steps
- self.warmup_start_lr = warmup_start_lr if warmup_start_lr >= 0 else init_lr
-
- def step(self, cur_epoch, cur_step):
- if cur_epoch == 0:
- warmup_lr_schedule(
- step=cur_step,
- optimizer=self.optimizer,
- max_step=self.warmup_steps,
- init_lr=self.warmup_start_lr,
- max_lr=self.init_lr,
- )
- else:
- step_lr_schedule(
- epoch=cur_epoch,
- optimizer=self.optimizer,
- init_lr=self.init_lr,
- min_lr=self.min_lr,
- decay_rate=self.decay_rate,
- )
-
-
-@registry.register_lr_scheduler("linear_warmup_cosine_lr")
-class LinearWarmupCosineLRScheduler:
- def __init__(
- self,
- optimizer,
- max_epoch,
- iters_per_epoch,
- min_lr,
- init_lr,
- warmup_steps=0,
- warmup_start_lr=-1,
- **kwargs
- ):
- self.optimizer = optimizer
-
- self.max_epoch = max_epoch
- self.iters_per_epoch = iters_per_epoch
- self.min_lr = min_lr
-
- self.init_lr = init_lr
- self.warmup_steps = warmup_steps
- self.warmup_start_lr = warmup_start_lr if warmup_start_lr >= 0 else init_lr
-
- def step(self, cur_epoch, cur_step):
- total_cur_step = cur_epoch * self.iters_per_epoch + cur_step
- if total_cur_step < self.warmup_steps:
- warmup_lr_schedule(
- step=cur_step,
- optimizer=self.optimizer,
- max_step=self.warmup_steps,
- init_lr=self.warmup_start_lr,
- max_lr=self.init_lr,
- )
- else:
- cosine_lr_schedule(
- epoch=total_cur_step,
- optimizer=self.optimizer,
- max_epoch=self.max_epoch * self.iters_per_epoch,
- init_lr=self.init_lr,
- min_lr=self.min_lr,
- )
-
-
-def cosine_lr_schedule(optimizer, epoch, max_epoch, init_lr, min_lr):
- """Decay the learning rate"""
- lr = (init_lr - min_lr) * 0.5 * (
- 1.0 + math.cos(math.pi * epoch / max_epoch)
- ) + min_lr
- for param_group in optimizer.param_groups:
- param_group["lr"] = lr
-
-
-def warmup_lr_schedule(optimizer, step, max_step, init_lr, max_lr):
- """Warmup the learning rate"""
- lr = min(max_lr, init_lr + (max_lr - init_lr) * step / max(max_step, 1))
- for param_group in optimizer.param_groups:
- param_group["lr"] = lr
-
-
-def step_lr_schedule(optimizer, epoch, init_lr, min_lr, decay_rate):
- """Decay the learning rate"""
- lr = max(min_lr, init_lr * (decay_rate**epoch))
- for param_group in optimizer.param_groups:
- param_group["lr"] = lr
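-
-# Illustrative usage (hyperparameter values assumed): the cosine scheduler is stepped once
-# per iteration, warming up linearly for `warmup_steps` steps and then decaying with a
-# half-cosine from init_lr down to min_lr.
-#
-#   sched = LinearWarmupCosineLRScheduler(optimizer, max_epoch=10, iters_per_epoch=500,
-#                                         min_lr=1e-6, init_lr=3e-4, warmup_steps=1000)
-#   for epoch in range(10):
-#       for it in range(500):
-#           sched.step(cur_epoch=epoch, cur_step=it)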
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/__init__.py
deleted file mode 100644
index 7cfa792f744b7e0b4e28a536c0603f142ded6518..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/__init__.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-"""
-Classes Without Boilerplate
-"""
-
-from functools import partial
-from typing import Callable
-
-from . import converters, exceptions, filters, setters, validators
-from ._cmp import cmp_using
-from ._config import get_run_validators, set_run_validators
-from ._funcs import asdict, assoc, astuple, evolve, has, resolve_types
-from ._make import (
- NOTHING,
- Attribute,
- Factory,
- attrib,
- attrs,
- fields,
- fields_dict,
- make_class,
- validate,
-)
-from ._next_gen import define, field, frozen, mutable
-from ._version_info import VersionInfo
-
-
-s = attributes = attrs
-ib = attr = attrib
-dataclass = partial(attrs, auto_attribs=True) # happy Easter ;)
-
-
-class AttrsInstance:
- pass
-
-
-__all__ = [
- "Attribute",
- "AttrsInstance",
- "Factory",
- "NOTHING",
- "asdict",
- "assoc",
- "astuple",
- "attr",
- "attrib",
- "attributes",
- "attrs",
- "cmp_using",
- "converters",
- "define",
- "evolve",
- "exceptions",
- "field",
- "fields",
- "fields_dict",
- "filters",
- "frozen",
- "get_run_validators",
- "has",
- "ib",
- "make_class",
- "mutable",
- "resolve_types",
- "s",
- "set_run_validators",
- "setters",
- "validate",
- "validators",
-]
-
-
-def _make_getattr(mod_name: str) -> Callable:
- """
- Create a metadata proxy for packaging information that uses *mod_name* in
- its warnings and errors.
- """
-
- def __getattr__(name: str) -> str:
- dunder_to_metadata = {
- "__title__": "Name",
- "__copyright__": "",
- "__version__": "version",
- "__version_info__": "version",
- "__description__": "summary",
- "__uri__": "",
- "__url__": "",
- "__author__": "",
- "__email__": "",
- "__license__": "license",
- }
- if name not in dunder_to_metadata.keys():
- raise AttributeError(f"module {mod_name} has no attribute {name}")
-
- import sys
- import warnings
-
- if sys.version_info < (3, 8):
- from importlib_metadata import metadata
- else:
- from importlib.metadata import metadata
-
- if name != "__version_info__":
- warnings.warn(
- f"Accessing {mod_name}.{name} is deprecated and will be "
- "removed in a future release. Use importlib.metadata directly "
- "to query for attrs's packaging metadata.",
- DeprecationWarning,
- stacklevel=2,
- )
-
- meta = metadata("attrs")
- if name == "__license__":
- return "MIT"
- elif name == "__copyright__":
- return "Copyright (c) 2015 Hynek Schlawack"
- elif name in ("__uri__", "__url__"):
- return meta["Project-URL"].split(" ", 1)[-1]
- elif name == "__version_info__":
- return VersionInfo._from_version_string(meta["version"])
- elif name == "__author__":
- return meta["Author-email"].rsplit(" ", 1)[0]
- elif name == "__email__":
- return meta["Author-email"].rsplit("<", 1)[1][:-1]
-
- return meta[dunder_to_metadata[name]]
-
- return __getattr__
-
-
-__getattr__ = _make_getattr(__name__)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/__init__.py
deleted file mode 100644
index c049006bf2e58bed7786432e7ec84c8c8b40edf0..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .path import SVGPath, parse_path
-
-__all__ = ["SVGPath", "parse_path"]
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/reload.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/reload.py
deleted file mode 100644
index 55be19b2bf037811aa7341f6830ca2576c757b3a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/reload.py
+++ /dev/null
@@ -1,91 +0,0 @@
-"""
-
-Contains the functions that run when `gradio` is called from the command line. Specifically, allows
-
-$ gradio app.py, to run app.py in reload mode where any changes in the app.py file or Gradio library reload the demo.
-$ gradio app.py my_demo.app, to use variable names other than "demo"
-"""
-import inspect
-import os
-import sys
-from pathlib import Path
-
-from uvicorn import Config
-from uvicorn.supervisors import ChangeReload
-
-import gradio
-from gradio import networking, utils
-
-
-def _setup_config():
- args = sys.argv[1:]
- if len(args) == 0:
- raise ValueError("No file specified.")
- if len(args) == 1 or args[1].startswith("--"):
- demo_name = "demo.app"
- else:
- demo_name = args[1]
- if "." not in demo_name:
- print(
- "\nWARNING: As of Gradio 3.31, the parameter after the file path must be the name of the FastAPI app, not the Gradio demo. In most cases, this just means you should add '.app' after the name of your demo, e.g. 'demo' -> 'demo.app'."
- )
-
- original_path = args[0]
- abs_original_path = utils.abspath(original_path)
- path = os.path.normpath(original_path)
- path = path.replace("/", ".")
- path = path.replace("\\", ".")
- filename = os.path.splitext(path)[0]
-
- gradio_folder = Path(inspect.getfile(gradio)).parent
-
- port = networking.get_first_available_port(
- networking.INITIAL_PORT_VALUE,
- networking.INITIAL_PORT_VALUE + networking.TRY_NUM_PORTS,
- )
- print(
- f"\nLaunching in *reload mode* on: http://{networking.LOCALHOST_NAME}:{port} (Press CTRL+C to quit)\n"
- )
-
- gradio_app = f"{filename}:{demo_name}"
- message = "Watching:"
- message_change_count = 0
-
- watching_dirs = []
- if str(gradio_folder).strip():
- watching_dirs.append(gradio_folder)
- message += f" '{gradio_folder}'"
- message_change_count += 1
-
- abs_parent = abs_original_path.parent
- if str(abs_parent).strip():
- watching_dirs.append(abs_parent)
- if message_change_count == 1:
- message += ","
- message += f" '{abs_parent}'"
-
- print(message + "\n")
-
-    # guarantee access to the module of an app
- sys.path.insert(0, os.getcwd())
-
- # uvicorn.run blocks the execution (looping) which makes it hard to test
- return Config(
- gradio_app,
- reload=True,
- port=port,
- log_level="warning",
- reload_dirs=watching_dirs,
- )
-
-
-def main():
- # default execution pattern to start the server and watch changes
- config = _setup_config()
- server = networking.Server(config)
- sock = config.bind_socket()
- ChangeReload(config, target=server.run, sockets=[sock]).run()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py
deleted file mode 100644
index fdc0a259baf55a8e1c4aa4d103ff0edeb4989531..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""
-@Date: 2021/11/06
-@description:
-"""
-import cv2
-import numpy as np
-import torch
-import matplotlib.pyplot as plt
-
-from utils.conversion import depth2xyz
-
-
-def convert_img(value, h, need_nor=True, cmap=None):
- value = value.clone().detach().cpu().numpy()[None]
- if need_nor:
- value -= value.min()
- value /= value.max() - value.min()
- grad_img = value.repeat(int(h), axis=0)
-
- if cmap is None:
- grad_img = grad_img[..., np.newaxis].repeat(3, axis=-1)
- elif cmap == cv2.COLORMAP_PLASMA:
- grad_img = cv2.applyColorMap((grad_img * 255).astype(np.uint8), colormap=cmap)
- grad_img = grad_img[..., ::-1]
- grad_img = grad_img.astype(np.float) / 255.0
- elif cmap == 'HSV':
- grad_img = np.round(grad_img * 1000) / 1000.0
- grad_img = grad_img[..., np.newaxis].repeat(3, axis=-1)
- grad_img[..., 0] = grad_img[..., 0] * 180
- grad_img[..., 1] = 255
- grad_img[..., 2] = 255
- grad_img = grad_img.astype(np.uint8)
- grad_img = cv2.cvtColor(grad_img, cv2.COLOR_HSV2RGB)
- grad_img = grad_img.astype(np.float) / 255.0
- return grad_img
-
-
-def show_grad(depth, grad_conv, h=5, show=False):
- """
- :param h:
- :param depth: [patch_num]
- :param grad_conv:
- :param show:
- :return:
- """
-
- direction, angle, grad = get_all(depth[None], grad_conv)
-
- # depth_img = convert_img(depth, h)
- # angle_img = convert_img(angle[0], h)
- # grad_img = convert_img(grad[0], depth.shape[-1] // 4 - h * 2)
- depth_img = convert_img(depth, h, cmap=cv2.COLORMAP_PLASMA)
- angle_img = convert_img(angle[0], h, cmap='HSV')
-
- # vis_grad = grad[0] / grad[0].max() / 2 + 0.5
- grad_img = convert_img(grad[0], h)
- img = np.concatenate([depth_img, angle_img, grad_img], axis=0)
- if show:
- plt.imshow(img)
- plt.show()
- return img
-
-
-def get_grad(direction):
- """
- :param direction: [b patch_num]
- :return:[b patch_num]
- """
- a = torch.roll(direction, -1, dims=1) # xz[i+1]
- b = torch.roll(direction, 1, dims=1) # xz[i-1]
- grad = torch.acos(torch.clip(a[..., 0] * b[..., 0] + a[..., 1] * b[..., 1], -1+1e-6, 1-1e-6))
- return grad
-
-
-def get_grad2(angle, grad_conv):
- """
- :param angle: [b patch_num]
- :param grad_conv:
- :return:[b patch_num]
- """
- angle = torch.sin(angle)
- angle = angle + 1
-
- angle = torch.cat([angle[..., -1:], angle, angle[..., :1]], dim=-1)
- grad = grad_conv(angle[:, None]) # [b, patch_num] -> [b, 1, patch_num]
- # grad = torch.abs(grad)
- return grad.reshape(angle.shape[0], -1)
-
-
-def get_edge_angle(direction):
- """
- :param direction: [b patch_num 2]
- :return:
- """
- angle = torch.atan2(direction[..., 1], direction[..., 0])
- return angle
-
-
-def get_edge_direction(depth):
- xz = depth2xyz(depth)[..., ::2]
- direction = torch.roll(xz, -1, dims=1) - xz # direct[i] = xz[i+1] - xz[i]
- direction = direction / direction.norm(p=2, dim=-1)[..., None]
- return direction
-
-
-def get_all(depth, grad_conv):
- """
-
- :param grad_conv:
- :param depth: [b patch_num]
- :return:
- """
- direction = get_edge_direction(depth)
- angle = get_edge_angle(direction)
- # angle_grad = get_grad(direction)
- angle_grad = get_grad2(angle, grad_conv) # signed gradient
- return direction, angle, angle_grad
diff --git a/spaces/Detomo/ai-comic-generation/src/app/store/index.ts b/spaces/Detomo/ai-comic-generation/src/app/store/index.ts
deleted file mode 100644
index e85dd4d052996e9b4120bef57abb6c72c509d41a..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/store/index.ts
+++ /dev/null
@@ -1,203 +0,0 @@
-"use client"
-
-import { create } from "zustand"
-
-import { FontName } from "@/lib/fonts"
-import { Preset, PresetName, defaultPreset, getPreset, getRandomPreset } from "@/app/engine/presets"
-import { LayoutName, defaultLayout, getRandomLayoutName, getRandomLayoutNames } from "../layouts"
-import html2canvas from "html2canvas"
-import { RenderedScene } from "@/types"
-
-export const useStore = create<{
- prompt: string
- font: FontName
- preset: Preset
- nbFrames: number
- panels: string[]
- captions: string[]
-  upscaleQueue: Record<string, RenderedScene>
- showCaptions: boolean
-  renderedScenes: Record<string, RenderedScene>
- layout: LayoutName
- layouts: LayoutName[]
- zoomLevel: number
- page: HTMLDivElement
- isGeneratingStory: boolean
-  panelGenerationStatus: Record<string, boolean>
- isGeneratingText: boolean
- atLeastOnePanelIsBusy: boolean
- setRendered: (panelId: string, renderedScene: RenderedScene) => void
- addToUpscaleQueue: (panelId: string, renderedScene: RenderedScene) => void
- removeFromUpscaleQueue: (panelId: string) => void
- setPrompt: (prompt: string) => void
- setFont: (font: FontName) => void
- setPreset: (preset: Preset) => void
-  setNbFrames: (nbFrames: number) => void
-  setPanels: (panels: string[]) => void
- setShowCaptions: (showCaptions: boolean) => void
- setLayout: (layout: LayoutName) => void
- setLayouts: (layouts: LayoutName[]) => void
- setCaptions: (captions: string[]) => void
- setZoomLevel: (zoomLevel: number) => void
- setPage: (page: HTMLDivElement) => void
- setGeneratingStory: (isGeneratingStory: boolean) => void
- setGeneratingImages: (panelId: string, value: boolean) => void
- setGeneratingText: (isGeneratingText: boolean) => void
-  pageToImage: () => Promise<string>
-  download: () => Promise<void>
- generate: (prompt: string, presetName: PresetName, layoutName: LayoutName) => void
-}>((set, get) => ({
- prompt: "",
- font: "actionman",
- preset: getPreset(defaultPreset),
- nbFrames: 1,
- panels: [],
- captions: [],
-  upscaleQueue: {} as Record<string, RenderedScene>,
-  renderedScenes: {} as Record<string, RenderedScene>,
- showCaptions: false,
- layout: defaultLayout,
- layouts: [defaultLayout, defaultLayout],
- zoomLevel: 60,
- page: undefined as unknown as HTMLDivElement,
- isGeneratingStory: false,
- panelGenerationStatus: {},
- isGeneratingText: false,
- atLeastOnePanelIsBusy: false,
- setRendered: (panelId: string, renderedScene: RenderedScene) => {
- const { renderedScenes } = get()
- set({
- renderedScenes: {
- ...renderedScenes,
- [panelId]: renderedScene
- }
- })
- },
- addToUpscaleQueue: (panelId: string, renderedScene: RenderedScene) => {
- const { upscaleQueue } = get()
- set({
- upscaleQueue: {
- ...upscaleQueue,
- [panelId]: renderedScene
- },
- })
- },
- removeFromUpscaleQueue: (panelId: string) => {
- const upscaleQueue = { ...get().upscaleQueue }
- delete upscaleQueue[panelId]
- set({
- upscaleQueue,
- })
- },
- setPrompt: (prompt: string) => {
- const existingPrompt = get().prompt
- if (prompt === existingPrompt) { return }
- set({
- prompt,
- })
- },
- setFont: (font: FontName) => {
- const existingFont = get().font
- if (font === existingFont) { return }
- set({
- font,
- })
- },
- setPreset: (preset: Preset) => {
- const existingPreset = get().preset
- if (preset.label === existingPreset.label) { return }
- set({
- preset,
- })
- },
- setNbFrames: (nbFrames: number) => {
- const existingNbFrames = get().nbFrames
- if (nbFrames === existingNbFrames) { return }
- set({
- nbFrames,
- })
- },
- setPanels: (panels: string[]) => set({ panels }),
- setCaptions: (captions: string[]) => {
- set({
- captions,
- })
- },
- setShowCaptions: (showCaptions: boolean) => {
- set({
- showCaptions,
- })
- },
- setLayout: (layoutName: LayoutName) => {
- const layout = layoutName === "random"
- ? getRandomLayoutName()
- : layoutName
-
- set({
- layout,
- layouts: [layout, layout]
- })
- },
- setLayouts: (layouts: LayoutName[]) => set({ layouts }),
- setZoomLevel: (zoomLevel: number) => set({ zoomLevel }),
- setPage: (page: HTMLDivElement) => {
- if (!page) { return }
- set({ page })
- },
- setGeneratingStory: (isGeneratingStory: boolean) => set({ isGeneratingStory }),
- setGeneratingImages: (panelId: string, value: boolean) => {
-    const panelGenerationStatus: Record<string, boolean> = {
- ...get().panelGenerationStatus,
- [panelId]: value
- }
-
- const atLeastOnePanelIsBusy = Object.values(panelGenerationStatus).includes(true)
-
- set({
- panelGenerationStatus,
- atLeastOnePanelIsBusy
- })
- },
- setGeneratingText: (isGeneratingText: boolean) => set({ isGeneratingText }),
- pageToImage: async () => {
- const { page } = get()
- if (!page) { return "" }
-
-
- const canvas = await html2canvas(page)
- console.log("canvas:", canvas)
-
- const data = canvas.toDataURL('image/jpeg', 0.5)
- return data
- },
- download: async () => {
- const { pageToImage } = get()
- const data = await pageToImage()
-
- const link = document.createElement('a')
-
- if (typeof link.download === 'string') {
- link.href = data
- link.download = 'comic.jpg'
- document.body.appendChild(link)
- link.click()
- document.body.removeChild(link)
- } else {
- window.open(data)
- }
- },
- generate: (prompt: string, presetName: PresetName, layoutName: LayoutName) => {
- const layout = layoutName === "random"
- ? getRandomLayoutName()
- : layoutName
- set({
- prompt,
- panels: [],
- captions: [],
- preset: presetName === "random"
- ? getRandomPreset()
- : getPreset(presetName),
- layout,
- layouts: [layout, layout],
- })
- }
-}))
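-
-// Illustrative usage in a React component (preset and layout names are assumptions):
-//   const prompt = useStore(state => state.prompt)
-//   const generate = useStore(state => state.generate)
-//   generate("a cat lost in space", "japanese_manga", "random")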
diff --git a/spaces/DiViorg/categories_error_analysis/README.md b/spaces/DiViorg/categories_error_analysis/README.md
deleted file mode 100644
index 0d0eebf16f8e02c0bb00be4304cac9b1211f19d8..0000000000000000000000000000000000000000
--- a/spaces/DiViorg/categories_error_analysis/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Categories Error Analysis
-emoji: 🐛
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ECCV2022/bytetrack/yolox/utils/__init__.py b/spaces/ECCV2022/bytetrack/yolox/utils/__init__.py
deleted file mode 100644
index a268c1a4538ce568c8f9ef1c0d10511fdac34be1..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/utils/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.
-
-from .allreduce_norm import *
-from .boxes import *
-from .checkpoint import load_ckpt, save_checkpoint
-from .demo_utils import *
-from .dist import *
-from .ema import ModelEMA
-from .logger import setup_logger
-from .lr_scheduler import LRScheduler
-from .metric import *
-from .model_utils import *
-from .setup_env import *
-from .visualize import *
diff --git a/spaces/ECCV2022/storydalle/dalle/models/tokenizer.py b/spaces/ECCV2022/storydalle/dalle/models/tokenizer.py
deleted file mode 100644
index 1187abc02d364d414b86cddf2f77180ece688197..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/storydalle/dalle/models/tokenizer.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# ------------------------------------------------------------------------------------
-# Minimal DALL-E
-# Copyright (c) 2021 KakaoBrain. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------
-
-import os
-from functools import partial
-from tokenizers import CharBPETokenizer
-
-
-def build_tokenizer(path: str,
- context_length: int = 64,
- *args,
- **kwargs):
- try:
- from_file = partial(CharBPETokenizer.from_file,
- vocab_filename=os.path.join(path, 'bpe-16k-vocab.json'),
- merges_filename=os.path.join(path, 'bpe-16k-merges.txt'),
- unk_token='[UNK]')
- tokenizer = from_file(*args, **kwargs)
- except:
- from_file = partial(CharBPETokenizer.from_file,
- vocab_filename=os.path.join(path, 'vocab.json'),
- merges_filename=os.path.join(path, 'merges.txt'),
- unk_token='[UNK]')
- tokenizer = from_file(*args, **kwargs)
-
- # tokenizer = from_file(*args, **kwargs)
- tokenizer.add_special_tokens(['[PAD]'])
- tokenizer.enable_padding(length=context_length,
- pad_id=tokenizer.token_to_id('[PAD]'))
- tokenizer.enable_truncation(max_length=context_length)
- print(f'{path} successfully restored..')
- return tokenizer
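-
-# Illustrative usage (assumes the vocab/merges files exist under `path`):
-#   tokenizer = build_tokenizer("path/to/bpe", context_length=64)
-#   ids = tokenizer.encode("a photo of a cat").ids   # padded/truncated to length 64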
diff --git a/spaces/Eddycrack864/Applio-Inference/demucs/pretrained.py b/spaces/Eddycrack864/Applio-Inference/demucs/pretrained.py
deleted file mode 100644
index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/demucs/pretrained.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import logging
-
-from diffq import DiffQuantizer
-import torch.hub
-
-from .model import Demucs
-from .tasnet import ConvTasNet
-from .utils import set_state
-
-logger = logging.getLogger(__name__)
-ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/"
-
-PRETRAINED_MODELS = {
- 'demucs': 'e07c671f',
- 'demucs48_hq': '28a1282c',
- 'demucs_extra': '3646af93',
- 'demucs_quantized': '07afea75',
- 'tasnet': 'beb46fac',
- 'tasnet_extra': 'df3777b2',
- 'demucs_unittest': '09ebc15f',
-}
-
-SOURCES = ["drums", "bass", "other", "vocals"]
-
-
-def get_url(name):
- sig = PRETRAINED_MODELS[name]
- return ROOT + name + "-" + sig[:8] + ".th"
-
-
-def is_pretrained(name):
- return name in PRETRAINED_MODELS
-
-
-def load_pretrained(name):
- if name == "demucs":
- return demucs(pretrained=True)
- elif name == "demucs48_hq":
- return demucs(pretrained=True, hq=True, channels=48)
- elif name == "demucs_extra":
- return demucs(pretrained=True, extra=True)
- elif name == "demucs_quantized":
- return demucs(pretrained=True, quantized=True)
- elif name == "demucs_unittest":
- return demucs_unittest(pretrained=True)
- elif name == "tasnet":
- return tasnet(pretrained=True)
- elif name == "tasnet_extra":
- return tasnet(pretrained=True, extra=True)
- else:
- raise ValueError(f"Invalid pretrained name {name}")
-
-
-def _load_state(name, model, quantizer=None):
- url = get_url(name)
- state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True)
- set_state(model, quantizer, state)
- if quantizer:
- quantizer.detach()
-
-
-def demucs_unittest(pretrained=True):
- model = Demucs(channels=4, sources=SOURCES)
- if pretrained:
- _load_state('demucs_unittest', model)
- return model
-
-
-def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64):
- if not pretrained and (extra or quantized or hq):
-        raise ValueError("if extra, quantized, or hq is True, pretrained must be True.")
- model = Demucs(sources=SOURCES, channels=channels)
- if pretrained:
- name = 'demucs'
- if channels != 64:
- name += str(channels)
- quantizer = None
- if sum([extra, quantized, hq]) > 1:
-            raise ValueError("Only one of extra, quantized, and hq can be True.")
- if quantized:
- quantizer = DiffQuantizer(model, group_size=8, min_size=1)
- name += '_quantized'
- if extra:
- name += '_extra'
- if hq:
- name += '_hq'
- _load_state(name, model, quantizer)
- return model
-
-
-def tasnet(pretrained=True, extra=False):
- if not pretrained and extra:
- raise ValueError("if extra is True, pretrained must be True.")
- model = ConvTasNet(X=10, sources=SOURCES)
- if pretrained:
- name = 'tasnet'
- if extra:
- name = 'tasnet_extra'
- _load_state(name, model)
- return model
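-
-
-# A minimal usage sketch (not a definitive entry point); model names come from PRETRAINED_MODELS
-# and weights are fetched from ROOT on first use via torch.hub:
-#
-#   if is_pretrained("demucs_quantized"):
-#       model = load_pretrained("demucs_quantized")
-#   # the model separates a mixture into the four SOURCES: drums, bass, other, vocals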
diff --git a/spaces/Ekimetrics/climate-question-answering/climateqa/llm.py b/spaces/Ekimetrics/climate-question-answering/climateqa/llm.py
deleted file mode 100644
index 98f5d509ebb72a7bd83e15a47ffa100bf2d9f8eb..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/climate-question-answering/climateqa/llm.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from langchain.chat_models import AzureChatOpenAI
-import os
-# LOAD ENVIRONMENT VARIABLES
-try:
- from dotenv import load_dotenv
- load_dotenv()
-except Exception:
-    # python-dotenv is optional; fall back to the process environment if it is unavailable
-    pass
-
-
-def get_llm(max_tokens=1024, temperature=0.0, verbose=True, streaming=False, **kwargs):
-
- llm = AzureChatOpenAI(
- openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"],
- openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
- deployment_name=os.environ["AZURE_OPENAI_API_DEPLOYMENT_NAME"],
- openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- openai_api_type = "azure",
- max_tokens = max_tokens,
- temperature = temperature,
- verbose = verbose,
- streaming = streaming,
- **kwargs,
- )
- return llm
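-
-
-# A minimal usage sketch, assuming the four AZURE_OPENAI_* environment variables read above are set:
-#
-#   llm = get_llm(max_tokens=512, temperature=0.2)
-#   # `llm` is a LangChain AzureChatOpenAI chat model and can be plugged into chains as usual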
diff --git a/spaces/ElainaFanBoy/MusicGen/tests/utils/__init__.py b/spaces/ElainaFanBoy/MusicGen/tests/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/tests/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/backups_test.py b/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/backups_test.py
deleted file mode 100644
index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/backups_test.py
+++ /dev/null
@@ -1,138 +0,0 @@
-
-import os
-import shutil
-import hashlib
-import time
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path
- LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
- WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
- weights_exist = False
- files_to_copy = []
- weights_to_copy = []
-
-    def handle_files(root, files, is_weight_files=False):
-        nonlocal weights_exist  # without this, the assignment below would only set a local variable
-        for filename in files:
-            filepath = os.path.join(root, filename)
-            if filename.endswith('.pth') and is_weight_files:
-                weights_exist = True
- backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- else:
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created folder: {backup_folderpath}', flush=True)
- if is_weight_files:
- weights_to_copy.append((filepath, backup_filepath))
- else:
- files_to_copy.append((filepath, backup_filepath))
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')):
- handle_files(root, files)
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- handle_files(root, files, True)
-
- # Copy files in batches
- total_files = len(files_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(files_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
-        # Report progress every 5 seconds or every 100 files, whichever comes first
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="")
- start_time = time.time()
- print(f'\nImported {len(files_to_copy)} files from Google Drive backup')
-
- # Copy weights in batches
- total_weights = len(weights_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(weights_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
-        # Report progress every 5 seconds or every 100 files, whichever comes first
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="")
- start_time = time.time()
- if weights_exist:
- print(f'\nImported {len(weights_to_copy)} weight files')
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("\nNo weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def backup_files():
- print("\n Starting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
-    except (FileNotFoundError, ValueError):  # no usable record of previous backups
- last_backup_timestamps = {}
-
- while True:
- updated = False
- files_to_copy = []
- files_to_delete = []
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
-
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
-
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- files_to_delete.append(backup_filepath) # add to list of files to delete
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- # Copy files in batches
- if files_to_copy:
- for source, dest in files_to_copy:
- shutil.copy2(source, dest)
- print(f'Copied or updated {len(files_to_copy)} files')
-
- # Delete files in batches
- if files_to_delete:
- for file in files_to_delete:
- os.remove(file)
- print(f'Deleted {len(files_to_delete)} files')
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
- time.sleep(15) # wait for 15 seconds before checking again
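-
-
-# A minimal usage sketch for Colab (the mount call is an assumption about the intended environment):
-#
-#   from google.colab import drive; drive.mount('/content/drive')
-#   import_google_drive_backup()   # pull an existing backup into the local logs/weights folders
-#   backup_files()                 # loop forever, mirroring local changes back to Drive
-#
-# Note: backup_files() also calls copy_weights_folder_to_drive(), which is defined elsewhere in the repo.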
diff --git a/spaces/EronSamez/RVC_HFmeu/tools/torchgate/utils.py b/spaces/EronSamez/RVC_HFmeu/tools/torchgate/utils.py
deleted file mode 100644
index dc97d45a399c112c76e80cdd8c73cfebaf3ef6ad..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/tools/torchgate/utils.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import torch
-from torch.types import Number
-
-
-@torch.no_grad()
-def amp_to_db(x: torch.Tensor, eps=torch.finfo(torch.float64).eps, top_db=40) -> torch.Tensor:
- """
- Convert the input tensor from amplitude to decibel scale.
-
- Arguments:
- x {[torch.Tensor]} -- [Input tensor.]
-
- Keyword Arguments:
- eps {[float]} -- [Small value to avoid numerical instability.]
- (default: {torch.finfo(torch.float64).eps})
- top_db {[float]} -- [threshold the output at ``top_db`` below the peak]
-             (default: {40})
-
- Returns:
- [torch.Tensor] -- [Output tensor in decibel scale.]
- """
- x_db = 20 * torch.log10(x.abs() + eps)
- return torch.max(x_db, (x_db.max(-1).values - top_db).unsqueeze(-1))
-
-
-@torch.no_grad()
-def temperature_sigmoid(x: torch.Tensor, x0: float, temp_coeff: float) -> torch.Tensor:
- """
- Apply a sigmoid function with temperature scaling.
-
- Arguments:
- x {[torch.Tensor]} -- [Input tensor.]
- x0 {[float]} -- [Parameter that controls the threshold of the sigmoid.]
- temp_coeff {[float]} -- [Parameter that controls the slope of the sigmoid.]
-
- Returns:
- [torch.Tensor] -- [Output tensor after applying the sigmoid with temperature scaling.]
- """
- return torch.sigmoid((x - x0) / temp_coeff)
-
-
-@torch.no_grad()
-def linspace(start: Number, stop: Number, num: int = 50, endpoint: bool = True, **kwargs) -> torch.Tensor:
- """
- Generate a linearly spaced 1-D tensor.
-
- Arguments:
- start {[Number]} -- [The starting value of the sequence.]
- stop {[Number]} -- [The end value of the sequence, unless `endpoint` is set to False.
- In that case, the sequence consists of all but the last of ``num + 1``
- evenly spaced samples, so that `stop` is excluded. Note that the step
- size changes when `endpoint` is False.]
-
- Keyword Arguments:
- num {[int]} -- [Number of samples to generate. Default is 50. Must be non-negative.]
- endpoint {[bool]} -- [If True, `stop` is the last sample. Otherwise, it is not included.
- Default is True.]
- **kwargs -- [Additional arguments to be passed to the underlying PyTorch `linspace` function.]
-
- Returns:
- [torch.Tensor] -- [1-D tensor of `num` equally spaced samples from `start` to `stop`.]
- """
- if endpoint:
- return torch.linspace(start, stop, num, **kwargs)
- else:
- return torch.linspace(start, stop, num + 1, **kwargs)[:-1]
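-
-
-if __name__ == "__main__":
-    # Minimal sanity-check sketch of the helpers above (illustrative values, not part of any test suite).
-    sig = torch.rand(2, 16000)                                  # dummy amplitudes in [0, 1)
-    db = amp_to_db(sig)                                         # per-row values clamped to within 40 dB of the peak
-    mask = temperature_sigmoid(db, x0=-20.0, temp_coeff=5.0)    # soft gate with values in (0, 1)
-    grid = linspace(0.0, 1.0, num=5, endpoint=False)            # tensor([0.0, 0.2, 0.4, 0.6, 0.8])
-    print(db.shape, mask.shape, grid)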
diff --git a/spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/model.py b/spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/model.py
deleted file mode 100644
index d67e0b04132f6dea5e5b7dbcd9fa1ec79deae42c..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/model.py
+++ /dev/null
@@ -1,585 +0,0 @@
-# Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang)
-#
-# See LICENSE for clarification regarding multiple authors
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from huggingface_hub import hf_hub_download
-from functools import lru_cache
-import os
-
-os.system(
- "cp -v /home/user/.local/lib/python3.8/site-packages/k2/lib/*.so /home/user/.local/lib/python3.8/site-packages/sherpa/lib/"
-)
-
-import k2
-import sherpa
-
-
-sample_rate = 16000
-
-
-@lru_cache(maxsize=30)
-def get_pretrained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-) -> sherpa.OfflineRecognizer:
- if repo_id in chinese_models:
- return chinese_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- elif repo_id in english_models:
- return english_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- elif repo_id in chinese_english_mixed_models:
- return chinese_english_mixed_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- elif repo_id in tibetan_models:
- return tibetan_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- elif repo_id in arabic_models:
- return arabic_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- elif repo_id in german_models:
- return german_models[repo_id](
- repo_id, decoding_method=decoding_method, num_active_paths=num_active_paths
- )
- else:
- raise ValueError(f"Unsupported repo_id: {repo_id}")
-
-
-def _get_nn_model_filename(
- repo_id: str,
- filename: str,
- subfolder: str = "exp",
-) -> str:
- nn_model_filename = hf_hub_download(
- repo_id=repo_id,
- filename=filename,
- subfolder=subfolder,
- )
- return nn_model_filename
-
-
-def _get_bpe_model_filename(
- repo_id: str,
- filename: str = "bpe.model",
- subfolder: str = "data/lang_bpe_500",
-) -> str:
- bpe_model_filename = hf_hub_download(
- repo_id=repo_id,
- filename=filename,
- subfolder=subfolder,
- )
- return bpe_model_filename
-
-
-def _get_token_filename(
- repo_id: str,
- filename: str = "tokens.txt",
- subfolder: str = "data/lang_char",
-) -> str:
- token_filename = hf_hub_download(
- repo_id=repo_id,
- filename=filename,
- subfolder=subfolder,
- )
- return token_filename
-
-
-@lru_cache(maxsize=10)
-def _get_aishell2_pretrained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-) -> sherpa.OfflineRecognizer:
- assert repo_id in [
- # context-size 1
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12", # noqa
- # context-size 2
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12", # noqa
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit.pt",
- )
- tokens = _get_token_filename(repo_id=repo_id)
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_gigaspeech_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-) -> sherpa.OfflineRecognizer:
- assert repo_id in [
- "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit-iter-3488000-avg-20.pt",
- )
- tokens = "./giga-tokens.txt"
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_librispeech_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-) -> sherpa.OfflineRecognizer:
- assert repo_id in [
- "WeijiZhuang/icefall-asr-librispeech-pruned-transducer-stateless8-2022-12-02", # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13", # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11", # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless8-2022-11-14", # noqa
- ], repo_id
-
- filename = "cpu_jit.pt"
- if (
- repo_id
- == "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11"
- ):
- filename = "cpu_jit-torch-1.10.0.pt"
-
- if (
- repo_id
- == "WeijiZhuang/icefall-asr-librispeech-pruned-transducer-stateless8-2022-12-02"
- ):
- filename = "cpu_jit-torch-1.10.pt"
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename=filename,
- )
- tokens = _get_token_filename(repo_id=repo_id, subfolder="data/lang_bpe_500")
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_wenetspeech_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit_epoch_10_avg_2_torch_1.7.1.pt",
- )
- tokens = _get_token_filename(repo_id=repo_id)
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_chinese_english_mixed_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "luomingshuang/icefall_asr_tal-csasr_pruned_transducer_stateless5",
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh",
- ], repo_id
-
- if repo_id == "luomingshuang/icefall_asr_tal-csasr_pruned_transducer_stateless5":
- filename = "cpu_jit.pt"
- subfolder = "data/lang_char"
- elif repo_id == "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh":
- filename = "cpu_jit-epoch-11-avg-1.pt"
- subfolder = "data/lang_char_bpe"
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename=filename,
- )
- tokens = _get_token_filename(repo_id=repo_id, subfolder=subfolder)
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_alimeeting_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "luomingshuang/icefall_asr_alimeeting_pruned_transducer_stateless2",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit_torch_1.7.1.pt",
- )
- tokens = _get_token_filename(repo_id=repo_id)
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_wenet_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "csukuangfj/wenet-chinese-model",
- "csukuangfj/wenet-english-model",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="final.zip",
- subfolder=".",
- )
- tokens = _get_token_filename(
- repo_id=repo_id,
- filename="units.txt",
- subfolder=".",
- )
-
- feat_config = sherpa.FeatureConfig(normalize_samples=False)
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_aidatatang_200zh_pretrained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit_torch.1.7.1.pt",
- )
- tokens = _get_token_filename(repo_id=repo_id)
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_tibetan_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02",
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless5-2022-11-29",
- ], repo_id
-
- filename = "cpu_jit.pt"
- if (
- repo_id
- == "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless5-2022-11-29"
- ):
- filename = "cpu_jit-epoch-28-avg-23-torch-1.10.0.pt"
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename=filename,
- )
-
- tokens = _get_token_filename(repo_id=repo_id, subfolder="data/lang_bpe_500")
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_arabic_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="cpu_jit.pt",
- )
-
- tokens = _get_token_filename(repo_id=repo_id, subfolder="data/lang_bpe_5000")
-
- feat_config = sherpa.FeatureConfig()
- feat_config.fbank_opts.frame_opts.samp_freq = sample_rate
- feat_config.fbank_opts.mel_opts.num_bins = 80
- feat_config.fbank_opts.frame_opts.dither = 0
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- feat_config=feat_config,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-@lru_cache(maxsize=10)
-def _get_german_pre_trained_model(
- repo_id: str,
- decoding_method: str,
- num_active_paths: int,
-):
- assert repo_id in [
- "csukuangfj/wav2vec2.0-torchaudio",
- ], repo_id
-
- nn_model = _get_nn_model_filename(
- repo_id=repo_id,
- filename="voxpopuli_asr_base_10k_de.pt",
- subfolder=".",
- )
-
- tokens = _get_token_filename(
- repo_id=repo_id,
- filename="tokens-de.txt",
- subfolder=".",
- )
-
- config = sherpa.OfflineRecognizerConfig(
- nn_model=nn_model,
- tokens=tokens,
- use_gpu=False,
- decoding_method=decoding_method,
- num_active_paths=num_active_paths,
- )
-
- recognizer = sherpa.OfflineRecognizer(config)
-
- return recognizer
-
-
-chinese_models = {
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2": _get_wenetspeech_pre_trained_model, # noqa
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12": _get_aishell2_pretrained_model, # noqa
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12": _get_aishell2_pretrained_model, # noqa
-    "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2": _get_aidatatang_200zh_pretrained_model,  # noqa
- "luomingshuang/icefall_asr_alimeeting_pruned_transducer_stateless2": _get_alimeeting_pre_trained_model, # noqa
- "csukuangfj/wenet-chinese-model": _get_wenet_model,
-}
-
-english_models = {
- "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2": _get_gigaspeech_pre_trained_model, # noqa
- "WeijiZhuang/icefall-asr-librispeech-pruned-transducer-stateless8-2022-12-02": _get_librispeech_pre_trained_model, # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless8-2022-11-14": _get_librispeech_pre_trained_model, # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11": _get_librispeech_pre_trained_model, # noqa
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13": _get_librispeech_pre_trained_model, # noqa
- "csukuangfj/wenet-english-model": _get_wenet_model,
-}
-
-chinese_english_mixed_models = {
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh": _get_chinese_english_mixed_model,
- "luomingshuang/icefall_asr_tal-csasr_pruned_transducer_stateless5": _get_chinese_english_mixed_model, # noqa
-}
-
-tibetan_models = {
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02": _get_tibetan_pre_trained_model, # noqa
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless5-2022-11-29": _get_tibetan_pre_trained_model, # noqa
-}
-
-arabic_models = {
- "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06": _get_arabic_pre_trained_model, # noqa
-}
-
-german_models = {
- "csukuangfj/wav2vec2.0-torchaudio": _get_german_pre_trained_model,
-}
-
-all_models = {
- **chinese_models,
- **english_models,
- **chinese_english_mixed_models,
- **tibetan_models,
- **arabic_models,
- **german_models,
-}
-
-language_to_models = {
- "Chinese": list(chinese_models.keys()),
- "English": list(english_models.keys()),
- "Chinese+English": list(chinese_english_mixed_models.keys()),
- "Tibetan": list(tibetan_models.keys()),
- "Arabic": list(arabic_models.keys()),
- "German": list(german_models.keys()),
-}
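-
-
-# A minimal usage sketch (decoding parameters are illustrative): any key of `all_models`
-# can be passed to get_pretrained_model, and the resulting recognizer is cached by lru_cache:
-#
-#   recognizer = get_pretrained_model(
-#       "csukuangfj/wenet-english-model",
-#       decoding_method="greedy_search",
-#       num_active_paths=4,
-#   )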
diff --git a/spaces/EuroPython2022/illustrated-lyrics-generator/models.py b/spaces/EuroPython2022/illustrated-lyrics-generator/models.py
deleted file mode 100644
index abbf46a33f8546bdd0b98cccbf04acbb43a8d28f..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/illustrated-lyrics-generator/models.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# Source: https://huggingface.co/huggan/fastgan-few-shot-fauvism-still-life
-import torch
-import torch.nn as nn
-
-from typing import Any, Tuple, Union
-
-from utils import (
- ImageType,
- crop_image_part,
-)
-
-from layers import (
- SpectralConv2d,
- InitLayer,
- SLEBlock,
- UpsampleBlockT1,
- UpsampleBlockT2,
- DownsampleBlockT1,
- DownsampleBlockT2,
- Decoder,
-)
-
-from huggan.pytorch.huggan_mixin import HugGANModelHubMixin
-
-
-class Generator(nn.Module, HugGANModelHubMixin):
-
- def __init__(self, in_channels: int,
- out_channels: int):
- super().__init__()
-
- self._channels = {
- 4: 1024,
- 8: 512,
- 16: 256,
- 32: 128,
- 64: 128,
- 128: 64,
- 256: 32,
- 512: 16,
- 1024: 8,
- }
-
- self._init = InitLayer(
- in_channels=in_channels,
- out_channels=self._channels[4],
- )
-
- self._upsample_8 = UpsampleBlockT2(in_channels=self._channels[4], out_channels=self._channels[8] )
- self._upsample_16 = UpsampleBlockT1(in_channels=self._channels[8], out_channels=self._channels[16] )
- self._upsample_32 = UpsampleBlockT2(in_channels=self._channels[16], out_channels=self._channels[32] )
- self._upsample_64 = UpsampleBlockT1(in_channels=self._channels[32], out_channels=self._channels[64] )
- self._upsample_128 = UpsampleBlockT2(in_channels=self._channels[64], out_channels=self._channels[128] )
- self._upsample_256 = UpsampleBlockT1(in_channels=self._channels[128], out_channels=self._channels[256] )
- self._upsample_512 = UpsampleBlockT2(in_channels=self._channels[256], out_channels=self._channels[512] )
- self._upsample_1024 = UpsampleBlockT1(in_channels=self._channels[512], out_channels=self._channels[1024])
-
- self._sle_64 = SLEBlock(in_channels=self._channels[4], out_channels=self._channels[64] )
- self._sle_128 = SLEBlock(in_channels=self._channels[8], out_channels=self._channels[128])
- self._sle_256 = SLEBlock(in_channels=self._channels[16], out_channels=self._channels[256])
- self._sle_512 = SLEBlock(in_channels=self._channels[32], out_channels=self._channels[512])
-
- self._out_128 = nn.Sequential(
- SpectralConv2d(
- in_channels=self._channels[128],
- out_channels=out_channels,
- kernel_size=1,
- stride=1,
- padding='same',
- bias=False,
- ),
- nn.Tanh(),
- )
-
- self._out_1024 = nn.Sequential(
- SpectralConv2d(
- in_channels=self._channels[1024],
- out_channels=out_channels,
- kernel_size=3,
- stride=1,
- padding='same',
- bias=False,
- ),
- nn.Tanh(),
- )
-
- def forward(self, input: torch.Tensor) -> \
- Tuple[torch.Tensor, torch.Tensor]:
- size_4 = self._init(input)
- size_8 = self._upsample_8(size_4)
- size_16 = self._upsample_16(size_8)
- size_32 = self._upsample_32(size_16)
-
- size_64 = self._sle_64 (size_4, self._upsample_64 (size_32) )
- size_128 = self._sle_128(size_8, self._upsample_128(size_64) )
- size_256 = self._sle_256(size_16, self._upsample_256(size_128))
- size_512 = self._sle_512(size_32, self._upsample_512(size_256))
-
- size_1024 = self._upsample_1024(size_512)
-
- out_128 = self._out_128 (size_128)
- out_1024 = self._out_1024(size_1024)
- return out_1024, out_128
-
-
-class Discriminrator(nn.Module, HugGANModelHubMixin):
-
- def __init__(self, in_channels: int):
- super().__init__()
-
- self._channels = {
- 4: 1024,
- 8: 512,
- 16: 256,
- 32: 128,
- 64: 128,
- 128: 64,
- 256: 32,
- 512: 16,
- 1024: 8,
- }
-
- self._init = nn.Sequential(
- SpectralConv2d(
- in_channels=in_channels,
- out_channels=self._channels[1024],
- kernel_size=4,
- stride=2,
- padding=1,
- bias=False,
- ),
- nn.LeakyReLU(negative_slope=0.2),
- SpectralConv2d(
- in_channels=self._channels[1024],
- out_channels=self._channels[512],
- kernel_size=4,
- stride=2,
- padding=1,
- bias=False,
- ),
- nn.BatchNorm2d(num_features=self._channels[512]),
- nn.LeakyReLU(negative_slope=0.2),
- )
-
- self._downsample_256 = DownsampleBlockT2(in_channels=self._channels[512], out_channels=self._channels[256])
- self._downsample_128 = DownsampleBlockT2(in_channels=self._channels[256], out_channels=self._channels[128])
- self._downsample_64 = DownsampleBlockT2(in_channels=self._channels[128], out_channels=self._channels[64] )
- self._downsample_32 = DownsampleBlockT2(in_channels=self._channels[64], out_channels=self._channels[32] )
- self._downsample_16 = DownsampleBlockT2(in_channels=self._channels[32], out_channels=self._channels[16] )
-
- self._sle_64 = SLEBlock(in_channels=self._channels[512], out_channels=self._channels[64])
- self._sle_32 = SLEBlock(in_channels=self._channels[256], out_channels=self._channels[32])
- self._sle_16 = SLEBlock(in_channels=self._channels[128], out_channels=self._channels[16])
-
- self._small_track = nn.Sequential(
- SpectralConv2d(
- in_channels=in_channels,
- out_channels=self._channels[256],
- kernel_size=4,
- stride=2,
- padding=1,
- bias=False,
- ),
- nn.LeakyReLU(negative_slope=0.2),
- DownsampleBlockT1(in_channels=self._channels[256], out_channels=self._channels[128]),
- DownsampleBlockT1(in_channels=self._channels[128], out_channels=self._channels[64] ),
- DownsampleBlockT1(in_channels=self._channels[64], out_channels=self._channels[32] ),
- )
-
- self._features_large = nn.Sequential(
- SpectralConv2d(
- in_channels=self._channels[16] ,
- out_channels=self._channels[8],
- kernel_size=1,
- stride=1,
- padding=0,
- bias=False,
- ),
- nn.BatchNorm2d(num_features=self._channels[8]),
- nn.LeakyReLU(negative_slope=0.2),
- SpectralConv2d(
- in_channels=self._channels[8],
- out_channels=1,
- kernel_size=4,
- stride=1,
- padding=0,
- bias=False,
- )
- )
-
- self._features_small = nn.Sequential(
- SpectralConv2d(
- in_channels=self._channels[32],
- out_channels=1,
- kernel_size=4,
- stride=1,
- padding=0,
- bias=False,
- ),
- )
-
- self._decoder_large = Decoder(in_channels=self._channels[16], out_channels=3)
- self._decoder_small = Decoder(in_channels=self._channels[32], out_channels=3)
- self._decoder_piece = Decoder(in_channels=self._channels[32], out_channels=3)
-
- def forward(self, images_1024: torch.Tensor,
- images_128: torch.Tensor,
- image_type: ImageType) -> \
- Union[
- torch.Tensor,
- Tuple[torch.Tensor, Tuple[Any, Any, Any]]
- ]:
- # large track
-
- down_512 = self._init(images_1024)
- down_256 = self._downsample_256(down_512)
- down_128 = self._downsample_128(down_256)
-
- down_64 = self._downsample_64(down_128)
- down_64 = self._sle_64(down_512, down_64)
-
- down_32 = self._downsample_32(down_64)
- down_32 = self._sle_32(down_256, down_32)
-
- down_16 = self._downsample_16(down_32)
- down_16 = self._sle_16(down_128, down_16)
-
- # small track
-
- down_small = self._small_track(images_128)
-
- # features
-
- features_large = self._features_large(down_16).view(-1)
- features_small = self._features_small(down_small).view(-1)
- features = torch.cat([features_large, features_small], dim=0)
-
- # decoder
-
- if image_type != ImageType.FAKE:
- dec_large = self._decoder_large(down_16)
- dec_small = self._decoder_small(down_small)
- dec_piece = self._decoder_piece(crop_image_part(down_32, image_type))
- return features, (dec_large, dec_small, dec_piece)
-
- return features
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py
deleted file mode 100644
index 893bebba496c04e9364bdcea3caef651e3d426d0..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/recog_datasets/seg_toy_data.py',
- '../../_base_/recog_models/seg.py',
- '../../_base_/recog_pipelines/seg_pipeline.py',
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-# optimizer
-optimizer = dict(type='Adam', lr=1e-4)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(policy='step', step=[3, 4])
-total_epochs = 5
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=1,
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
-
-find_unused_parameters = True
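-
-# A typical way to launch a config like this with MMOCR's standard training entry point
-# (the work-dir path is illustrative):
-#   python tools/train.py configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py --work-dir work_dirs/seg_toy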
diff --git a/spaces/FineLong/stabilityai-stable-diffusion-2/README.md b/spaces/FineLong/stabilityai-stable-diffusion-2/README.md
deleted file mode 100644
index fa5a6e75e2257f29922ff8a4959758db19ec861c..0000000000000000000000000000000000000000
--- a/spaces/FineLong/stabilityai-stable-diffusion-2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2
-emoji: 🏢
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: openrail++
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Flux9665/IMS-Toucan/Layers/MultiLayeredConv1d.py b/spaces/Flux9665/IMS-Toucan/Layers/MultiLayeredConv1d.py
deleted file mode 100644
index f2de4a06a06d891fbaca726959b0f0d34d93d7cc..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/IMS-Toucan/Layers/MultiLayeredConv1d.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# Copyright 2019 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-# Adapted by Florian Lux 2021
-
-"""
-Layer modules for FFT block in FastSpeech (Feed-forward Transformer).
-"""
-
-import torch
-
-
-class MultiLayeredConv1d(torch.nn.Module):
- """
- Multi-layered conv1d for Transformer block.
-
- This is a module of multi-layered conv1d designed
- to replace positionwise feed-forward network
- in Transformer block, which is introduced in
- `FastSpeech: Fast, Robust and Controllable Text to Speech`_.
-
- .. _`FastSpeech: Fast, Robust and Controllable Text to Speech`:
- https://arxiv.org/pdf/1905.09263.pdf
- """
-
- def __init__(self, in_chans, hidden_chans, kernel_size, dropout_rate):
- """
- Initialize MultiLayeredConv1d module.
-
- Args:
- in_chans (int): Number of input channels.
- hidden_chans (int): Number of hidden channels.
- kernel_size (int): Kernel size of conv1d.
- dropout_rate (float): Dropout rate.
- """
- super(MultiLayeredConv1d, self).__init__()
- self.w_1 = torch.nn.Conv1d(in_chans, hidden_chans, kernel_size, stride=1, padding=(kernel_size - 1) // 2, )
- self.w_2 = torch.nn.Conv1d(hidden_chans, in_chans, kernel_size, stride=1, padding=(kernel_size - 1) // 2, )
- self.dropout = torch.nn.Dropout(dropout_rate)
-
- def forward(self, x):
- """
- Calculate forward propagation.
-
- Args:
- x (torch.Tensor): Batch of input tensors (B, T, in_chans).
-
- Returns:
-            torch.Tensor: Batch of output tensors (B, T, in_chans).
- """
- x = torch.relu(self.w_1(x.transpose(-1, 1))).transpose(-1, 1)
- return self.w_2(self.dropout(x).transpose(-1, 1)).transpose(-1, 1)
-
-
-class Conv1dLinear(torch.nn.Module):
- """
- Conv1D + Linear for Transformer block.
-
- A variant of MultiLayeredConv1d, which replaces second conv-layer to linear.
- """
-
- def __init__(self, in_chans, hidden_chans, kernel_size, dropout_rate):
- """
- Initialize Conv1dLinear module.
-
- Args:
- in_chans (int): Number of input channels.
- hidden_chans (int): Number of hidden channels.
- kernel_size (int): Kernel size of conv1d.
- dropout_rate (float): Dropout rate.
- """
- super(Conv1dLinear, self).__init__()
- self.w_1 = torch.nn.Conv1d(in_chans, hidden_chans, kernel_size, stride=1, padding=(kernel_size - 1) // 2, )
- self.w_2 = torch.nn.Linear(hidden_chans, in_chans)
- self.dropout = torch.nn.Dropout(dropout_rate)
-
- def forward(self, x):
- """
- Calculate forward propagation.
-
- Args:
- x (torch.Tensor): Batch of input tensors (B, T, in_chans).
-
- Returns:
-            torch.Tensor: Batch of output tensors (B, T, in_chans).
- """
- x = torch.relu(self.w_1(x.transpose(-1, 1))).transpose(-1, 1)
- return self.w_2(self.dropout(x))
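-
-
-if __name__ == "__main__":
-    # Minimal sanity-check sketch (illustrative sizes): both variants map
-    # (B, T, in_chans) -> (B, T, in_chans); hidden_chans is only used internally.
-    dummy = torch.randn(2, 50, 384)
-    conv_ffn = MultiLayeredConv1d(in_chans=384, hidden_chans=1536, kernel_size=3, dropout_rate=0.1)
-    lin_ffn = Conv1dLinear(in_chans=384, hidden_chans=1536, kernel_size=3, dropout_rate=0.1)
-    print(conv_ffn(dummy).shape, lin_ffn(dummy).shape)  # both torch.Size([2, 50, 384])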
diff --git a/spaces/GAIR/Factool/factool/knowledge_qa/__init__.py b/spaces/GAIR/Factool/factool/knowledge_qa/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GXSA/bingo/src/components/tailwind-indicator.tsx b/spaces/GXSA/bingo/src/components/tailwind-indicator.tsx
deleted file mode 100644
index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/components/tailwind-indicator.tsx
+++ /dev/null
@@ -1,14 +0,0 @@
-export function TailwindIndicator() {
- if (process.env.NODE_ENV === 'production') return null
-
- return (
-    <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
- )
-}
diff --git a/spaces/Gabesantos1007/Dall-e/README.md b/spaces/Gabesantos1007/Dall-e/README.md
deleted file mode 100644
index 70eee4a0f9ae7408e78adba20aefaa39e1083563..0000000000000000000000000000000000000000
--- a/spaces/Gabesantos1007/Dall-e/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dall E
-emoji: 👁
-colorFrom: blue
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/SlowMo_n_Timelapse_Your_Video/app.py b/spaces/Gradio-Blocks/SlowMo_n_Timelapse_Your_Video/app.py
deleted file mode 100644
index 1f9fea88f63b3e68d2747e490e9376329a42d499..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/SlowMo_n_Timelapse_Your_Video/app.py
+++ /dev/null
@@ -1,309 +0,0 @@
-import gradio as gr
-import ffmpeg
-from pathlib import Path
-import os
-import ast
-import json
-import base64
-import requests
-import moviepy.editor as mp
-from PIL import Image, ImageSequence
-import cv2
-
-API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-base-960h"
-HF_TOKEN = os.environ["HF_TOKEN"]
-headers = {"Authorization": f"Bearer {HF_TOKEN}"}
-
-video_list = []
-
-def generate_transcripts(in_video):
- print("********* Inside generate_transcripts() **********")
- #convert video to audio
- print(f" input video is : {in_video}")
-
- #sample
- #video_path = Path("./ShiaLaBeouf.mp4")
- audio_memory, _ = ffmpeg.input(in_video).output('-', format="wav", ac=1, ar='16k').overwrite_output().global_args('-loglevel', 'quiet').run(capture_stdout=True)
- #audio_memory, _ = ffmpeg.input(video_path).output('-', format="wav", ac=1, ar='16k').overwrite_output().global_args('-loglevel', 'quiet').run(capture_stdout=True)
-
- #Getting transcripts using wav2Vec2 huggingface hosted accelerated inference
- #sending audio file in request along with stride and chunk length information
- model_response = query_api(audio_memory)
-
- #model response has both - transcripts as well as character timestamps or chunks
- print(f"model_response is : {model_response}")
- transcription = model_response["text"].lower()
- chnk = model_response["chunks"]
-
- #creating lists from chunks to consume downstream easily
- timestamps = [[chunk["text"].lower(), chunk["timestamp"][0], chunk["timestamp"][1]]
- for chunk in chnk]
-
- #getting words and word timestamps
- words, words_timestamp = get_word_timestamps(timestamps)
-    print(f"Total words in the audio transcript: {len(words)}, transcript word list: {words}, type of words: {type(words)}")
-    print(f"Total word timestamps derived from character timestamps: {len(words_timestamp)}, word timestamps: {words_timestamp}")
-
- return transcription, words, words_timestamp
-
-
-def generate_gifs(in_video, gif_transcript, words, words_timestamp, vid_speed):
- print("********* Inside generate_gifs() **********")
-
- #creating list from input gif transcript
- #gif = "don't let your dreams be dreams"
- gif = gif_transcript
- giflist = gif.split()
-
- #getting gif indexes from the generator
- # Converting string to list
- words = ast.literal_eval(words)
- words_timestamp = ast.literal_eval(words_timestamp)
- print(f"words is :{words}")
- print(f"type of words is :{type(words)}")
- print(f"length of words is :{len(words)}")
- print(f"giflist is :{giflist}")
-
- giflist_indxs = list(list(get_gif_word_indexes(words, giflist))[0])
- print(f"giflist_indxs is : {giflist_indxs}")
- #getting start and end timestamps for a gif video
- start_seconds, end_seconds = get_gif_timestamps(giflist_indxs, words_timestamp)
- print(f"start_seconds, end_seconds are : ({start_seconds}, {end_seconds})")
- #generated .gif image
- #gif_out, vid_out = gen_moviepy_gif(in_video, start_seconds, end_seconds)
- print(f"vid_speed from SLider is : {vid_speed}")
-
- speededit_vids_list, concat_vid = gen_moviepy_gif(in_video, start_seconds, end_seconds, float(vid_speed), video_list)
-
- return concat_vid #speededit_vids_list
-
-
-#calling the hosted model
-def query_api(audio_bytes: bytes):
- """
- Query for Huggingface Inference API for Automatic Speech Recognition task
- """
- print("********* Inside query_api() **********")
- payload = json.dumps({
- "inputs": base64.b64encode(audio_bytes).decode("utf-8"),
- "parameters": {
- "return_timestamps": "char",
- "chunk_length_s": 10,
- "stride_length_s": [4, 2]
- },
- "options": {"use_gpu": False}
- }).encode("utf-8")
-
- response = requests.request(
- "POST", API_URL, headers=headers, data=payload)
-    json_response = json.loads(response.content.decode("utf-8"))
-    print(f"json_response is: {json_response}")
-    return json_response
-
-
-#getting word timestamps from character timestamps
-def get_word_timestamps(timestamps):
- print("********* inside get_word_timestamps() **************")
- words, word = [], []
- letter_timestamp, word_timestamp, words_timestamp = [], [], []
- for idx,entry in enumerate(timestamps):
- word.append(entry[0])
- letter_timestamp.append(entry[1])
- if entry[0] == ' ':
- words.append(''.join(word))
- word_timestamp.append(letter_timestamp[0])
- word_timestamp.append(timestamps[idx-1][2])
- words_timestamp.append(word_timestamp)
- word, word_timestamp, letter_timestamp = [], [], []
-
- words = [word.strip() for word in words]
- print(f"words created from timestamps are : {words}")
- return words, words_timestamp
-
-
-#getting index of gif words in main transcript
-def get_gif_word_indexes(total_words_list, gif_words_list):
- if not gif_words_list:
- return
- # just optimization
- COUNT=0
- lengthgif_words_list = len(gif_words_list)
- firstgif_words_list = gif_words_list[0]
-
- print(f"total_words_list is :{total_words_list}")
- print(f"length of total_words_list is :{len(total_words_list)}")
- print(f"gif_words_list is :{gif_words_list}")
- print(f"length of gif_words_list is :{len(gif_words_list)}")
-
- for idx, item in enumerate(total_words_list):
- COUNT+=1
- if item == firstgif_words_list:
- if total_words_list[idx:idx+lengthgif_words_list] == gif_words_list:
- print(f"value of tuple is : {tuple(range(idx, idx+lengthgif_words_list))}")
- yield tuple(range(idx, idx+lengthgif_words_list))
-
-
-#getting start and end timestamps for gif transcript
-def get_gif_timestamps(giflist_indxs, words_timestamp):
- print(f"******** Inside get_gif_timestamps() **********")
- min_idx = min(giflist_indxs)
- max_idx = max(giflist_indxs)
- print(f"min_idx is :{min_idx}")
- print(f"max_idx is :{max_idx}")
-
- gif_words_timestamp = words_timestamp[min_idx : max_idx+1]
- print(f"words_timestamp is :{words_timestamp}")
- print(f"gif_words_timestamp is :{gif_words_timestamp}")
-
- start_seconds, end_seconds = gif_words_timestamp[0][0], gif_words_timestamp[-1][-1]
- print(f"start_seconds, end_seconds are :{start_seconds},{end_seconds}")
-
- return start_seconds, end_seconds
-
-
-#extracting the video and building and serving a .gif image
-def gen_moviepy_gif(in_video, start_seconds, end_seconds, vid_speed, vid_list):
- print("******** inside moviepy_gif () ***************")
- #sample
- #video_path = "./ShiaLaBeouf.mp4"
- video = mp.VideoFileClip(in_video)
- #video = mp.VideoFileClip(video_path)
-
- leftover_clip_start = video.subclip(0, int(start_seconds) + float("{:.2f}".format(1-start_seconds%1))).without_audio() #float("{:.2f}".format(1-a%1))
- final_clip = video.subclip(start_seconds, end_seconds)
- tmp = int(end_seconds) + float("{:.2f}".format(1-end_seconds%1))
- if tmp < video.duration:
- leftover_clip_end = video.subclip(int(end_seconds) + float("{:.2f}".format(1-end_seconds%1)) ).without_audio() #end=None
- else:
- leftover_clip_end = video.subclip(int(end_seconds)).without_audio()
- #slowmo
- print(f"vid_speed from calling function is : {vid_speed}")
- speededit_clip = final_clip.fx(mp.vfx.speedx, vid_speed)
- speededit_clip = speededit_clip.without_audio()
-
- #concat
- concatenated_clip = mp.concatenate_videoclips([leftover_clip_start, speededit_clip, leftover_clip_end])
- concatenated_clip.write_videofile("concat.mp4")
-
- filename = f"speededit{len(vid_list)}"
- print("filename is :",filename)
- speededit_clip.write_videofile("speededit.mp4") #(filename)
- vid_list.append("speededit.mp4") #(filename)
-
- #might use later
- #if len(vid_list) == 1:
- # speededit_clip.write_videofile("slomo.mp4")
- #elif len(vid_list) == 2:
- # speededit_clip.write_videofile("timelapse.mp4")
-
- #writing to RAM - gif and smaller clip
- #final_clip.write_gif("gifimage.gif") #, program='ffmpeg', tempfiles=True, fps=15, fuzz=3)
- #final_clip.write_videofile("gifimage.mp4")
- final_clip.close()
- #reading in a variable
- #gif_img = mp.VideoFileClip("gifimage.gif")
- #gif_vid = mp.VideoFileClip("gifimage.mp4")
- #im = Image.open("gifimage.gif")
- #vid_cap = cv2.VideoCapture('gifimage.mp4')
- return vid_list, "concat.mp4" #"slomo.mp4", "timelapse.mp4", #"gifimage.gif", "gifimage.mp4" #im, gif_img, gif_vid, vid_cap, #"gifimage.mp4"
-
-
-sample_video = ["olympic100m.mp4"] #[['./ShiaLaBeouf.mp4']]
-sample_vid = gr.Video(label='Video file') #for displaying the example
-examples = gr.components.Dataset(components=[sample_vid], samples=[sample_video], type='values')
-
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("""# **Watch your video in SloMo or in Timelapse!** """)
- gr.Markdown("""
-    ### Editing your video using an ASR pipeline
-
- A Space by [Yuvraj Sharma](https://huggingface.co/ysharma).
-
-    **Background:** In this Gradio Blocks Party Space, I am trying to:
- - Provide a capability to slow down your video
- - Timelapse your video
-
- **How To Use:** 1. Upload a video or simply click on the sample provided here.
-    2. Then click on the 'Generate transcripts' button; the first textbox will display the transcript extracted from the audio of your sample.
-    3. Copy part of the transcript, or type one manually, into the second textbox provided.
-    4. A slowed-down or timelapsed version of your video will be generated on the right-hand side!
-
- Hope you have fun using this 😀
- """)
-
- with gr.Row():
- #for incoming video
- input_video = gr.Video(label="Upload a Video", visible=True)
- #to generate and display transcriptions for input video
- text_transcript = gr.Textbox(label="Transcripts", lines = 10, interactive = True )
-
- #Just to move data between function hence keeping visible false
- text_words = gr.Textbox(visible=False)
- text_wordstimestamps = gr.Textbox(visible=False)
-
- with gr.Row():
- button_transcript = gr.Button("Generate transcripts")
-
- #For SlowMo
- with gr.Row():
- #to copy paste required gif transcript / or to populate by itself on pressing the button
- text_slomo_transcript = gr.Textbox(label="Transcripts", placeholder="Copy paste transcripts here to create SlowMo Video" , lines = 5, interactive = True )
-
- def load_slomo_text(text):
- print("****** inside load_slomo_text() ******")
- print("text for slomo video is : ", text)
- return text
-
- text_transcript.change(load_slomo_text, text_transcript, text_slomo_transcript )
-
- #out_gif = gr.Image(label="Generated GIF image")
- out_slomo_vid = gr.Video(label="Generated SlowMo Video")
-
- with gr.Row():
- #button_transcript = gr.Button("Generate transcripts")
- vid_speed_slomo = gr.Slider(0.1,0.9, step=0.1)
- button_slomo = gr.Button("Create SloMo")
-
- #For TimeLapse
- with gr.Row():
- #to copy paste required gif transcript / or to populate by itself on pressing the button
- text_timelapse_transcript = gr.Textbox(label="Transcripts", placeholder="Copy paste transcripts here to create GIF image" , lines = 5) #, interactive = True )
-
- def load_timelapse_text(text):
- print("****** inside load_timelapse_text() ******")
- print("text for timelapse video is : ", text)
- return text
-
- text_transcript.change(load_timelapse_text, text_transcript, text_timelapse_transcript )
-
- #out_gif = gr.Image(label="Generated GIF image")
- out_timelapse_vid = gr.Video(label="Generated TimeLapse Video")
-
- with gr.Row():
- #button_transcript = gr.Button("Generate transcripts")
- vid_speed_timelapse = gr.Slider(1,2, step=0.25)
- button_timelapse = gr.Button("Create TimeLapse")
-
- with gr.Row():
- #to render video example on mouse hover/click
- examples.render()
- #to load sample video into input_video upon clicking on it
- def load_examples(video):
- print("****** inside load_example() ******")
- print("in_video is : ", video[0])
- return video[0]
-
- examples.click(load_examples, examples, input_video)
-
- #vid_speed = gr.Slider(0.1,0.9, step=0.1)
-
-
- button_transcript.click(generate_transcripts, input_video, [text_transcript, text_words, text_wordstimestamps ])
- button_slomo.click(generate_gifs, [input_video, text_slomo_transcript, text_words, text_wordstimestamps, vid_speed_slomo], out_slomo_vid )
- button_timelapse.click(generate_gifs, [out_slomo_vid, text_timelapse_transcript, text_words, text_wordstimestamps, vid_speed_timelapse], out_timelapse_vid )
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/Greencapabara/OpenAI-whisper-with-upload.no-time-limit/README.md b/spaces/Greencapabara/OpenAI-whisper-with-upload.no-time-limit/README.md
deleted file mode 100644
index 9f36f1bf2e2411db9255a07ade4dc803e90ac241..0000000000000000000000000000000000000000
--- a/spaces/Greencapabara/OpenAI-whisper-with-upload.no-time-limit/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Whisper1
-emoji: 🏢
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Grezz/generate_human_motion/pyrender/tests/unit/__init__.py b/spaces/Grezz/generate_human_motion/pyrender/tests/unit/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/res2net_v1b_base.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/res2net_v1b_base.py
deleted file mode 100644
index ef054f7fe6b61d8bcd807f2580fc4628ef129e43..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/SPNet/res2net_v1b_base.py
+++ /dev/null
@@ -1,346 +0,0 @@
-
-import torch.nn as nn
-import math
-import torch
-__all__ = ['Res2Net', 'res2net50_v1b', 'res2net101_v1b']
-
-
-model_urls = {
- 'res2net50_v1b_26w_4s': 'https://shanghuagao.oss-cn-beijing.aliyuncs.com/res2net/res2net50_v1b_26w_4s-3cf99910.pth',
- 'res2net101_v1b_26w_4s': 'https://shanghuagao.oss-cn-beijing.aliyuncs.com/res2net/res2net101_v1b_26w_4s-0812c246.pth',
-}
-
-
-class Bottle2neck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None, baseWidth=26, scale = 4, stype='normal'):
- """ Constructor
- Args:
- inplanes: input channel dimensionality
- planes: output channel dimensionality
- stride: conv stride. Replaces pooling layer.
- downsample: None when stride = 1
- baseWidth: basic width of conv3x3
- scale: number of scale.
-            stype: 'normal': normal set. 'stage': first block of a new stage.
- """
- super(Bottle2neck, self).__init__()
-
- width = int(math.floor(planes * (baseWidth/64.0)))
- self.conv1 = nn.Conv2d(inplanes, width*scale, kernel_size=1, bias=False)
- self.bn1 = nn.BatchNorm2d(width*scale)
-
- if scale == 1:
- self.nums = 1
- else:
- self.nums = scale -1
- if stype == 'stage':
- self.pool = nn.AvgPool2d(kernel_size=3, stride = stride, padding=1)
- convs = []
- bns = []
- for i in range(self.nums):
- convs.append(nn.Conv2d(width, width, kernel_size=3, stride = stride, padding=1, bias=False))
- bns.append(nn.BatchNorm2d(width))
- self.convs = nn.ModuleList(convs)
- self.bns = nn.ModuleList(bns)
-
- self.conv3 = nn.Conv2d(width*scale, planes * self.expansion, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
-
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stype = stype
- self.scale = scale
- self.width = width
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
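-        # Res2Net multi-scale step: split the 1x1 conv output into `scale` channel groups.
-        # Each group (after the first, except in 'stage' blocks) is added to the previous
-        # group's 3x3 conv output before its own conv; the results are concatenated, and
-        # the last group is appended unprocessed ('normal') or average-pooled ('stage').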
- spx = torch.split(out, self.width, 1)
- for i in range(self.nums):
- if i==0 or self.stype=='stage':
- sp = spx[i]
- else:
- sp = sp + spx[i]
- sp = self.convs[i](sp)
- sp = self.relu(self.bns[i](sp))
- if i==0:
- out = sp
- else:
- out = torch.cat((out, sp), 1)
- if self.scale != 1 and self.stype=='normal':
- out = torch.cat((out, spx[self.nums]),1)
- elif self.scale != 1 and self.stype=='stage':
- out = torch.cat((out, self.pool(spx[self.nums])),1)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Res2Net(nn.Module):
-
- def __init__(self, block, layers, baseWidth = 26, scale = 4, num_classes=1000):
- self.inplanes = 64
- super(Res2Net, self).__init__()
- self.baseWidth = baseWidth
- self.scale = scale
- self.conv1 = nn.Sequential(
- nn.Conv2d(3, 32, 3, 2, 1, bias=False),
- nn.BatchNorm2d(32),
- nn.ReLU(inplace=True),
- nn.Conv2d(32, 32, 3, 1, 1, bias=False),
- nn.BatchNorm2d(32),
- nn.ReLU(inplace=True),
- nn.Conv2d(32, 64, 3, 1, 1, bias=False)
- )
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
- self.avgpool = nn.AdaptiveAvgPool2d(1)
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.AvgPool2d(kernel_size=stride, stride=stride,
- ceil_mode=True, count_include_pad=False),
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=1, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample=downsample,
- stype='stage', baseWidth = self.baseWidth, scale=self.scale))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes, baseWidth = self.baseWidth, scale=self.scale))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
-
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x0 = self.maxpool(x)
-
-
- x1 = self.layer1(x0)
- x2 = self.layer2(x1)
- x3 = self.layer3(x2)
- x4 = self.layer4(x3)
-
- x5 = self.avgpool(x4)
- x6 = x5.view(x5.size(0), -1)
- x7 = self.fc(x6)
-
- return x7
-
-
-
-class Res2Net_Ours(nn.Module):
-
- def __init__(self, block, layers, baseWidth = 26, scale = 4, num_classes=1000):
- self.inplanes = 64
- super(Res2Net_Ours, self).__init__()
-
- self.baseWidth = baseWidth
- self.scale = scale
- self.conv1 = nn.Sequential(
- nn.Conv2d(3, 32, 3, 2, 1, bias=False),
- nn.BatchNorm2d(32),
- nn.ReLU(inplace=True),
- nn.Conv2d(32, 32, 3, 1, 1, bias=False),
- nn.BatchNorm2d(32),
- nn.ReLU(inplace=True),
- nn.Conv2d(32, 64, 3, 1, 1, bias=False)
- )
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
-
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.AvgPool2d(kernel_size=stride, stride=stride,
- ceil_mode=True, count_include_pad=False),
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=1, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample=downsample,
- stype='stage', baseWidth = self.baseWidth, scale=self.scale))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes, baseWidth = self.baseWidth, scale=self.scale))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
-
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x0 = self.maxpool(x)
-
-
- x1 = self.layer1(x0)
- x2 = self.layer2(x1)
- x3 = self.layer3(x2)
- x4 = self.layer4(x3)
-
-
- return x0,x1,x2,x3,x4
-
-
-
-def res2net50_v1b(pretrained=False, **kwargs):
- """Constructs a Res2Net-50_v1b model.
- Res2Net-50 refers to the Res2Net-50_v1b_26w_4s.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net(Bottle2neck, [3, 4, 6, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net50_v1b_26w_4s'],map_location='cpu'))
- return model
-
-def res2net101_v1b(pretrained=False, **kwargs):
-    """Constructs a Res2Net-101_v1b_26w_4s model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net(Bottle2neck, [3, 4, 23, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net101_v1b_26w_4s']))
- return model
-
-
-
-def res2net50_v1b_Ours(pretrained=False, **kwargs):
- """Constructs a Res2Net-50_v1b model.
- Res2Net-50 refers to the Res2Net-50_v1b_26w_4s.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net_Ours(Bottle2neck, [3, 4, 6, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net50_v1b_26w_4s']))
- return model
-
-def res2net101_v1b_Ours(pretrained=False, **kwargs):
-    """Constructs a Res2Net-101_v1b_26w_4s model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net_Ours(Bottle2neck, [3, 4, 23, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net101_v1b_26w_4s']))
- return model
-
-
-
-def res2net50_v1b_26w_4s(pretrained=False, **kwargs):
- """Constructs a Res2Net-50_v1b_26w_4s model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net(Bottle2neck, [3, 4, 6, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net50_v1b_26w_4s'],map_location='cpu'))
- return model
-
-def res2net101_v1b_26w_4s(pretrained=False, **kwargs):
-    """Constructs a Res2Net-101_v1b_26w_4s model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net(Bottle2neck, [3, 4, 23, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net101_v1b_26w_4s']))
- return model
-
-def res2net152_v1b_26w_4s(pretrained=False, **kwargs):
-    """Constructs a Res2Net-152_v1b_26w_4s model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = Res2Net(Bottle2neck, [3, 8, 36, 3], baseWidth = 26, scale = 4, **kwargs)
- # if pretrained:
- # model.load_state_dict(model_zoo.load_url(model_urls['res2net152_v1b_26w_4s']))
- return model
-
-
-
-
-def Res2Net_model(ind=50):
-
- if ind == 50:
- model_base = res2net50_v1b(pretrained=True)
- model = res2net50_v1b_Ours()
-
-    elif ind == 101:
-        model_base = res2net101_v1b(pretrained=True)
-        model = res2net101_v1b_Ours()
-    else:
-        raise ValueError('Res2Net_model only supports ind in (50, 101), got {}'.format(ind))
-
-
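-    # Copy parameters from the classification model into the backbone-only model,
-    # keeping only the keys present in both (the fc classification head is dropped).
-    # Note: the pretrained-weight loading in the constructors above is commented out,
-    # so re-enable it there if ImageNet initialisation is actually required.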
- pretrained_dict = model_base.state_dict()
- model_dict = model.state_dict()
-
- pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
-
- model_dict.update(pretrained_dict)
- model.load_state_dict(model_dict)
-
- return model
-
-
-
-
-
-if __name__ == '__main__':
- images = torch.rand(1, 3, 352, 352)
- model = res2net50_v1b_26w_4s(pretrained=False)
- model = model
- print(model(images).size())
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/README.md
deleted file mode 100644
index 02a68a5f0919a26a0468069bed46a5b1abc78941..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/README.md
+++ /dev/null
@@ -1,241 +0,0 @@
-# Beyond English-Centric Multilingual Machine Translation
-
-## Introduction
-In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems of WMT.
-
-If you are new to using fairseq, read the following walkthrough. Otherwise, skip to the sections below.
-
-0. **Generation Data**
-
-To download the generation data, follow the below commands. Note that all datasets need to be detokenized *before* applying SPM in the data preprocessing step. If you use these evaluation datasets, please cite their associated papers.
-```bash
-# WMT - use sacrebleu, example here:
-sacrebleu -t wmt14 -l fr-en --echo src > wmt.test.fr-en.fr
-sacrebleu -t wmt14 -l fr-en --echo ref > wmt.test.fr-en.en
-
-# WAT
-wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip
-unzip wat2020.my-en.zip
-
-# FLORES
-# download from: https://github.com/facebookresearch/flores
-
-# TED - need to detokenize with Moses!
-# from: https://github.com/neulab/word-embeddings-for-nmt
-wget http://phontron.com/data/ted_talks.tar.gz
-
-# Autshumato
-# request to download: https://repo.sadilar.org/handle/20.500.12185/397
-
-# Tatoeba Challenge
-# available here: https://github.com/Helsinki-NLP/Tatoeba-Challenge
-```
-
-1. **Training Data**
-
-To produce the training data, we use a combination of [CCMatrix](https://arxiv.org/abs/1911.04944) and [CCAligned](https://arxiv.org/abs/1911.06154). Check out the instructions [here](https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix) to download the raw data.
-
-2. **Preprocess Data**
-
-After downloading the raw data, you will need to postprocess it, apply SPM, and then binarize. It is very important that you run the postprocessing script, because it removes any instances of the evaluation data from the mined training data.
-
-```bash
-# preprocess data
-
-# remove sentences with more than 50% punctuation
-python /path/to/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py
-
-# deduplicate training data
-paste /path/to/datadir/train.$src /path/to/datadir/train.$tgt | awk '!x[$0]++' > /path/to/datadir/train.dedup
-echo "keeping $(wc -l /path/to/datadir/train.dedup) bitext out of $(wc -l /path/to/datadir/train.$src)"
-cut -f1 /path/to/datadir/train.dedup > /path/to/datadir/train.$src
-cut -f2 /path/to/datadir/train.dedup > /path/to/datadir/train.$tgt
-
-# remove all instances of evaluation data from the training data
-python /path/to/fairseq/examples/m2m_100/process_data/dedup_data.py
-
-# frequency cleaning
-wget https://dl.fbaipublicfiles.com/m2m_100/histograms.tar.gz
-tar -xvzf histograms.tar.gz
-python /path/to/fairseq/examples/m2m_100/process_data/clean_histogram.py --src $src --tgt $tgt --src-file /path/to/source/file --tgt-file /path/to/output/file --src-output-file source_output.$src --tgt-output-file target_output.$tgt --histograms /path/to/histograms
-
-# apply SPM
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-python /path/to/fairseq/scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=/path/to/input/file/here \
- --outputs=/path/to/output/file/here
-
-# length ratio cleaning
-perl mosesdecoder/scripts/training/clean-corpus-n.perl --ratio 3 /path/to/training/data/train.spm.$src-$tgt $src $tgt /path/to/output/directory/train.spm.$src-$tgt 1 250
-
-# binarize data
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang $src --target-lang $tgt \
- --testpref spm.$src.$tgt \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-3. **Training Scripts**
-
-To reproduce the training of our models, we train with fairseq-py's multilingual translation [task](https://github.com/pytorch/fairseq/tree/main/examples/multilingual). If you are interested in model parallel training, also check out [fairscale](https://github.com/facebookresearch/fairscale).
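-As a rough illustration only (the language pairs, architecture, and hyper-parameters below are placeholder assumptions, not the settings used in the paper), a many-to-many training run with that task could look like the following sketch:
-
-```bash
-# illustrative sketch -- substitute your own languages, architecture and schedule
-lang_pairs="de-fr,fr-de,en-de,de-en"   # placeholder subset of the 100 languages
-fairseq-train data_bin \
-  --task translation_multi_simple_epoch \
-  --lang-pairs "$lang_pairs" \
-  --encoder-langtok src --decoder-langtok \
-  --sampling-method temperature --sampling-temperature 1.5 \
-  --arch transformer_wmt_en_de_big --share-all-embeddings \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-  --optimizer adam --adam-betas '(0.9, 0.98)' \
-  --lr 3e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-  --max-tokens 4096 --update-freq 2 --fp16
-```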
-
-4. **Generation**
-
-To generate from our models, follow the commands in the generation section below.
-
-
-If you use any of the resources listed here, please cite:
-```bibtex
-@article{fan2020beyond,
- title={Beyond English-Centric Multilingual Machine Translation},
- author={Fan, Angela and Bhosale, Shruti and Schwenk, Holger and Ma, Zhiyi and El-Kishky, Ahmed and Goyal, Siddharth and Baines, Mandeep and Celebi, Onur and Wenzek, Guillaume and Chaudhary, Vishrav and Goyal, Naman and Birch, Tom and Liptchinsky, Vitaliy and Edunov, Sergey and Grave, Edouard and Auli, Michael and Joulin, Armand},
- journal={arXiv preprint},
- year={2020}
-}
-
-@article{schwenk2019ccmatrix,
- title={Ccmatrix: Mining billions of high-quality parallel sentences on the web},
- author={Schwenk, Holger and Wenzek, Guillaume and Edunov, Sergey and Grave, Edouard and Joulin, Armand},
- journal={arXiv preprint arXiv:1911.04944},
- year={2019}
-}
-
-@article{el2019massive,
- title={A Massive Collection of Cross-Lingual Web-Document Pairs},
- author={El-Kishky, Ahmed and Chaudhary, Vishrav and Guzman, Francisco and Koehn, Philipp},
- journal={arXiv preprint arXiv:1911.06154},
- year={2019}
-}
-```
-
-
-## Trained Models
-
-### 418M and 1.2B Model
-We include the last checkpoint for both of these models.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs_small_models.txt
-
-# 418M parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt
-
-# 1.2B parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt
-
-# Generation:
-fairseq-generate $binarized_data_path --batch-size 32 --path $path_to_model --fixed-dictionary model_dict.128k.txt -s en -t fr --remove-bpe 'sentencepiece' --beam 5 --task translation_multi_simple_epoch --lang-pairs language_pairs_small_models.txt --decoder-langtok --encoder-langtok src --gen-subset test > gen_out
-```
-
-### 12B Model
-12B parameter model trained on many-to-many training data for 100 languages. We include the last checkpoint, the average of the last 5 checkpoints, and the average of the last 10 checkpoints. There isn't a universally best choice among these three, and all of them are close in accuracy. You can either sweep over the three checkpoints on a dev set and use the best-performing one for final testing, or simply take the last checkpoint as a reasonable default.
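-For example, a dev-set sweep could reuse the generation command from the Generation section below with `--gen-subset valid`, once per checkpoint. The sketch below assumes the 4x16GB-GPU checkpoints and omits the required pipeline-parallel arguments (listed further down) for brevity:
-
-```bash
-# sketch only: add the pipeline-model-parallel arguments for your GPU configuration
-for ckpt in 12b_last_chk_4_gpus.pt 12b_avg5_chk_4_gpus.pt 12b_avg10_chk_4_gpus.pt; do
-  fairseq-generate data_bin --path $ckpt --gen-subset valid \
-    --task translation_multi_simple_epoch --lang-pairs language_pairs.txt \
-    --fixed-dictionary model_dict.128k.txt -s de -t fr \
-    --encoder-langtok src --decoder-langtok \
-    --remove-bpe 'sentencepiece' --beam 5 > gen_out.$ckpt
-done
-# score each gen_out.* (e.g. with sacrebleu, as in the Evaluation section) and keep the best checkpoint
-```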
-
-**Model Download Links**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-Last Checkpoint | [12b_last_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_2_gpus.pt) | [12b_last_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt) | [12b_last_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_6_gpus.pt) | [12b_last_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_8_gpus.pt)
-Average of last 5 checkpoints | [12b_avg5_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_2_gpus.pt) | [12b_avg5_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_4_gpus.pt) | [12b_avg5_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_6_gpus.pt) | [12b_avg5_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_8_gpus.pt)
-Average of last 10 checkpoints | [12b_avg10_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_2_gpus.pt) | [12b_avg10_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_4_gpus.pt) | [12b_avg10_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_6_gpus.pt) | [12b_avg10_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_8_gpus.pt)
-
-**Generation Arguments**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-`--pipeline-encoder-balance` | `[26]` | `[1,15,10]` | `[1,9,9,7]` | `[1,6,6,6,7]`
-`--pipeline-encoder-devices` | `[0]` | `[0,1,0]` | `[0,1,2,0]` | `[0,4,5,1,0]`
-`--pipeline-decoder-balance` | `[3,22,1]` | `[3,11,11,1]` | `[3,7,7,8,1]` | `[1,6,6,6,6,1]`
-`--pipeline-decoder-devices` | `[0,1,0]` | `[0,2,3,0]` | `[0,3,4,5,0]` | `[0,2,6,7,3,0]`
-
-
-## SentencePiece Model
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-```
-
-## Generation with M2M-100
-
-### Encode using our SentencePiece Model
-
-Note: Install SentencePiece from [here](https://github.com/google/sentencepiece)
-
-```bash
-fairseq=/path/to/fairseq
-cd $fairseq
-sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de
-sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-for lang in de fr ; do
- python scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=raw_input.de-fr.${lang} \
- --outputs=spm.de-fr.${lang}
-done
-```
-
-### Binarization
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang de --target-lang fr \
- --testpref spm.de-fr \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-### Generation for the 12B model
-
-Note that generation can currently be run using 2 32GB / 4 16GB / 6 12GB / 8 8GB GPUs, and the corresponding model checkpoints and pipeline arguments can be found in the [12B Model Section](#12b-model).
-Generation on CPUs will be added in the future.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt
-fairseq-generate \
- data_bin \
- --batch-size 1 \
- --path 12b_last_chk_4_gpus.pt \
- --fixed-dictionary model_dict.128k.txt \
- -s de -t fr \
- --remove-bpe 'sentencepiece' \
- --beam 5 \
- --task translation_multi_simple_epoch \
- --lang-pairs language_pairs.txt \
- --decoder-langtok --encoder-langtok src \
- --gen-subset test \
- --fp16 \
- --dataset-impl mmap \
- --distributed-world-size 1 --distributed-no-spawn \
- --pipeline-model-parallel \
- --pipeline-chunks 1 \
- --pipeline-encoder-balance '[1,15,10]' \
- --pipeline-encoder-devices '[0,1,0]' \
- --pipeline-decoder-balance '[3,11,11,1]' \
- --pipeline-decoder-devices '[0,2,3,0]' > gen_out
-```
-## Evaluation with M2M-100
-
-### Tokenization
-
-Note: Refer to tokenizers/README.md for more details on tokenization.
-
-```bash
-cd ${fairseq}/examples/m2m_100
-cat ${fairseq}/gen_out | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh fr > hyp
-cat ${fairseq}/raw_input.de-fr.fr | sh tok.sh fr > ref
-```
-
-### BLEU
-
-```bash
-sacrebleu -tok 'none' ref < hyp
-```
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/README.md
deleted file mode 100644
index 22a1cc47df23c7e0ebbf0ad805031478d1b4a95e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/README.md
+++ /dev/null
@@ -1,52 +0,0 @@
-[Better Fine-Tuning by Reducing Representational Collapse](https://arxiv.org/abs/2008.03156)
-=====================
-This repo contains the code to replicate all experiments from the _Better Fine-Tuning by Reducing Representational Collapse_ paper excluding the probing results.
-
-The R3F sentence prediction criterion is registered as `sentence_prediction_r3f` while the label smoothing version of it is implemented as `label_smoothed_cross_entropy_r3f`. The R4F version of the sentence prediction criterion can be achieved by applying spectral norm to the classification head via the `--spectral-norm-classification-head` parameter.
-
-## Hyper-parameters
-Our methods introduce three new hyper-parameters: `--eps`, which sets the standard deviation or range of the distribution we sample from; `--r3f-lambda`, which controls how the logistic loss and the noisy KL loss are combined; and `--noise-type`, which selects the parametric distribution we use ('normal' or 'uniform').
-
-For example, to run R3F on RTE from GLUE:
-
-```
-TOTAL_NUM_UPDATES=3120
-WARMUP_UPDATES=187
-LR=1e-05
-NUM_CLASSES=2
-MAX_SENTENCES=8 # Batch size.
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin \
- --restore-file $ROBERTA_PATH \
- --max-positions 512 \
- --max-sentences $MAX_SENTENCES \
- --max-tokens 4400 \
- --task sentence_prediction \
- --reset-optimizer --reset-dataloader --reset-meters \
- --required-batch-size-multiple 1 \
- --init-token 0 --separator-token 2 \
- --arch roberta_large \
- --criterion sentence_prediction_r3f \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --max-epoch 10 \
- --find-unused-parameters \
- --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
- --noise-type uniform --r3f-lambda 0.7 \
- --user-dir examples/rxf/rxf_src
-```
-
-## Citation
-```bibtex
-@article{aghajanyan2020better,
- title={Better Fine-Tuning by Reducing Representational Collapse},
- author={Aghajanyan, Armen and Shrivastava, Akshat and Gupta, Anchit and Goyal, Naman and Zettlemoyer, Luke and Gupta, Sonal},
- journal={arXiv preprint arXiv:2008.03156},
- year={2020}
-}
-```
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/transform_eos_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/transform_eos_dataset.py
deleted file mode 100644
index fb14ff018edf13b20f5d0e486692dfb0a37ec6d1..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/transform_eos_dataset.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import FairseqDataset
-
-
-class TransformEosDataset(FairseqDataset):
- """A :class:`~fairseq.data.FairseqDataset` wrapper that appends/prepends/strips EOS.
-
- Note that the transformation is applied in :func:`collater`.
-
- Args:
- dataset (~fairseq.data.FairseqDataset): dataset to wrap
- eos (int): index of the end-of-sentence symbol
- append_eos_to_src (bool, optional): append EOS to the end of src
- remove_eos_from_src (bool, optional): remove EOS from the end of src
- append_eos_to_tgt (bool, optional): append EOS to the end of tgt
- remove_eos_from_tgt (bool, optional): remove EOS from the end of tgt
- """
-
- def __init__(
- self,
- dataset,
- eos,
- append_eos_to_src=False,
- remove_eos_from_src=False,
- append_eos_to_tgt=False,
- remove_eos_from_tgt=False,
- has_target=True,
- ):
- if not isinstance(dataset, FairseqDataset):
- raise ValueError("dataset must be an instance of FairseqDataset")
- if append_eos_to_src and remove_eos_from_src:
- raise ValueError("cannot combine append_eos_to_src and remove_eos_from_src")
- if append_eos_to_tgt and remove_eos_from_tgt:
- raise ValueError("cannot combine append_eos_to_tgt and remove_eos_from_tgt")
-
- self.dataset = dataset
- self.eos = torch.LongTensor([eos])
- self.append_eos_to_src = append_eos_to_src
- self.remove_eos_from_src = remove_eos_from_src
- self.append_eos_to_tgt = append_eos_to_tgt
- self.remove_eos_from_tgt = remove_eos_from_tgt
- self.has_target = has_target
-
- # precompute how we should adjust the reported sizes
- self._src_delta = 0
- self._src_delta += 1 if append_eos_to_src else 0
- self._src_delta -= 1 if remove_eos_from_src else 0
- self._tgt_delta = 0
- self._tgt_delta += 1 if append_eos_to_tgt else 0
- self._tgt_delta -= 1 if remove_eos_from_tgt else 0
-
- self._checked_src = False
- self._checked_tgt = False
-
- def _check_src(self, src, expect_eos):
- if not self._checked_src:
- assert (src[-1] == self.eos[0]) == expect_eos
- self._checked_src = True
-
- def _check_tgt(self, tgt, expect_eos):
- if self.has_target and not self._checked_tgt:
- assert (tgt[-1] == self.eos[0]) == expect_eos
- self._checked_tgt = True
-
- def __getitem__(self, index):
- return self.dataset[index]
-
- def __len__(self):
- return len(self.dataset)
-
- def collater(self, samples):
- def transform(item):
- if self.append_eos_to_src:
- self.eos = self.eos.to(device=item["source"].device)
- self._check_src(item["source"], expect_eos=False)
- item["source"] = torch.cat([item["source"], self.eos])
- if self.remove_eos_from_src:
- self.eos = self.eos.to(device=item["source"].device)
- self._check_src(item["source"], expect_eos=True)
- item["source"] = item["source"][:-1]
- if self.append_eos_to_tgt:
- self.eos = self.eos.to(device=item["target"].device)
- self._check_tgt(item["target"], expect_eos=False)
- item["target"] = torch.cat([item["target"], self.eos])
- if self.remove_eos_from_tgt:
- self.eos = self.eos.to(device=item["target"].device)
- self._check_tgt(item["target"], expect_eos=True)
- item["target"] = item["target"][:-1]
- return item
-
- samples = list(map(transform, samples))
- return self.dataset.collater(samples)
-
- def num_tokens(self, index):
- return self.dataset.num_tokens(index)
-
- def size(self, index):
- if self.has_target:
- src_len, tgt_len = self.dataset.size(index)
- return (src_len + self._src_delta, tgt_len + self._tgt_delta)
- else:
- return self.dataset.size(index)
-
- def ordered_indices(self):
- # NOTE: we assume that the ordering does not change based on the
- # addition or removal of eos
- return self.dataset.ordered_indices()
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.dataset.prefetch(indices)
diff --git a/spaces/Hazem/Image_Face_Upscale_Restoration-GFPGAN/app.py b/spaces/Hazem/Image_Face_Upscale_Restoration-GFPGAN/app.py
deleted file mode 100644
index 67fcac0171bbb77d2b1d3b23b7293635b6297e28..0000000000000000000000000000000000000000
--- a/spaces/Hazem/Image_Face_Upscale_Restoration-GFPGAN/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import os
-
-import cv2
-import gradio as gr
-import torch
-from basicsr.archs.srvgg_arch import SRVGGNetCompact
-from gfpgan.utils import GFPGANer
-from realesrgan.utils import RealESRGANer
-
-os.system("pip freeze")
-# download weights
-if not os.path.exists('realesr-general-x4v3.pth'):
- os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .")
-if not os.path.exists('GFPGANv1.2.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .")
-if not os.path.exists('GFPGANv1.3.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .")
-if not os.path.exists('GFPGANv1.4.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P .")
-if not os.path.exists('RestoreFormer.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth -P .")
-if not os.path.exists('CodeFormer.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/CodeFormer.pth -P .")
-
-torch.hub.download_url_to_file(
- 'https://thumbs.dreamstime.com/b/tower-bridge-traditional-red-bus-black-white-colors-view-to-tower-bridge-london-black-white-colors-108478942.jpg',
- 'a1.jpg')
-torch.hub.download_url_to_file(
- 'https://media.istockphoto.com/id/523514029/photo/london-skyline-b-w.jpg?s=612x612&w=0&k=20&c=kJS1BAtfqYeUDaORupj0sBPc1hpzJhBUUqEFfRnHzZ0=',
- 'a2.jpg')
-torch.hub.download_url_to_file(
- 'https://i.guim.co.uk/img/media/06f614065ed82ca0e917b149a32493c791619854/0_0_3648_2789/master/3648.jpg?width=700&quality=85&auto=format&fit=max&s=05764b507c18a38590090d987c8b6202',
- 'a3.jpg')
-torch.hub.download_url_to_file(
- 'https://i.pinimg.com/736x/46/96/9e/46969eb94aec2437323464804d27706d--victorian-london-victorian-era.jpg',
- 'a4.jpg')
-
-# background enhancer with RealESRGAN
-model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu')
-model_path = 'realesr-general-x4v3.pth'
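-# run the background upsampler in half precision (fp16) when a CUDA GPU is available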
-half = True if torch.cuda.is_available() else False
-upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half)
-
-os.makedirs('output', exist_ok=True)
-
-
-# def inference(img, version, scale, weight):
-def inference(img, version, scale):
- # weight /= 100
- print(img, version, scale)
- try:
- extension = os.path.splitext(os.path.basename(str(img)))[1]
- img = cv2.imread(img, cv2.IMREAD_UNCHANGED)
- if len(img.shape) == 3 and img.shape[2] == 4:
- img_mode = 'RGBA'
- elif len(img.shape) == 2: # for gray inputs
- img_mode = None
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- else:
- img_mode = None
-
- h, w = img.shape[0:2]
- if h < 300:
- img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4)
-
- if version == 'v1.2':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.2.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'v1.3':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'v1.4':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.4.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'RestoreFormer':
- face_enhancer = GFPGANer(
- model_path='RestoreFormer.pth', upscale=2, arch='RestoreFormer', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'CodeFormer':
- face_enhancer = GFPGANer(
- model_path='CodeFormer.pth', upscale=2, arch='CodeFormer', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'RealESR-General-x4v3':
- face_enhancer = GFPGANer(
- model_path='realesr-general-x4v3.pth', upscale=2, arch='realesr-general', channel_multiplier=2, bg_upsampler=upsampler)
-
- try:
- # _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True, weight=weight)
- _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
- except RuntimeError as error:
- print('Error', error)
-
- try:
- if scale != 2:
- interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4
- h, w = img.shape[0:2]
- output = cv2.resize(output, (int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation)
- except Exception as error:
- print('wrong scale input.', error)
- if img_mode == 'RGBA': # RGBA images should be saved in png format
- extension = 'png'
- else:
- extension = 'jpg'
- save_path = f'output/out.{extension}'
- cv2.imwrite(save_path, output)
-
- output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
- return output, save_path
- except Exception as error:
- print('global exception', error)
- return None, None
-
-
-title = "Image Upscaling & Restoration (esp. Face) using GFPGAN Algorithm"
-description = r"""Gradio demo for GFPGAN: Towards Real-World Blind Face Restoration and Upscaling with a Generative Facial Prior.
-In practice, the algorithm is used to restore **old photos** or improve **AI-generated faces**.
-To use it, simply upload an image.
-"""
-article = r"""
-[](https://github.com/TencentARC/GFPGAN/releases)
-[](https://github.com/TencentARC/GFPGAN)
-[](https://arxiv.org/abs/2101.04061)
-
-"""
-demo = gr.Interface(
- inference, [
- gr.inputs.Image(type="filepath", label="Input"),
- # gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer', 'CodeFormer'], type="value", default='v1.4', label='version'),
- gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer','CodeFormer','RealESR-General-x4v3'], type="value", default='v1.4', label='version'),
- gr.inputs.Number(label="Rescaling factor", default=2),
- # gr.Slider(0, 100, label='Weight, only for CodeFormer. 0 for better quality, 100 for better identity', default=50)
- ], [
- gr.outputs.Image(type="numpy", label="Output (The whole image)"),
- gr.outputs.File(label="Download the output image")
- ],
- title=title,
- description=description,
- article=article,
- # examples=[['AI-generate.jpg', 'v1.4', 2, 50], ['lincoln.jpg', 'v1.4', 2, 50], ['Blake_Lively.jpg', 'v1.4', 2, 50],
- # ['10045.png', 'v1.4', 2, 50]]).launch()
- examples=[['a1.jpg', 'v1.4', 2], ['a2.jpg', 'v1.4', 2], ['a3.jpg', 'v1.4', 2],['a4.jpg', 'v1.4', 2]])
-
-demo.queue(concurrency_count=4)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/HighCWu/GFPGAN-1.3/gfpgan/weights/README.md b/spaces/HighCWu/GFPGAN-1.3/gfpgan/weights/README.md
deleted file mode 100644
index 4d7b7e642591ef88575d9e6c360a4d29e0cc1a4f..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GFPGAN-1.3/gfpgan/weights/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Weights
-
-Put the downloaded weights to this folder.
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/layouts.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/layouts.py
deleted file mode 100644
index cab5aabffee2e69f55b883b1d81b6d1bda69e30e..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/layouts.py
+++ /dev/null
@@ -1,377 +0,0 @@
-from __future__ import annotations
-
-import warnings
-from typing import TYPE_CHECKING, Callable, List, Type
-
-from gradio.blocks import BlockContext
-from gradio.documentation import document, set_documentation_group
-
-set_documentation_group("layout")
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- from gradio.components import Component
-
-
-@document()
-class Row(BlockContext):
- """
- Row is a layout element within Blocks that renders all children horizontally.
- Example:
- with gradio.Blocks() as demo:
- with gradio.Row():
- gr.Image("lion.jpg")
- gr.Image("tiger.jpg")
- demo.launch()
- Guides: controlling_layout
- """
-
- def __init__(
- self,
- *,
- variant: str = "default",
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- variant: row type, 'default' (no background), 'panel' (gray background color and rounded corners), or 'compact' (rounded corners and no internal gap).
- visible: If False, row will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- self.variant = variant
- if variant == "compact":
- self.allow_expected_parents = False
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
-
- def get_config(self):
- return {"type": "row", "variant": self.variant, **super().get_config()}
-
- @staticmethod
- def update(
- visible: bool | None = None,
- ):
- return {
- "visible": visible,
- "__type__": "update",
- }
-
- def style(
- self,
- *,
- equal_height: bool | None = None,
- mobile_collapse: bool | None = None,
- **kwargs,
- ):
- """
- Styles the Row.
- Parameters:
- equal_height: If True, makes every child element have equal height
- mobile_collapse: DEPRECATED.
- """
- if equal_height is not None:
- self._style["equal_height"] = equal_height
- if mobile_collapse is not None:
- warnings.warn("mobile_collapse is no longer supported.")
- return self
-
-
-@document()
-class Column(BlockContext):
- """
- Column is a layout element within Blocks that renders all children vertically. The widths of columns can be set through the `scale` and `min_width` parameters.
- If a certain scale results in a column narrower than min_width, the min_width parameter will win.
- Example:
- with gradio.Blocks() as demo:
- with gradio.Row():
- with gradio.Column(scale=1):
- text1 = gr.Textbox()
- text2 = gr.Textbox()
- with gradio.Column(scale=4):
- btn1 = gr.Button("Button 1")
- btn2 = gr.Button("Button 2")
- Guides: controlling_layout
- """
-
- def __init__(
- self,
- *,
- scale: int = 1,
- min_width: int = 320,
- variant: str = "default",
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- scale: relative width compared to adjacent Columns. For example, if Column A has scale=2, and Column B has scale=1, A will be twice as wide as B.
- min_width: minimum pixel width of Column, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in a column narrower than min_width, the min_width parameter will be respected first.
- variant: column type, 'default' (no background), 'panel' (gray background color and rounded corners), or 'compact' (rounded corners and no internal gap).
- visible: If False, column will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- self.scale = scale
- self.min_width = min_width
- self.variant = variant
- if variant == "compact":
- self.allow_expected_parents = False
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
-
- def get_config(self):
- return {
- "type": "column",
- "variant": self.variant,
- "scale": self.scale,
- "min_width": self.min_width,
- **super().get_config(),
- }
-
- @staticmethod
- def update(
- variant: str | None = None,
- visible: bool | None = None,
- ):
- return {
- "variant": variant,
- "visible": visible,
- "__type__": "update",
- }
-
-
-class Tabs(BlockContext):
- """
- Tabs is a layout element within Blocks that can contain multiple "Tab" Components.
- """
-
- def __init__(
- self,
- *,
- selected: int | str | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
-            selected: The currently selected tab. Must correspond to an id passed to one of the child TabItems. Defaults to the first TabItem.
- visible: If False, Tabs will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
- self.selected = selected
-
- def get_config(self):
- return {"selected": self.selected, **super().get_config()}
-
- @staticmethod
- def update(
- selected: int | str | None = None,
- ):
- return {
- "selected": selected,
- "__type__": "update",
- }
-
- def change(self, fn: Callable, inputs: List[Component], outputs: List[Component]):
- """
- Parameters:
- fn: Callable function
- inputs: List of inputs
- outputs: List of outputs
- Returns: None
- """
- self.set_event_trigger("change", fn, inputs, outputs)
-
-
-@document()
-class Tab(BlockContext):
- """
-    Tab (or its alias TabItem) is a layout element. Components defined within the Tab will be visible when this tab is selected.
- Example:
- with gradio.Blocks() as demo:
- with gradio.Tab("Lion"):
- gr.Image("lion.jpg")
- gr.Button("New Lion")
- with gradio.Tab("Tiger"):
- gr.Image("tiger.jpg")
- gr.Button("New Tiger")
- Guides: controlling_layout
- """
-
- def __init__(
- self,
- label: str,
- *,
- id: int | str | None = None,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- label: The visual label for the tab
- id: An optional identifier for the tab, required if you wish to control the selected tab from a predict function.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- super().__init__(elem_id=elem_id, **kwargs)
- self.label = label
- self.id = id
-
- def get_config(self):
- return {
- "label": self.label,
- "id": self.id,
- **super().get_config(),
- }
-
- def select(self, fn: Callable, inputs: List[Component], outputs: List[Component]):
- """
- Parameters:
- fn: Callable function
- inputs: List of inputs
- outputs: List of outputs
- Returns: None
- """
- self.set_event_trigger("select", fn, inputs, outputs)
-
- def get_expected_parent(self) -> Type[Tabs]:
- return Tabs
-
- def get_block_name(self):
- return "tabitem"
-
-
-TabItem = Tab
-
-
-class Group(BlockContext):
- """
- Group is a layout element within Blocks which groups together children so that
- they do not have any padding or margin between them.
- Example:
- with gradio.Group():
- gr.Textbox(label="First")
- gr.Textbox(label="Last")
- """
-
- def __init__(
- self,
- *,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- visible: If False, group will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
-
- def get_config(self):
- return {"type": "group", **super().get_config()}
-
- @staticmethod
- def update(
- visible: bool | None = None,
- ):
- return {
- "visible": visible,
- "__type__": "update",
- }
-
-
-@document()
-class Box(BlockContext):
- """
-    Box is a layout element which places children in a box with rounded corners and
- some padding around them.
- Example:
- with gradio.Box():
- gr.Textbox(label="First")
- gr.Textbox(label="Last")
- """
-
- def __init__(
- self,
- *,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- visible: If False, box will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
-
- def get_config(self):
- return {"type": "box", **super().get_config()}
-
- @staticmethod
- def update(
- visible: bool | None = None,
- ):
- return {
- "visible": visible,
- "__type__": "update",
- }
-
- def style(self, **kwargs):
- return self
-
-
-class Form(BlockContext):
- def get_config(self):
- return {"type": "form", **super().get_config()}
-
-
-@document()
-class Accordion(BlockContext):
- """
- Accordion is a layout element which can be toggled to show/hide the contained content.
- Example:
- with gradio.Accordion("See Details"):
- gr.Markdown("lorem ipsum")
- """
-
- def __init__(
- self,
- label,
- *,
- open: bool = True,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- label: name of accordion section.
- open: if True, accordion is open by default.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- self.label = label
- self.open = open
- super().__init__(visible=visible, elem_id=elem_id, **kwargs)
-
- def get_config(self):
- return {
- "type": "accordion",
- "open": self.open,
- "label": self.label,
- **super().get_config(),
- }
-
- @staticmethod
- def update(
- open: bool | None = None,
- label: str | None = None,
- visible: bool | None = None,
- ):
- return {
- "visible": visible,
- "label": label,
- "open": open,
- "__type__": "update",
- }
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.04164205.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.04164205.js
deleted file mode 100644
index aee3a6df29f0cba38a84f02c7d8cd70d69217f44..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.04164205.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as L,i as M,s as w,e as S,b as c,d as _,f as g,x as H,n as d,F as j,P as C,c as h,m as b,j as v,k,o as T,R as q,T as B,a as D,U as E,V as F,K}from"./index.396f4a72.js";function P(n){let e;return{c(){e=S("div"),c(e,"class","output-html"),c(e,"id",n[0]),_(e,"min-h-[6rem]",n[3]),_(e,"!hidden",!n[2])},m(s,i){g(s,e,i),e.innerHTML=n[1]},p(s,[i]){i&2&&(e.innerHTML=s[1]),i&1&&c(e,"id",s[0]),i&8&&_(e,"min-h-[6rem]",s[3]),i&4&&_(e,"!hidden",!s[2])},i:H,o:H,d(s){s&&d(e)}}}function R(n,e,s){let{elem_id:i=""}=e,{value:a}=e,{visible:l=!0}=e,{min_height:f=!1}=e;const r=j();return n.$$set=t=>{"elem_id"in t&&s(0,i=t.elem_id),"value"in t&&s(1,a=t.value),"visible"in t&&s(2,l=t.visible),"min_height"in t&&s(3,f=t.min_height)},n.$$.update=()=>{n.$$.dirty&2&&r("change")},[i,a,l,f]}class U extends L{constructor(e){super(),M(this,e,R,P,w,{elem_id:0,value:1,visible:2,min_height:3})}}function V(n){let e,s,i,a,l;const f=[n[3],{variant:"center"}];let r={};for(let t=0;t{"label"in u&&s(4,i=u.label),"elem_id"in u&&s(0,a=u.elem_id),"visible"in u&&s(1,l=u.visible),"value"in u&&s(2,f=u.value),"loading_status"in u&&s(3,r=u.loading_status)},n.$$.update=()=>{n.$$.dirty&16&&t("change")},[a,l,f,r,i,m]}class G extends L{constructor(e){super(),M(this,e,A,z,w,{label:4,elem_id:0,visible:1,value:2,loading_status:3})}}var J=G;const N=["static"],O=n=>({type:"string",description:"HTML output"});export{J as Component,O as document,N as modes};
-//# sourceMappingURL=index.04164205.js.map
diff --git a/spaces/Hila/RobustViT/SegmentationTest/utils/summaries.py b/spaces/Hila/RobustViT/SegmentationTest/utils/summaries.py
deleted file mode 100644
index 6d880ad2a4fea30d0c00af91300a31bd218c4e6f..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/SegmentationTest/utils/summaries.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import os
-from torch.utils.tensorboard import SummaryWriter
-
-
-class TensorboardSummary(object):
- def __init__(self, directory):
- self.directory = directory
- self.writer = SummaryWriter(log_dir=os.path.join(self.directory))
-
- def add_scalar(self, *args):
- self.writer.add_scalar(*args)
\ No newline at end of file
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/x_transformer.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/x_transformer.py
deleted file mode 100644
index 5fc15bf9cfe0111a910e7de33d04ffdec3877576..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/x_transformer.py
+++ /dev/null
@@ -1,641 +0,0 @@
-"""shout-out to https://github.com/lucidrains/x-transformers/tree/main/x_transformers"""
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-from functools import partial
-from inspect import isfunction
-from collections import namedtuple
-from einops import rearrange, repeat, reduce
-
-# constants
-
-DEFAULT_DIM_HEAD = 64
-
-Intermediates = namedtuple('Intermediates', [
- 'pre_softmax_attn',
- 'post_softmax_attn'
-])
-
-LayerIntermediates = namedtuple('Intermediates', [
- 'hiddens',
- 'attn_intermediates'
-])
-
-
-class AbsolutePositionalEmbedding(nn.Module):
- def __init__(self, dim, max_seq_len):
- super().__init__()
- self.emb = nn.Embedding(max_seq_len, dim)
- self.init_()
-
- def init_(self):
- nn.init.normal_(self.emb.weight, std=0.02)
-
- def forward(self, x):
- n = torch.arange(x.shape[1], device=x.device)
- return self.emb(n)[None, :, :]
-
-
-class FixedPositionalEmbedding(nn.Module):
- def __init__(self, dim):
- super().__init__()
- inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))
- self.register_buffer('inv_freq', inv_freq)
-
- def forward(self, x, seq_dim=1, offset=0):
- t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset
- sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq)
- emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1)
- return emb[None, :, :]
-
-
-# helpers
-
-def exists(val):
- return val is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def always(val):
- def inner(*args, **kwargs):
- return val
- return inner
-
-
-def not_equals(val):
- def inner(x):
- return x != val
- return inner
-
-
-def equals(val):
- def inner(x):
- return x == val
- return inner
-
-
-def max_neg_value(tensor):
- return -torch.finfo(tensor.dtype).max
-
-
-# keyword argument helpers
-
-def pick_and_pop(keys, d):
- values = list(map(lambda key: d.pop(key), keys))
- return dict(zip(keys, values))
-
-
-def group_dict_by_key(cond, d):
- return_val = [dict(), dict()]
- for key in d.keys():
- match = bool(cond(key))
- ind = int(not match)
- return_val[ind][key] = d[key]
- return (*return_val,)
-
-
-def string_begins_with(prefix, str):
- return str.startswith(prefix)
-
-
-def group_by_key_prefix(prefix, d):
- return group_dict_by_key(partial(string_begins_with, prefix), d)
-
-
-def groupby_prefix_and_trim(prefix, d):
- kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d)
- kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items())))
- return kwargs_without_prefix, kwargs
-
-
-# classes
-class Scale(nn.Module):
- def __init__(self, value, fn):
- super().__init__()
- self.value = value
- self.fn = fn
-
- def forward(self, x, **kwargs):
- x, *rest = self.fn(x, **kwargs)
- return (x * self.value, *rest)
-
-
-class Rezero(nn.Module):
- def __init__(self, fn):
- super().__init__()
- self.fn = fn
- self.g = nn.Parameter(torch.zeros(1))
-
- def forward(self, x, **kwargs):
- x, *rest = self.fn(x, **kwargs)
- return (x * self.g, *rest)
-
-
-class ScaleNorm(nn.Module):
- def __init__(self, dim, eps=1e-5):
- super().__init__()
- self.scale = dim ** -0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(1))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class RMSNorm(nn.Module):
- def __init__(self, dim, eps=1e-8):
- super().__init__()
- self.scale = dim ** -0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(dim))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class Residual(nn.Module):
- def forward(self, x, residual):
- return x + residual
-
-
-class GRUGating(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.gru = nn.GRUCell(dim, dim)
-
- def forward(self, x, residual):
- gated_output = self.gru(
- rearrange(x, 'b n d -> (b n) d'),
- rearrange(residual, 'b n d -> (b n) d')
- )
-
- return gated_output.reshape_as(x)
-
-
-# feedforward
-
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-# attention.
-class Attention(nn.Module):
- def __init__(
- self,
- dim,
- dim_head=DEFAULT_DIM_HEAD,
- heads=8,
- causal=False,
- mask=None,
- talking_heads=False,
- sparse_topk=None,
- use_entmax15=False,
- num_mem_kv=0,
- dropout=0.,
- on_attn=False
- ):
- super().__init__()
- if use_entmax15:
- raise NotImplementedError("Check out entmax activation instead of softmax activation!")
- self.scale = dim_head ** -0.5
- self.heads = heads
- self.causal = causal
- self.mask = mask
-
- inner_dim = dim_head * heads
-
- self.to_q = nn.Linear(dim, inner_dim, bias=False)
- self.to_k = nn.Linear(dim, inner_dim, bias=False)
- self.to_v = nn.Linear(dim, inner_dim, bias=False)
- self.dropout = nn.Dropout(dropout)
-
- # talking heads
- self.talking_heads = talking_heads
- if talking_heads:
- self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads))
- self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads))
-
- # explicit topk sparse attention
- self.sparse_topk = sparse_topk
-
- # entmax
- #self.attn_fn = entmax15 if use_entmax15 else F.softmax
- self.attn_fn = F.softmax
-
- # add memory key / values
- self.num_mem_kv = num_mem_kv
- if num_mem_kv > 0:
- self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
- self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
-
- # attention on attention
- self.attn_on_attn = on_attn
- self.to_out = nn.Sequential(nn.Linear(inner_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(inner_dim, dim)
-
- def forward(
- self,
- x,
- context=None,
- mask=None,
- context_mask=None,
- rel_pos=None,
- sinusoidal_emb=None,
- prev_attn=None,
- mem=None
- ):
- b, n, _, h, talking_heads, device = *x.shape, self.heads, self.talking_heads, x.device
- kv_input = default(context, x)
-
- q_input = x
- k_input = kv_input
- v_input = kv_input
-
- if exists(mem):
- k_input = torch.cat((mem, k_input), dim=-2)
- v_input = torch.cat((mem, v_input), dim=-2)
-
- if exists(sinusoidal_emb):
- # in shortformer, the query would start at a position offset depending on the past cached memory
- offset = k_input.shape[-2] - q_input.shape[-2]
- q_input = q_input + sinusoidal_emb(q_input, offset=offset)
- k_input = k_input + sinusoidal_emb(k_input)
-
- q = self.to_q(q_input)
- k = self.to_k(k_input)
- v = self.to_v(v_input)
-
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v))
-
- input_mask = None
- if any(map(exists, (mask, context_mask))):
- q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool())
- k_mask = q_mask if not exists(context) else context_mask
- k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool())
- q_mask = rearrange(q_mask, 'b i -> b () i ()')
- k_mask = rearrange(k_mask, 'b j -> b () () j')
- input_mask = q_mask * k_mask
-
- if self.num_mem_kv > 0:
- mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v))
- k = torch.cat((mem_k, k), dim=-2)
- v = torch.cat((mem_v, v), dim=-2)
- if exists(input_mask):
- input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True)
-
- dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale
- mask_value = max_neg_value(dots)
-
- if exists(prev_attn):
- dots = dots + prev_attn
-
- pre_softmax_attn = dots
-
- if talking_heads:
- dots = einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous()
-
- if exists(rel_pos):
- dots = rel_pos(dots)
-
- if exists(input_mask):
- dots.masked_fill_(~input_mask, mask_value)
- del input_mask
-
- if self.causal:
- i, j = dots.shape[-2:]
- r = torch.arange(i, device=device)
- mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j')
- mask = F.pad(mask, (j - i, 0), value=False)
- dots.masked_fill_(mask, mask_value)
- del mask
-
- if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]:
- top, _ = dots.topk(self.sparse_topk, dim=-1)
- vk = top[..., -1].unsqueeze(-1).expand_as(dots)
- mask = dots < vk
- dots.masked_fill_(mask, mask_value)
- del mask
-
- attn = self.attn_fn(dots, dim=-1)
- post_softmax_attn = attn
-
- attn = self.dropout(attn)
-
- if talking_heads:
- attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous()
-
- out = einsum('b h i j, b h j d -> b h i d', attn, v)
- out = rearrange(out, 'b h n d -> b n (h d)')
-
- intermediates = Intermediates(
- pre_softmax_attn=pre_softmax_attn,
- post_softmax_attn=post_softmax_attn
- )
-
- return self.to_out(out), intermediates
-
-
-class AttentionLayers(nn.Module):
- def __init__(
- self,
- dim,
- depth,
- heads=8,
- causal=False,
- cross_attend=False,
- only_cross=False,
- use_scalenorm=False,
- use_rmsnorm=False,
- use_rezero=False,
- rel_pos_num_buckets=32,
- rel_pos_max_distance=128,
- position_infused_attn=False,
- custom_layers=None,
- sandwich_coef=None,
- par_ratio=None,
- residual_attn=False,
- cross_residual_attn=False,
- macaron=False,
- pre_norm=True,
- gate_residual=False,
- **kwargs
- ):
- super().__init__()
- ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs)
- attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs)
-
- dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD)
-
- self.dim = dim
- self.depth = depth
- self.layers = nn.ModuleList([])
-
- self.has_pos_emb = position_infused_attn
- self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None
- self.rotary_pos_emb = always(None)
-
-        assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than or equal to the relative position max distance'
- self.rel_pos = None
-
- self.pre_norm = pre_norm
-
- self.residual_attn = residual_attn
- self.cross_residual_attn = cross_residual_attn
-
- norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm
- norm_class = RMSNorm if use_rmsnorm else norm_class
- norm_fn = partial(norm_class, dim)
-
- norm_fn = nn.Identity if use_rezero else norm_fn
- branch_fn = Rezero if use_rezero else None
-
- if cross_attend and not only_cross:
- default_block = ('a', 'c', 'f')
- elif cross_attend and only_cross:
- default_block = ('c', 'f')
- else:
- default_block = ('a', 'f')
-
- if macaron:
- default_block = ('f',) + default_block
-
- if exists(custom_layers):
- layer_types = custom_layers
- elif exists(par_ratio):
- par_depth = depth * len(default_block)
- assert 1 < par_ratio <= par_depth, 'par ratio out of range'
- default_block = tuple(filter(not_equals('f'), default_block))
- par_attn = par_depth // par_ratio
- depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper
- par_width = (depth_cut + depth_cut // par_attn) // par_attn
- assert len(default_block) <= par_width, 'default block is too large for par_ratio'
- par_block = default_block + ('f',) * (par_width - len(default_block))
- par_head = par_block * par_attn
- layer_types = par_head + ('f',) * (par_depth - len(par_head))
- elif exists(sandwich_coef):
-            assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be greater than 0 and no greater than the depth'
- layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef
- else:
- layer_types = default_block * depth
-
- self.layer_types = layer_types
- self.num_attn_layers = len(list(filter(equals('a'), layer_types)))
-
- for layer_type in self.layer_types:
- if layer_type == 'a':
- layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs)
- elif layer_type == 'c':
- layer = Attention(dim, heads=heads, **attn_kwargs)
- elif layer_type == 'f':
- layer = FeedForward(dim, **ff_kwargs)
- layer = layer if not macaron else Scale(0.5, layer)
- else:
- raise Exception(f'invalid layer type {layer_type}')
-
- if isinstance(layer, Attention) and exists(branch_fn):
- layer = branch_fn(layer)
-
- if gate_residual:
- residual_fn = GRUGating(dim)
- else:
- residual_fn = Residual()
-
- self.layers.append(nn.ModuleList([
- norm_fn(),
- layer,
- residual_fn
- ]))
-
- def forward(
- self,
- x,
- context=None,
- mask=None,
- context_mask=None,
- mems=None,
- return_hiddens=False
- ):
- hiddens = []
- intermediates = []
- prev_attn = None
- prev_cross_attn = None
-
- mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers
-
- for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)):
- is_last = ind == (len(self.layers) - 1)
-
- if layer_type == 'a':
- hiddens.append(x)
- layer_mem = mems.pop(0)
-
- residual = x
-
- if self.pre_norm:
- x = norm(x)
-
- if layer_type == 'a':
- out, inter = block(x, mask=mask, sinusoidal_emb=self.pia_pos_emb, rel_pos=self.rel_pos,
- prev_attn=prev_attn, mem=layer_mem)
- elif layer_type == 'c':
- out, inter = block(x, context=context, mask=mask, context_mask=context_mask, prev_attn=prev_cross_attn)
- elif layer_type == 'f':
- out = block(x)
-
- x = residual_fn(out, residual)
-
- if layer_type in ('a', 'c'):
- intermediates.append(inter)
-
- if layer_type == 'a' and self.residual_attn:
- prev_attn = inter.pre_softmax_attn
- elif layer_type == 'c' and self.cross_residual_attn:
- prev_cross_attn = inter.pre_softmax_attn
-
- if not self.pre_norm and not is_last:
- x = norm(x)
-
- if return_hiddens:
- intermediates = LayerIntermediates(
- hiddens=hiddens,
- attn_intermediates=intermediates
- )
-
- return x, intermediates
-
- return x
-
-
-class Encoder(AttentionLayers):
- def __init__(self, **kwargs):
- assert 'causal' not in kwargs, 'cannot set causality on encoder'
- super().__init__(causal=False, **kwargs)
-
-
-
-class TransformerWrapper(nn.Module):
- def __init__(
- self,
- *,
- num_tokens,
- max_seq_len,
- attn_layers,
- emb_dim=None,
- max_mem_len=0.,
- emb_dropout=0.,
- num_memory_tokens=None,
- tie_embedding=False,
- use_pos_emb=True
- ):
- super().__init__()
- assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder'
-
- dim = attn_layers.dim
- emb_dim = default(emb_dim, dim)
-
- self.max_seq_len = max_seq_len
- self.max_mem_len = max_mem_len
- self.num_tokens = num_tokens
-
- self.token_emb = nn.Embedding(num_tokens, emb_dim)
- self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if (
- use_pos_emb and not attn_layers.has_pos_emb) else always(0)
- self.emb_dropout = nn.Dropout(emb_dropout)
-
- self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity()
- self.attn_layers = attn_layers
- self.norm = nn.LayerNorm(dim)
-
- self.init_()
-
- self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t()
-
- # memory tokens (like [cls]) from Memory Transformers paper
- num_memory_tokens = default(num_memory_tokens, 0)
- self.num_memory_tokens = num_memory_tokens
- if num_memory_tokens > 0:
- self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim))
-
- # let funnel encoder know number of memory tokens, if specified
- if hasattr(attn_layers, 'num_memory_tokens'):
- attn_layers.num_memory_tokens = num_memory_tokens
-
- def init_(self):
- nn.init.normal_(self.token_emb.weight, std=0.02)
-
- def forward(
- self,
- x,
- return_embeddings=False,
- mask=None,
- return_mems=False,
- return_attn=False,
- mems=None,
- **kwargs
- ):
- b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens
- x = self.token_emb(x)
- x += self.pos_emb(x)
- x = self.emb_dropout(x)
-
- x = self.project_emb(x)
-
- if num_mem > 0:
- mem = repeat(self.memory_tokens, 'n d -> b n d', b=b)
- x = torch.cat((mem, x), dim=1)
-
- # auto-handle masking after appending memory tokens
- if exists(mask):
- mask = F.pad(mask, (num_mem, 0), value=True)
-
- x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs)
- x = self.norm(x)
-
- mem, x = x[:, :num_mem], x[:, num_mem:]
-
- out = self.to_logits(x) if not return_embeddings else x
-
- if return_mems:
- hiddens = intermediates.hiddens
- new_mems = list(map(lambda pair: torch.cat(pair, dim=-2), zip(mems, hiddens))) if exists(mems) else hiddens
- new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems))
- return out, new_mems
-
- if return_attn:
- attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
- return out, attn_maps
-
- return out
-
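-# Minimal usage sketch (hypothetical sizes, not part of the original module): wrap an
-# Encoder in TransformerWrapper and feed token ids through it to get per-token logits.
-#
-#   model = TransformerWrapper(
-#       num_tokens=256, max_seq_len=1024,
-#       attn_layers=Encoder(dim=512, depth=6, heads=8))
-#   tokens = torch.randint(0, 256, (1, 1024))
-#   logits = model(tokens)  # shape (1, 1024, 256)
-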
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/depthwise_sep_conv.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/depthwise_sep_conv.py
deleted file mode 100644
index 83dd15c3df1d9f40baf0091a373fa224532c9ddd..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/depthwise_sep_conv.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-
-class DepthWiseSeperableConv(nn.Module):
- def __init__(self, in_dim, out_dim, *args, **kwargs):
- super().__init__()
- if 'groups' in kwargs:
- # ignoring groups for Depthwise Sep Conv
- del kwargs['groups']
-
- self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs)
- self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1)
-
- def forward(self, x):
- out = self.depthwise(x)
- out = self.pointwise(out)
- return out
\ No newline at end of file
diff --git a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/Dockerfile b/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/Dockerfile
deleted file mode 100644
index 3a4dc66fdb50519fca2a6eaf64cbe0ea05b09a3f..0000000000000000000000000000000000000000
--- a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/Dockerfile
+++ /dev/null
@@ -1,13 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-COPY . .
-
-EXPOSE 7860
-
-CMD ["shiny", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/KANATA980122/bingo/Dockerfile b/spaces/KANATA980122/bingo/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/KANATA980122/bingo/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/KarloDarlo/3D_Photo_Inpainting/MiDaS/run.py b/spaces/KarloDarlo/3D_Photo_Inpainting/MiDaS/run.py
deleted file mode 100644
index a483d2850a81b3520b80097eff4bb9367ef6a144..0000000000000000000000000000000000000000
--- a/spaces/KarloDarlo/3D_Photo_Inpainting/MiDaS/run.py
+++ /dev/null
@@ -1,81 +0,0 @@
-"""Compute depth maps for images in the input folder.
-"""
-import os
-import glob
-import torch
-# from monodepth_net import MonoDepthNet
-# import utils
-import matplotlib.pyplot as plt
-import numpy as np
-import cv2
-import imageio
-
-
-def run_depth(img_names, input_path, output_path, model_path, Net, utils, target_w=None):
- """Run MonoDepthNN to compute depth maps.
-
- Args:
- input_path (str): path to input folder
- output_path (str): path to output folder
- model_path (str): path to saved model
- """
- print("initialize")
-
- # select device
- device = torch.device("cpu")
- print("device: %s" % device)
-
- # load network
- model = Net(model_path)
- model.to(device)
- model.eval()
-
- # get input
- # img_names = glob.glob(os.path.join(input_path, "*"))
- num_images = len(img_names)
-
- # create output folder
- os.makedirs(output_path, exist_ok=True)
-
- print("start processing")
-
- for ind, img_name in enumerate(img_names):
-
- print(" processing {} ({}/{})".format(img_name, ind + 1, num_images))
-
- # input
- img = utils.read_image(img_name)
- w = img.shape[1]
- scale = 640. / max(img.shape[0], img.shape[1])
- target_height, target_width = int(round(img.shape[0] * scale)), int(round(img.shape[1] * scale))
- img_input = utils.resize_image(img)
- print(img_input.shape)
- img_input = img_input.to(device)
- # compute
- with torch.no_grad():
- out = model.forward(img_input)
-
- depth = utils.resize_depth(out, target_width, target_height)
- img = cv2.resize((img * 255).astype(np.uint8), (target_width, target_height), interpolation=cv2.INTER_AREA)
-
- filename = os.path.join(
- output_path, os.path.splitext(os.path.basename(img_name))[0]
- )
- np.save(filename + '.npy', depth)
- utils.write_depth(filename, depth, bits=2)
-
- print("finished")
-
-
-# if __name__ == "__main__":
-# # set paths
-# INPUT_PATH = "image"
-# OUTPUT_PATH = "output"
-# MODEL_PATH = "model.pt"
-
-# # set torch options
-# torch.backends.cudnn.enabled = True
-# torch.backends.cudnn.benchmark = True
-
-# # compute depth maps
-# run_depth(INPUT_PATH, OUTPUT_PATH, MODEL_PATH, Net, target_w=640)
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Conversion/app.py b/spaces/Kevin676/ChatGPT-with-Voice-Conversion/app.py
deleted file mode 100644
index 38cb0f50a926e5737ba319bd5a7a157cb356668b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Conversion/app.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# !git clone https://github.com/Edresson/Coqui-TTS -b multilingual-torchaudio-SE TTS
-
-from TTS.utils.manage import ModelManager
-from TTS.utils.synthesizer import Synthesizer
-
-manager = ModelManager()
-model_path1, config_path1, model_item = manager.download_model("tts_models/zh-CN/baker/tacotron2-DDC-GST")
-synthesizer = Synthesizer(
- model_path1, config_path1, None, None, None,
-)
-
-import os
-import shutil
-import gradio as gr
-
-import sys
-
-import string
-import time
-import argparse
-import json
-
-import numpy as np
-# import IPython
-# from IPython.display import Audio
-
-import torch
-
-from TTS.tts.utils.synthesis import synthesis
-from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols
-try:
- from TTS.utils.audio import AudioProcessor
-except:
- from TTS.utils.audio import AudioProcessor
-
-
-from TTS.tts.models import setup_model
-from TTS.config import load_config
-from TTS.tts.models.vits import *
-
-from TTS.tts.utils.speakers import SpeakerManager
-from pydub import AudioSegment
-
-# from google.colab import files
-import librosa
-
-from scipy.io.wavfile import write, read
-
-import subprocess
-
-import openai
-
-mes = [
- {"role": "system", "content": "You are my personal assistant. Try to be helpful. Respond to me only in Chinese."}
-]
-
-
-'''
-from google.colab import drive
-drive.mount('/content/drive')
-
-src_path = os.path.join(os.path.join(os.path.join(os.path.join(os.getcwd(), 'drive'), 'MyDrive'), 'Colab Notebooks'), 'best_model_latest.pth.tar')
-dst_path = os.path.join(os.getcwd(), 'best_model.pth.tar')
-
-shutil.copy(src_path, dst_path)
-'''
-
-TTS_PATH = "TTS/"
-
-# add libraries into environment
-sys.path.append(TTS_PATH) # set this if TTS is not installed globally
-
-# Paths definition
-
-OUT_PATH = 'out/'
-
-# create output path
-os.makedirs(OUT_PATH, exist_ok=True)
-
-# model vars
-MODEL_PATH = 'best_model.pth.tar'
-CONFIG_PATH = 'config.json'
-TTS_LANGUAGES = "language_ids.json"
-TTS_SPEAKERS = "speakers.json"
-USE_CUDA = torch.cuda.is_available()
-
-# load the config
-C = load_config(CONFIG_PATH)
-
-# load the audio processor
-ap = AudioProcessor(**C.audio)
-
-speaker_embedding = None
-
-C.model_args['d_vector_file'] = TTS_SPEAKERS
-C.model_args['use_speaker_encoder_as_loss'] = False
-
-model = setup_model(C)
-model.language_manager.set_language_ids_from_file(TTS_LANGUAGES)
-# print(model.language_manager.num_languages, model.embedded_language_dim)
-# print(model.emb_l)
-cp = torch.load(MODEL_PATH, map_location=torch.device('cpu'))
-# remove speaker encoder
-model_weights = cp['model'].copy()
-for key in list(model_weights.keys()):
- if "speaker_encoder" in key:
- del model_weights[key]
-
-model.load_state_dict(model_weights)
-
-model.eval()
-
-if USE_CUDA:
- model = model.cuda()
-
-# synthesize voice
-use_griffin_lim = False
-
-# Paths definition
-
-CONFIG_SE_PATH = "config_se.json"
-CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar"
-
-# Load the Speaker encoder
-
-SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA)
-
-# Define helper function
-
-
-def chatgpt(apikey, result):
-
- openai.api_key = apikey
-
- messages = mes
-
- # chatgpt
- content = result
- messages.append({"role": "user", "content": content})
-
- completion = openai.ChatCompletion.create(
- model = "gpt-3.5-turbo",
- messages = messages
- )
-
- chat_response = completion.choices[0].message.content
-
- messages.append({"role": "assistant", "content": chat_response})
-
- wavs = synthesizer.tts(chat_response + "。")
-
- synthesizer.save_wav(wavs, "output.wav")
-
- a1, b1 = read("output.wav")
-
- audio_out = "audio_out.wav"
-
- write(audio_out, a1, b1)
-
- return [chat_response, audio_out]
-
-def compute_spec(ref_file):
- y, sr = librosa.load(ref_file, sr=ap.sample_rate)
- spec = ap.spectrogram(y)
- spec = torch.FloatTensor(spec).unsqueeze(0)
- return spec
-
-
-def voice_conversion(ta, ra, da):
-
- target_audio = 'target.wav'
- reference_audio = 'reference.wav'
- driving_audio = 'driving.wav'
-
- write(target_audio, ta[0], ta[1])
- write(reference_audio, ra[0], ra[1])
- write(driving_audio, da[0], da[1])
-
- # !ffmpeg-normalize $target_audio -nt rms -t=-27 -o $target_audio -ar 16000 -f
- # !ffmpeg-normalize $reference_audio -nt rms -t=-27 -o $reference_audio -ar 16000 -f
- # !ffmpeg-normalize $driving_audio -nt rms -t=-27 -o $driving_audio -ar 16000 -f
-
- files = [target_audio, reference_audio, driving_audio]
-
- for file in files:
- subprocess.run(["ffmpeg-normalize", file, "-nt", "rms", "-t=-27", "-o", file, "-ar", "16000", "-f"])
-
- # ta_ = read(target_audio)
-
- target_emb = SE_speaker_manager.compute_d_vector_from_clip([target_audio])
- target_emb = torch.FloatTensor(target_emb).unsqueeze(0)
-
- driving_emb = SE_speaker_manager.compute_d_vector_from_clip([reference_audio])
- driving_emb = torch.FloatTensor(driving_emb).unsqueeze(0)
-
- # Convert the voice
-
- driving_spec = compute_spec(driving_audio)
- y_lengths = torch.tensor([driving_spec.size(-1)])
- if USE_CUDA:
- ref_wav_voc, _, _ = model.voice_conversion(driving_spec.cuda(), y_lengths.cuda(), driving_emb.cuda(), target_emb.cuda())
- ref_wav_voc = ref_wav_voc.squeeze().cpu().detach().numpy()
- else:
- ref_wav_voc, _, _ = model.voice_conversion(driving_spec, y_lengths, driving_emb, target_emb)
- ref_wav_voc = ref_wav_voc.squeeze().detach().numpy()
-
- # print("Reference Audio after decoder:")
- # IPython.display.display(Audio(ref_wav_voc, rate=ap.sample_rate))
-
- return (ap.sample_rate, ref_wav_voc)
-
-
-block = gr.Blocks()
-
-with block:
- with gr.Group():
- gr.Markdown(
- """ #
🥳💬💕 - TalktoAI,随时随地,谈天说地!
-
- ##
🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!
Model by [Raven](https://huggingface.co/spaces/BlinkDL/Raven-RWKV-7B). Thanks to [PENG Bo](https://github.com/BlinkDL). Please follow me on [Bilibili](https://space.bilibili.com/501495851?spm_id_from=333.1007.0.0).
-
- """
- )
-
- gr.HTML('''
-
- ''')
-
-
-block.launch(show_error=True)
\ No newline at end of file
diff --git a/spaces/KevinQHLin/UniVTG/main/config.py b/spaces/KevinQHLin/UniVTG/main/config.py
deleted file mode 100644
index 40eab1902681354b755102bbfbccd15976a6e9b6..0000000000000000000000000000000000000000
--- a/spaces/KevinQHLin/UniVTG/main/config.py
+++ /dev/null
@@ -1,378 +0,0 @@
-import os
-import pdb
-import time
-import torch
-import logging
-import argparse
-import importlib
-from utils.basic_utils import mkdirp, remkdirp, \
- load_json, save_json, make_zipfile, dict_to_markdown
-
-logger = logging.getLogger(__name__)
-logging.basicConfig(format="%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s - %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=logging.INFO)
-
-class BaseOptions(object):
- saved_option_filename = "opt.json"
- ckpt_filename = "model.ckpt"
- tensorboard_log_dir = "tensorboard_log"
- train_log_filename = "train.log.txt"
- eval_log_filename = "eval.log.txt"
-
- def __init__(self):
- self.parser = None
- self.initialized = False
- self.opt = None
-
- def initialize(self):
- self.initialized = True
- parser = argparse.ArgumentParser()
- # * Running configs
- parser.add_argument("--dset_type", type=str, choices=["mr", "hl", "vs", "vlp"]) # moment retrieval, highlight detection, and video summarization
- parser.add_argument("--dset_name", type=str, choices=["qvhighlights", "charades", "anet", "tvsum", "youtube", "summe", "ego4d", "qfvs", "video2gif", "coin", "hacs", "vlp", "videocc", "tacos"])
- parser.add_argument("--domain_name", type=str, default=None)
- parser.add_argument("--model_id", type=str, default="moment_detr")
- parser.add_argument("--exp_id", type=str, default="debug", help="id of this run, required at training")
- parser.add_argument("--device", type=int, default=0, help="0 cuda, -1 cpu")
- parser.add_argument("--gpu_id", type=int, default=0)
- parser.add_argument("--debug", action="store_true",
- help="debug (fast) mode, break all loops, do not load all data into memory.")
- parser.add_argument("--seed", type=int, default=2018, help="random seed")
-
- # * DDP
- parser.add_argument('--local_rank', default=-1, type=int, help='node rank for distributed training')
-
-
- parser.add_argument("--eval_split_name", type=str, default="val",
- help="should match keys in video_duration_idx_path, must set for VCMR")
- parser.add_argument("--data_ratio", type=float, default=1.0,
- help="how many training and eval data to use. 1.0: use all, 0.1: use 10%."
- "Use small portion for debug purposes. Note this is different from --debug, "
- "which works by breaking the loops, typically they are not used together.")
- parser.add_argument("--results_root", type=str, default="results")
- parser.add_argument("--num_workers", type=int, default=0,
- help="num subprocesses used to load the data, 0: use main process")
- parser.add_argument("--no_pin_memory", action="store_true",
- help="Don't use pin_memory=True for dataloader. "
- "ref: https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/4")
-
- # * Training configs
- parser.add_argument("--bsz", type=int, default=32, help="mini-batch size")
- parser.add_argument("--n_epoch", type=int, default=200, help="number of epochs to run")
- parser.add_argument("--max_es_cnt", type=int, default=200,
- help="number of epochs to early stop, use -1 to disable early stop")
- parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")
- parser.add_argument("--lr_drop", type=int, default=400, help="drop learning rate to 1/10 every lr_drop epochs")
- parser.add_argument("--lr_gamma", type=float, default=0.1, help="lr reduces the gamma times after the `drop' epoch")
- parser.add_argument("--lr_warmup", type=float, default=-1, help="linear warmup scheme")
- parser.add_argument("--wd", type=float, default=1e-4, help="weight decay")
- parser.add_argument("--grad_clip", type=float, default=0.1, help="perform gradient clip, -1: disable")
-
- # ** Loss coefficients
- # *** boundary branch
- parser.add_argument("--span_loss_type", default="l1", type=str, choices=['l1', 'ce'],
- help="l1: (center-x, width) regression. ce: (st_idx, ed_idx) classification.")
- parser.add_argument('--b_loss_coef', default=10, type=float) # boundary regression e.g., l1
- parser.add_argument('--g_loss_coef', default=1, type=float) # giou loss
- # *** foreground branch
- parser.add_argument('--eos_coef', default=0.1, type=float, help="relative classification weight of the no-object class")
- parser.add_argument('--f_loss_coef', default=4, type=float) # cls loss for foreground
- # *** saliency branch
- parser.add_argument("--s_loss_intra_coef", type=float, default=1., help="inter-video (frame-level) saliency loss e.g. momentdetr saliency loss")
- parser.add_argument("--s_loss_inter_coef", type=float, default=0., help="intra-video (sample-level) saliency loss,")
-
- # * Eval configs
- parser.add_argument("--main_metric", type=str, default="MR-full-mAP")
- parser.add_argument('--eval_mode', default=None, type=str,
- help="how to integrate foreground and saliency for better prediction")
- parser.add_argument("--eval_bsz", type=int, default=100,
- help="mini-batch size at inference, for query")
- parser.add_argument("--eval_epoch", type=int, default=5,
- help="number of epochs for once inference")
- parser.add_argument("--eval_init", action="store_true", help="evaluate model before training i.e. `epoch=-1'")
- parser.add_argument("--save_interval", type=int, default=50)
-
- parser.add_argument("--resume", type=str, default=None,
- help="checkpoint path to resume or evaluate, without --resume_all this only load weights")
- parser.add_argument("--resume_dir", type=str, default=None,
- help="checkpoint path to resume or evaluate, without --resume_all this only load weights")
- parser.add_argument("--resume_all", action="store_true",
- help="if --resume_all, load optimizer/scheduler/epoch as well")
- parser.add_argument("--start_epoch", type=int, default=None,
- help="if None, will be set automatically when using --resume_all")
-
- # ** NMS configs
- parser.add_argument("--no_sort_results", action="store_true",
- help="do not sort results, use this for moment query visualization")
- parser.add_argument("--max_before_nms", type=int, default=10)
- parser.add_argument("--max_after_nms", type=int, default=10)
- parser.add_argument("--conf_thd", type=float, default=0.0, help="only keep windows with conf >= conf_thd")
- parser.add_argument("--nms_thd", type=float, default=-1,
- help="additionally use non-maximum suppression "
- "(or non-minimum suppression for distance)"
- "to post-processing the predictions. "
- "-1: do not use nms. [0, 1]")
-
- # * Dataset configs
- parser.add_argument("--use_cache", type=int, default=-1, help="Preload features into cache for fast IO")
- parser.add_argument("--max_q_l", type=int, default=75)
- parser.add_argument("--max_v_l", type=int, default=75)
- parser.add_argument("--clip_length", type=float, default=1.0)
- parser.add_argument("--clip_len_list", type=int, nargs='+')
- parser.add_argument("--max_windows", type=int, default=5)
-
- parser.add_argument("--add_easy_negative", type=int, default=1)
- parser.add_argument("--easy_negative_only", type=int, default=1)
- parser.add_argument("--round_multiple", type=int, default=1)
-
- parser.add_argument("--train_path", type=str, default=None, nargs='+')
- parser.add_argument("--eval_path", type=str, default=None,
- help="Evaluating during training, for Dev set. If None, will only do training, ")
- parser.add_argument("--train_path_list", type=str, nargs='+')
- parser.add_argument("--eval_path_list", type=str, nargs='+')
- parser.add_argument("--feat_root_list", type=str, nargs='+')
-
- parser.add_argument("--no_norm_vfeat", action="store_true", help="Do not do normalize video feat")
- parser.add_argument("--no_norm_tfeat", action="store_true", help="Do not do normalize text feat")
- parser.add_argument("--v_feat_dirs", type=str, nargs="+",
- help="video feature dirs. If more than one, will concat their features. "
- "Note that sub ctx features are also accepted here.")
- parser.add_argument("--t_feat_dir", type=str, help="text/query feature dir")
- parser.add_argument("--v_feat_dim", type=int, help="video feature dim")
- parser.add_argument("--t_feat_dim", type=int, help="text/query feature dim")
- parser.add_argument("--ctx_mode", type=str, default="video_tef")
- parser.add_argument("--v_feat_types", type=str)
- parser.add_argument("--t_feat_type", type=str)
-
- # * Model configs
- parser.add_argument('--position_embedding', default='sine', type=str, choices=('sine', 'learned'),
- help="Type of positional embedding to use on top of the image features")
- parser.add_argument("--n_input_proj", type=int, default=2, help="#layers to vid/txt projector")
- parser.add_argument("--temperature", type=float, default=0.07, help="temperature nce contrastive_align_loss")
-
- # ** Transformer
- parser.add_argument('--enc_layers', default=4, type=int,
- help="Number of encoding layers in the transformer")
- parser.add_argument('--sub_enc_layers', default=2, type=int,
- help="Number of encoding layers in the video / text transformer in albef-style.")
- parser.add_argument('--dec_layers', default=2, type=int,
- help="Number of decoding layers in the transformer, N/A for UniVTG")
- parser.add_argument('--dim_feedforward', default=1024, type=int,
- help="Intermediate size of the feedforward layers in the transformer blocks")
- parser.add_argument('--hidden_dim', default=256, type=int,
- help="Size of the embeddings (dimension of the transformer)")
- parser.add_argument('--input_dropout', default=0.5, type=float,
- help="Dropout applied in input")
- parser.add_argument('--dropout', default=0.1, type=float,
- help="Dropout applied in the transformer")
- parser.add_argument('--droppath', default=0.1, type=float,
- help="Droppath applied in the transformer")
- parser.add_argument("--txt_drop_ratio", default=0, type=float,
- help="drop txt_drop_ratio tokens from text input. 0.1=10%")
- parser.add_argument("--use_txt_pos", action="store_true", help="use position_embedding for text as well.")
- parser.add_argument('--nheads', default=8, type=int,
- help="Number of attention heads inside the transformer's attentions")
- parser.add_argument('--num_queries', default=10, type=int,
- help="Number of query slots")
- parser.add_argument('--pre_norm', action='store_true')
-
- # ** momentdetr configs e.g. Matcher, saliency margin
- parser.add_argument('--set_cost_span', default=10, type=float,
- help="L1 span coefficient in the matching cost")
- parser.add_argument('--set_cost_giou', default=1, type=float,
- help="giou span coefficient in the matching cost")
- parser.add_argument('--set_cost_class', default=4, type=float,
- help="Class coefficient in the matching cost")
- parser.add_argument("--saliency_margin", type=float, default=0.2)
- parser.add_argument('--no_aux_loss', dest='aux_loss', action='store_true',
- help="Disables auxiliary decoding losses (loss at each layer)")
-
- # * Query-Force Video Summarization
- parser.add_argument("--max_segment_num", type=int, default=20)
- parser.add_argument("--max_frame_num", type=int, default=200)
- parser.add_argument("--top_percent", type=float, default=0.02)
-
- parser.add_argument("--qfvs_vid_feature", type=str, default='fps1')
- parser.add_argument("--qfvs_txt_feature", type=str, default='query')
- parser.add_argument("--qfvs_split", type=int, default=-1)
-
- parser.add_argument("--qfvs_dense_shot", type=int, default=-1)
- parser.add_argument("--qfvs_score_ensemble", type=int, default=-1)
- parser.add_argument("--qfvs_score_gather", type=int, default=-1)
- parser.add_argument("--qfvs_loss_gather", type=int, default=-1)
- self.parser = parser
-
- def display_save(self, opt):
- args = vars(opt)
- # Display settings
- print(dict_to_markdown(vars(opt), max_str_len=120))
- # Save settings
- if not isinstance(self, TestOptions):
- option_file_path = os.path.join(opt.results_dir, self.saved_option_filename) # not yaml file indeed
- save_json(args, option_file_path, save_pretty=True)
-
- def parse(self, args=None):
- if not self.initialized:
- self.initialize()
- opt = self.parser.parse_args()
-
- if args is not None:
- args_dict = vars(args)
- opt_dict = vars(opt)
- for key, value in args_dict.items():
- opt_dict[key] = value
- opt = argparse.Namespace(**opt_dict)
- opt.model_dir = os.path.dirname(opt.resume)
- torch.cuda.set_device(opt.gpu_id)
-
- if opt.debug:
- opt.results_root = os.path.sep.join(opt.results_root.split(os.path.sep)[:-1] + ["debug_results", ])
- opt.num_workers = 0
-
- if isinstance(self, TestOptions):
- # modify model_dir to absolute path
- # opt.model_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "results", opt.model_dir)
- opt.model_dir = os.path.dirname(opt.resume)
- saved_options = load_json(os.path.join(opt.model_dir, self.saved_option_filename))
- for arg in saved_options: # use saved options to overwrite all BaseOptions args.
- if arg not in ["results_root", "num_workers", "nms_thd", "debug", "max_before_nms", "max_after_nms"
- "max_pred_l", "min_pred_l", "gpu_id",
- "resume", "resume_all", "no_sort_results",
- "eval_path", "eval_split_name"]:
- # "dset_name", "v_feat_dirs", "t_feat_dir"]:
- setattr(opt, arg, saved_options[arg])
- # opt.no_core_driver = True
- if opt.eval_results_dir is not None:
- opt.results_dir = opt.eval_results_dir
- else:
- if opt.exp_id is None:
- raise ValueError("--exp_id is required for at a training option!")
-
- # ctx_str = opt.ctx_mode + "_sub" if any(["sub_ctx" in p for p in opt.v_feat_dirs]) else opt.ctx_mode
-
- if 'debug' not in opt.exp_id:
- opt.results_dir = os.path.join(opt.results_root, "-".join([opt.dset_type, opt.dset_name]), "-".join([opt.exp_id, opt.v_feat_types, opt.t_feat_type, time.strftime("%Y_%m_%d_%H")]))
- else:
- opt.results_dir = os.path.join(opt.results_root, "-".join([opt.dset_type, opt.dset_name]), opt.exp_id) # debug mode.
-
- if int(opt.local_rank) in [0, -1]:
- # mkdirp(opt.results_dir)
- remkdirp(opt.results_dir) # remove dir and remkdir it.
-
- # save a copy of current code
- code_dir = os.path.dirname(os.path.realpath(__file__))
- code_zip_filename = os.path.join(opt.results_dir, "code.zip")
- make_zipfile(code_dir, code_zip_filename,
- enclosing_dir="code",
- exclude_dirs_substring="results",
- exclude_dirs=["results", "debug_results", "__pycache__"],
- exclude_extensions=[".pyc", ".ipynb", ".swap"], )
-
- if int(opt.local_rank) in [0, -1]:
- self.display_save(opt)
- opt.ckpt_filepath = os.path.join(opt.results_dir, self.ckpt_filename)
- opt.train_log_filepath = os.path.join(opt.results_dir, self.train_log_filename)
- opt.eval_log_filepath = os.path.join(opt.results_dir, self.eval_log_filename)
- opt.tensorboard_log_dir = os.path.join(opt.results_dir, self.tensorboard_log_dir)
- # opt.device = torch.device("cuda" if opt.device >= 0 else "cpu")
-
- if int(opt.local_rank) in [-1]:
- torch.cuda.set_device(opt.gpu_id)
- opt.pin_memory = not opt.no_pin_memory
-
- if opt.local_rank == -1:
- torch.cuda.set_device(opt.gpu_id)
-
- opt.use_tef = "tef" in opt.ctx_mode
- opt.use_video = "video" in opt.ctx_mode
- if not opt.use_video:
- opt.v_feat_dim = 0
- if opt.use_tef:
- opt.v_feat_dim += 2
-
- self.opt = opt
- return opt
-
-class TestOptions(BaseOptions):
- """add additional options for evaluating"""
-
- def initialize(self):
- BaseOptions.initialize(self)
- # also need to specify --eval_split_name
- self.parser.add_argument("--eval_id", type=str, help="evaluation id")
- self.parser.add_argument("--eval_results_dir", type=str, default=None,
- help="dir to save results, if not set, fall back to training results_dir")
- self.parser.add_argument("--model_dir", type=str,
- help="dir contains the model file, will be converted to absolute path afterwards")
-
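-# Illustrative invocation of these options (script name and feature paths are assumptions,
-# not taken from this file):
-#
-#   python train.py --dset_type mr --dset_name qvhighlights --exp_id run1 \
-#       --v_feat_dirs feats/clip_vid --v_feat_dim 512 \
-#       --t_feat_dir feats/clip_txt --t_feat_dim 512 \
-#       --v_feat_types clip --t_feat_type clip --bsz 32 --lr 1e-4
-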
-class WarmupStepLR(torch.optim.lr_scheduler.StepLR):
- def __init__(self, optimizer, warmup_steps, step_size, gamma=0.1, last_epoch=-1):
- self.warmup_steps = warmup_steps
- self.step_size = step_size
- self.gamma = gamma
- super(WarmupStepLR, self).__init__(optimizer, step_size, gamma=self.gamma, last_epoch=last_epoch)
- def get_lr(self):
- if not self._get_lr_called_within_step:
- import warnings
- warnings.warn("To get the last learning rate computed by the scheduler, "
- "please use `get_last_lr()`.", DeprecationWarning)
- # e.g. warmup_steps = 10, case: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21...
- if self.last_epoch == self.warmup_steps or(self.last_epoch % self.step_size != 0 and self.last_epoch > self.warmup_steps):
- return [group['lr'] for group in self.optimizer.param_groups]
- # e.g. warmup_steps = 10, case: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
- elif self.last_epoch < self.warmup_steps:
- return [group['initial_lr'] * float(self.last_epoch + 1) / float(self.warmup_steps) for group in self.optimizer.param_groups]
-
-
- # e.g. warmup_steps = 10, case: 10, 20, 30, 40...
- return [group['lr'] * self.gamma
- for group in self.optimizer.param_groups]
- def _get_closed_form_lr(self):
- if self.last_epoch <= self.warmup_steps:
- return [base_lr * float(self.last_epoch) / (self.warmup_steps) for base_lr in self.base_lrs]
- else:
- return [base_lr * self.gamma ** ((self.last_epoch - self.warmup_steps)// self.step_size) for base_lr in self.base_lrs]
-
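-# Minimal usage sketch for WarmupStepLR (values are illustrative): linearly warm the lr up
-# over the first 10 epochs, then multiply it by gamma every step_size epochs.
-#
-#   optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
-#   scheduler = WarmupStepLR(optimizer, warmup_steps=10, step_size=400, gamma=0.1)
-#   for epoch in range(n_epoch):
-#       train_one_epoch(model, optimizer)   # assumed training loop
-#       scheduler.step()
-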
-def setup_model(opt):
- """setup model/optimizer/scheduler and load checkpoints when needed"""
- logger.info("setup model/optimizer/scheduler")
-
- importer = importlib.import_module('.'.join(['model', opt.model_id]))
- model, criterion = importer.build_model(opt)
-
- if int(opt.device) >= 0:
- logger.info("CUDA enabled.")
- model.to(opt.gpu_id)
- criterion.to(opt.gpu_id)
-
- param_dicts = [{"params": [p for n, p in model.named_parameters() if p.requires_grad]}]
- optimizer = torch.optim.AdamW(param_dicts, lr=opt.lr, weight_decay=opt.wd)
-
- if opt.lr_warmup != -1 and opt.lr_drop > 0:
- lr_scheduler = WarmupStepLR(optimizer, warmup_steps=opt.lr_warmup[0], step_size=opt.lr_drop, gamma=opt.lr_gamma)
-
- elif opt.lr_warmup != -1:
- from transformers import get_constant_schedule_with_warmup
- lr_scheduler = get_constant_schedule_with_warmup(optimizer, opt.lr_warmup[0])
-
- elif opt.lr_drop > 0:
- lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, opt.lr_drop, gamma=opt.lr_gamma)
-
- if opt.resume is not None:
- logger.info(f"Load checkpoint from {opt.resume}")
- checkpoint = torch.load(opt.resume, map_location="cpu")
-
- for key in list(checkpoint["model"].keys()):
- checkpoint["model"][key.replace('module.', '')] = checkpoint["model"].pop(key)
- model.load_state_dict(checkpoint["model"])
-
- if opt.resume_all:
- optimizer.load_state_dict(checkpoint['optimizer'])
- lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
- opt.start_epoch = checkpoint['epoch'] + 1
- logger.info(f"Loaded model saved at epoch {checkpoint['epoch']} from checkpoint: {opt.resume}")
- else:
- logger.warning("If you intend to evaluate the model, please specify --resume with ckpt path")
-
- return model, criterion, optimizer, lr_scheduler
diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/pipeline_switch_hook.py b/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/pipeline_switch_hook.py
deleted file mode 100644
index 4347289fc284c85748ceba17c88490665f99e464..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/pipeline_switch_hook.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmcv.transforms import Compose
-from mmengine.hooks import Hook
-
-from mmdet.registry import HOOKS
-
-
-@HOOKS.register_module()
-class PipelineSwitchHook(Hook):
- """Switch data pipeline at switch_epoch.
-
- Args:
- switch_epoch (int): switch pipeline at this epoch.
- switch_pipeline (list[dict]): the pipeline to switch to.
- """
-
- def __init__(self, switch_epoch, switch_pipeline):
- self.switch_epoch = switch_epoch
- self.switch_pipeline = switch_pipeline
- self._restart_dataloader = False
-
- def before_train_epoch(self, runner):
- """switch pipeline."""
- epoch = runner.epoch
- train_loader = runner.train_dataloader
- if epoch == self.switch_epoch:
- runner.logger.info('Switch pipeline now!')
- # The dataset pipeline cannot be updated when persistent_workers
- # is True, so we need to force the dataloader's multi-process
- # restart. This is a very hacky approach.
- train_loader.dataset.pipeline = Compose(self.switch_pipeline)
- if hasattr(train_loader, 'persistent_workers'
- ) and train_loader.persistent_workers is True:
- train_loader._DataLoader__initialized = False
- train_loader._iterator = None
- self._restart_dataloader = True
-
- else:
- # Once the restart is complete, we need to restore
- # the initialization flag.
- if self._restart_dataloader:
- train_loader._DataLoader__initialized = True
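-
-# Illustrative registration in an MMEngine-style config (the epoch value and pipeline name
-# are assumptions based on the standard custom_hooks mechanism, not part of this file):
-#
-#   custom_hooks = [
-#       dict(type='PipelineSwitchHook',
-#            switch_epoch=280,
-#            switch_pipeline=train_pipeline_stage2)
-#   ]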
diff --git a/spaces/LanguageBind/LanguageBind/open_clip/push_to_hf_hub.py b/spaces/LanguageBind/LanguageBind/open_clip/push_to_hf_hub.py
deleted file mode 100644
index 6e6271da1d35e36ea22e92d339dc9465d0793249..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/open_clip/push_to_hf_hub.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import argparse
-import json
-import os
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from typing import Optional, Tuple, Union
-
-import torch
-
-try:
- from huggingface_hub import (
- create_repo,
- get_hf_file_metadata,
- hf_hub_download,
- hf_hub_url,
- repo_type_and_id_from_hf_id,
- upload_folder,
- list_repo_files,
- )
- from huggingface_hub.utils import EntryNotFoundError
- _has_hf_hub = True
-except ImportError:
- _has_hf_hub = False
-
-try:
- import safetensors.torch
- _has_safetensors = True
-except ImportError:
- _has_safetensors = False
-
-from .factory import create_model_from_pretrained, get_model_config, get_tokenizer
-from .tokenizer import HFTokenizer
-
-# Default name for a weights file hosted on the Huggingface Hub.
-HF_WEIGHTS_NAME = "open_clip_pytorch_model.bin" # default pytorch pkl
-HF_SAFE_WEIGHTS_NAME = "open_clip_model.safetensors" # safetensors version
-HF_CONFIG_NAME = 'open_clip_config.json'
-
-def save_config_for_hf(
- model,
- config_path: str,
- model_config: Optional[dict]
-):
- preprocess_cfg = {
- 'mean': model.visual.image_mean,
- 'std': model.visual.image_std,
- }
- hf_config = {
- 'model_cfg': model_config,
- 'preprocess_cfg': preprocess_cfg,
- }
-
- with config_path.open('w') as f:
- json.dump(hf_config, f, indent=2)
-
-
-def save_for_hf(
- model,
- tokenizer: HFTokenizer,
- model_config: dict,
- save_directory: str,
- safe_serialization: Union[bool, str] = False,
- skip_weights : bool = False,
-):
- config_filename = HF_CONFIG_NAME
-
- save_directory = Path(save_directory)
- save_directory.mkdir(exist_ok=True, parents=True)
-
- if not skip_weights:
- tensors = model.state_dict()
- if safe_serialization is True or safe_serialization == "both":
- assert _has_safetensors, "`pip install safetensors` to use .safetensors"
- safetensors.torch.save_file(tensors, save_directory / HF_SAFE_WEIGHTS_NAME)
- if safe_serialization is False or safe_serialization == "both":
- torch.save(tensors, save_directory / HF_WEIGHTS_NAME)
-
- tokenizer.save_pretrained(save_directory)
-
- config_path = save_directory / config_filename
- save_config_for_hf(model, config_path, model_config=model_config)
-
-
-def push_to_hf_hub(
- model,
- tokenizer,
- model_config: Optional[dict],
- repo_id: str,
- commit_message: str = 'Add model',
- token: Optional[str] = None,
- revision: Optional[str] = None,
- private: bool = False,
- create_pr: bool = False,
- model_card: Optional[dict] = None,
- safe_serialization: Union[bool, str] = False,
-):
- if not isinstance(tokenizer, HFTokenizer):
- # default CLIP tokenizers use https://huggingface.co/openai/clip-vit-large-patch14
- tokenizer = HFTokenizer('openai/clip-vit-large-patch14')
-
- # Create repo if it doesn't exist yet
- repo_url = create_repo(repo_id, token=token, private=private, exist_ok=True)
-
- # Infer complete repo_id from repo_url
- # Can be different from the input `repo_id` if repo_owner was implicit
- _, repo_owner, repo_name = repo_type_and_id_from_hf_id(repo_url)
- repo_id = f"{repo_owner}/{repo_name}"
-
- # Check if repo already exists and determine what needs updating
- repo_exists = False
- repo_files = {}
- try:
- repo_files = set(list_repo_files(repo_id))
- repo_exists = True
- except Exception as e:
- print('Repo does not exist', e)
-
- try:
- get_hf_file_metadata(hf_hub_url(repo_id=repo_id, filename="README.md", revision=revision))
- has_readme = True
- except EntryNotFoundError:
- has_readme = False
-
- # Dump model and push to Hub
- with TemporaryDirectory() as tmpdir:
- # Save model weights and config.
- save_for_hf(
- model,
- tokenizer=tokenizer,
- model_config=model_config,
- save_directory=tmpdir,
- safe_serialization=safe_serialization,
- )
-
- # Add readme if it does not exist
- if not has_readme:
- model_card = model_card or {}
- model_name = repo_id.split('/')[-1]
- readme_path = Path(tmpdir) / "README.md"
- readme_text = generate_readme(model_card, model_name)
- readme_path.write_text(readme_text)
-
- # Upload model and return
- return upload_folder(
- repo_id=repo_id,
- folder_path=tmpdir,
- revision=revision,
- create_pr=create_pr,
- commit_message=commit_message,
- )
-
-
-def push_pretrained_to_hf_hub(
- model_name,
- pretrained: str,
- repo_id: str,
- precision: str = 'fp32',
- image_mean: Optional[Tuple[float, ...]] = None,
- image_std: Optional[Tuple[float, ...]] = None,
- commit_message: str = 'Add model',
- token: Optional[str] = None,
- revision: Optional[str] = None,
- private: bool = False,
- create_pr: bool = False,
- model_card: Optional[dict] = None,
-):
- model, preprocess_eval = create_model_from_pretrained(
- model_name,
- pretrained=pretrained,
- precision=precision,
- image_mean=image_mean,
- image_std=image_std,
- )
-
- model_config = get_model_config(model_name)
- assert model_config
-
- tokenizer = get_tokenizer(model_name)
-
- push_to_hf_hub(
- model=model,
- tokenizer=tokenizer,
- model_config=model_config,
- repo_id=repo_id,
- commit_message=commit_message,
- token=token,
- revision=revision,
- private=private,
- create_pr=create_pr,
- model_card=model_card,
- safe_serialization='both',
- )
-
-
-def generate_readme(model_card: dict, model_name: str):
- readme_text = "---\n"
- readme_text += "tags:\n- clip\n"
- readme_text += "library_name: open_clip\n"
- readme_text += "pipeline_tag: zero-shot-image-classification\n"
- readme_text += f"license: {model_card.get('license', 'mit')}\n"
- if 'details' in model_card and 'Dataset' in model_card['details']:
- readme_text += 'datasets:\n'
- readme_text += f"- {model_card['details']['Dataset'].lower()}\n"
- readme_text += "---\n"
- readme_text += f"# Model card for {model_name}\n"
- if 'description' in model_card:
- readme_text += f"\n{model_card['description']}\n"
- if 'details' in model_card:
- readme_text += f"\n## Model Details\n"
- for k, v in model_card['details'].items():
- if isinstance(v, (list, tuple)):
- readme_text += f"- **{k}:**\n"
- for vi in v:
- readme_text += f" - {vi}\n"
- elif isinstance(v, dict):
- readme_text += f"- **{k}:**\n"
- for ki, vi in v.items():
- readme_text += f" - {ki}: {vi}\n"
- else:
- readme_text += f"- **{k}:** {v}\n"
- if 'usage' in model_card:
- readme_text += f"\n## Model Usage\n"
- readme_text += model_card['usage']
- readme_text += '\n'
-
- if 'comparison' in model_card:
- readme_text += f"\n## Model Comparison\n"
- readme_text += model_card['comparison']
- readme_text += '\n'
-
- if 'citation' in model_card:
- readme_text += f"\n## Citation\n"
- if not isinstance(model_card['citation'], (list, tuple)):
- citations = [model_card['citation']]
- else:
- citations = model_card['citation']
- for c in citations:
- readme_text += f"```bibtex\n{c}\n```\n"
-
- return readme_text
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Push to Hugging Face Hub")
- parser.add_argument(
- "--model", type=str, help="Name of the model to use.",
- )
- parser.add_argument(
- "--pretrained", type=str,
- help="Use a pretrained CLIP model weights with the specified tag or file path.",
- )
- parser.add_argument(
- "--repo-id", type=str,
- help="Destination HF Hub repo-id ie 'organization/model_id'.",
- )
- parser.add_argument(
- "--precision", type=str, default='fp32',
- )
- parser.add_argument(
- '--image-mean', type=float, nargs='+', default=None, metavar='MEAN',
- help='Override default image mean value of dataset')
- parser.add_argument(
- '--image-std', type=float, nargs='+', default=None, metavar='STD',
-        help='Override default image std deviation of dataset')
- args = parser.parse_args()
-
- print(f'Saving model {args.model} with pretrained weights {args.pretrained} to Hugging Face Hub at {args.repo_id}')
-
- # FIXME add support to pass model_card json / template from file via cmd line
-
- push_pretrained_to_hf_hub(
- args.model,
- args.pretrained,
- args.repo_id,
- precision=args.precision,
- image_mean=args.image_mean, # override image mean/std if trained w/ non defaults
- image_std=args.image_std,
- )
-
- print(f'{args.model} saved.')
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index 01a7f2586e85fed9e87d1b22ddb6e1ec87180c8b..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour: fill unvoiced (zero) frames and return a voiced/unvoiced mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
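-
-# Minimal usage sketch (waveform loading and sampling rate are assumptions, not part of
-# this module):
-#
-#   predictor = DioF0Predictor(hop_length=512, sampling_rate=16000)
-#   f0 = predictor.compute_f0(wav)          # interpolated F0 contour, one value per hop
-#   f0, vuv = predictor.compute_f0_uv(wav)  # F0 plus a voiced/unvoiced (0/1) vector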
diff --git a/spaces/Letheoricien/MLPC_2023_NATHEO/README.md b/spaces/Letheoricien/MLPC_2023_NATHEO/README.md
deleted file mode 100644
index 8f6d4f4426c6718aba5e97f32db57f4b126a74e7..0000000000000000000000000000000000000000
--- a/spaces/Letheoricien/MLPC_2023_NATHEO/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: MLPC 2023 NATHEO
-emoji: 🐨
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Lianglan/Demo_Gpt3.5-turbo_model/README.md b/spaces/Lianglan/Demo_Gpt3.5-turbo_model/README.md
deleted file mode 100644
index bd3e7f56746c9cbb7e8686c982cf5841f6c1b2a9..0000000000000000000000000000000000000000
--- a/spaces/Lianglan/Demo_Gpt3.5-turbo_model/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Demo Gpt3.5-turbo Model
-emoji: 📈
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
-license: cc-by-nc-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LinkSoul/Chinese-Llama-2-7b/model.py b/spaces/LinkSoul/Chinese-Llama-2-7b/model.py
deleted file mode 100644
index af5bd11374f5a8b81bb0ffc433d89544806c9c50..0000000000000000000000000000000000000000
--- a/spaces/LinkSoul/Chinese-Llama-2-7b/model.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from typing import Iterator
-from llama_cpp import Llama
-from huggingface_hub import hf_hub_download
-
-
-def download_model():
- # See https://github.com/OpenAccess-AI-Collective/ggml-webui/blob/main/tabbed.py
- # https://huggingface.co/spaces/kat33/llama.cpp/blob/main/app.py
- print(f"Downloading model: {model_repo}/{model_filename}")
- file = hf_hub_download(
- repo_id=model_repo, filename=model_filename
- )
- print("Downloaded " + file)
- return file
-
-model_repo = "LinkSoul/Chinese-Llama-2-7b-ggml"
-model_filename = "Chinese-Llama-2-7b.ggmlv3.q4_0.bin"
-# model_filename = "Chinese-Llama-2-7b.ggmlv3.q8_0.bin"
-model_path = download_model()
-
-# load Llama-2
-llm = Llama(model_path=model_path, n_ctx=4000, verbose=False)
-
-
-def get_prompt(message: str, chat_history: list[tuple[str, str]],
- system_prompt: str) -> str:
-    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
- for user_input, response in chat_history:
-        texts.append(f'{user_input.strip()} [/INST] {response.strip()} </s><s>[INST] ')
- texts.append(f'{message.strip()} [/INST]')
- return ''.join(texts)
-
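-# For reference, a prompt rendered by get_prompt with one prior turn follows the standard
-# Llama-2 chat template (the example text is illustrative):
-#
-#   [INST] <<SYS>>
-#   You are a helpful assistant.
-#   <</SYS>>
-#
-#   Hi there [/INST] Hello! </s><s>[INST] How are you? [/INST]
-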
-def generate(prompt, max_new_tokens, temperature, top_p, top_k):
- return llm(prompt,
- max_tokens=max_new_tokens,
- stop=[""],
- temperature=temperature,
- top_p=top_p,
- top_k=top_k,
- stream=False)
-
-
-def get_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> int:
- prompt = get_prompt(message, chat_history, system_prompt)
- input_ids = llm.tokenize(prompt.encode('utf-8'))
- return len(input_ids)
-
-
-def run(message: str,
- chat_history: list[tuple[str, str]],
- system_prompt: str,
- max_new_tokens: int = 1024,
- temperature: float = 0.8,
- top_p: float = 0.95,
- top_k: int = 50) -> Iterator[str]:
- prompt = get_prompt(message, chat_history, system_prompt)
- output = generate(prompt, max_new_tokens, temperature, top_p, top_k)
- yield output['choices'][0]['text']
-
- # outputs = []
- # for resp in streamer:
- # outputs.append(resp['choices'][0]['text'])
- # yield ''.join(outputs)
diff --git a/spaces/ML-Demo-Challenge/test/README.md b/spaces/ML-Demo-Challenge/test/README.md
deleted file mode 100644
index cadaf666ac96585ee9610f1f2c8d75f4856d5386..0000000000000000000000000000000000000000
--- a/spaces/ML-Demo-Challenge/test/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Test
-emoji: 🔥
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MackDX/Neptunia/Dockerfile b/spaces/MackDX/Neptunia/Dockerfile
deleted file mode 100644
index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000
--- a/spaces/MackDX/Neptunia/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/tone-selector.tsx b/spaces/Makiing/coolb-in-gtest/src/components/tone-selector.tsx
deleted file mode 100644
index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/tone-selector.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import React from 'react'
-import { BingConversationStyle } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-
-type ToneItem = {
- type: BingConversationStyle,
- name: string
-}
-
-const ToneList: ToneItem[] = [
- { name: '有创造力', type: BingConversationStyle.Creative },
- { name: '更平衡', type: BingConversationStyle.Balanced },
- { name: '更精确', type: BingConversationStyle.Precise }
-]
-
-interface ToneSelectorProps {
- type: BingConversationStyle | ''
- onChange?: (type: BingConversationStyle) => void
-}
-
-export function ToneSelector({ type, onChange }: ToneSelectorProps) {
- return (
-