-
-Upon arriving to live with her aunt, she realizes that she guards a secret, one that has remained hidden and protected for centuries; one for which men ... 4d29de3e1b
-
-
-
diff --git a/spaces/1line/AutoGPT/autogpt/processing/html.py b/spaces/1line/AutoGPT/autogpt/processing/html.py
deleted file mode 100644
index 81387b12adab5023150c55f2075ddd40b554f386..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/processing/html.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""HTML processing functions"""
-from __future__ import annotations
-
-from bs4 import BeautifulSoup
-from requests.compat import urljoin
-
-
-def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> list[tuple[str, str]]:
- """Extract hyperlinks from a BeautifulSoup object
-
- Args:
- soup (BeautifulSoup): The BeautifulSoup object
- base_url (str): The base URL
-
- Returns:
- List[Tuple[str, str]]: The extracted hyperlinks
- """
- return [
- (link.text, urljoin(base_url, link["href"]))
- for link in soup.find_all("a", href=True)
- ]
-
-
-def format_hyperlinks(hyperlinks: list[tuple[str, str]]) -> list[str]:
- """Format hyperlinks to be displayed to the user
-
- Args:
- hyperlinks (List[Tuple[str, str]]): The hyperlinks to format
-
- Returns:
- List[str]: The formatted hyperlinks
- """
- return [f"{link_text} ({link_url})" for link_text, link_url in hyperlinks]
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Free Download Global Truck Simulator - The Most Realistic Truck Simulation Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Free Download Global Truck Simulator - The Most Realistic Truck Simulation Game.md
deleted file mode 100644
index 234b782bb2e22f151b6dac8132bc190ec591e4e9..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Free Download Global Truck Simulator - The Most Realistic Truck Simulation Game.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Global Truck Simulator APK Download: How to Enjoy Driving a Big Rig on Your Mobile Device
-
Do you love driving trucks and delivering cargo across different countries and continents? Do you want to experience the thrill and challenge of driving a big rig on your mobile device? If you answered yes, then you should try Global Truck Simulator, one of the best truck simulator games for Android devices. In this article, we will tell you everything you need to know about this game, including what it is, what features it has, how to download and install it, and how to play it. We will also share some tips and tricks to help you become a successful truck driver in the game.
-
What is Global Truck Simulator?
-
Global Truck Simulator is a realistic and immersive truck driving game for Android devices. It is developed by Ocypode Studios, a company that specializes in creating simulation games. The game lets you drive various trucks and deliver different cargoes across the world, from Europe to America, from Asia to Africa. You can choose from iconic American models like Chevrolet, Western Star, and Hummer, or European models like Renault, Volvo, and Mercedes-Benz. You can also customize your trucks with optional lights, bars, horns, beacons, smoke exhausts, and more.
Global Truck Simulator has many features that make it stand out from other truck simulator games. Here are some of them:
-
Various truck models and customization options
-
The game offers a wide range of truck models that you can choose from, each with its own specifications, performance, and appearance. You can also customize your trucks with different parts and accessories, such as engines, transmissions, tires, wheels, paint jobs, decals, etc. You can even design your own truck from scratch using the in-game editor.
-
Diverse and challenging terrains and routes
-
The game features realistic terrains that react to the movement and weight of your truck. You will have to drive through rivers, muddy roads, snowy mountains, deserts, forests, cities, highways, and more. You will also have to deal with different weather conditions, such as rain, fog, snow, wind, etc. The game also has dynamic day-night cycles that affect the visibility and traffic on the roads.
-
Career mode and multiplayer mode
-
The game has two modes that you can play: career mode and multiplayer mode. In career mode, you can start your own trucking business and manage it for maximum profits. You can hire drivers, buy garages, accept contracts, deliver cargoes, upgrade your trucks, etc. You can also compete with other players in leaderboards and achievements. In multiplayer mode, you can join or create online sessions with up to three other players. You can chat with them, cooperate with them, or challenge them in races or missions.
-
How to Download and Install Global Truck Simulator APK?
-
If you want to play Global Truck Simulator on your Android device, you will need to download and install its APK file. An APK file is an application package file that contains all the files needed to run an Android app. There are two ways to download and install Global Truck Simulator APK:
-
Steps to download and install the game from the official website or Google Play Store
-
The easiest way to download and install Global Truck Simulator APK is to get it from its official website or Google Play Store. Here are the steps to do so:
Click on the download button or the install button to start the download process.
-
Wait for the download to finish and then open the APK file.
-
Follow the instructions on the screen to install the game on your device.
-
Launch the game and enjoy driving a big rig on your mobile device.
-
-
Tips to avoid malware and viruses when downloading APK files from third-party sources
-
If you want to download Global Truck Simulator APK from a third-party source, such as a website or a file-sharing platform, you need to be careful and follow some precautions. This is because some APK files may contain malware or viruses that can harm your device or steal your personal information. Here are some tips to avoid malware and viruses when downloading APK files from third-party sources:
-
-
Only download APK files from trusted and reputable sources. You can check the reviews, ratings, and feedback of other users before downloading an APK file.
-
Use a reliable antivirus or anti-malware software on your device and scan the APK file before installing it.
-
Check the permissions and access rights that the APK file requests. If they seem suspicious or unnecessary, do not install the APK file.
-
Do not install APK files from unknown or unsolicited sources, such as pop-ups, emails, messages, etc.
-
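In addition to the tips above, when a download site publishes a checksum for its APK, you can verify the file before installing it. Below is a minimal Python sketch of that check; the file name and the expected value are placeholders, not real data for this game:

import hashlib

apk_path = "global_truck_simulator.apk"             # placeholder: path of your downloaded file
expected_sha256 = "checksum-published-by-the-site"  # placeholder: value from the download page

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash the file in 1 MB chunks
        digest.update(chunk)

if digest.hexdigest().lower() == expected_sha256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not install this file.")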
-
How to Play Global Truck Simulator?
-
Now that you have downloaded and installed Global Truck Simulator APK on your device, you are ready to play the game. Here are some basic controls and gameplay mechanics that you need to know:
-
Basic controls and gameplay mechanics
-
The game has simple and intuitive controls that let you drive your truck with ease. You can use the steering wheel, pedals, buttons, or tilt your device to control your truck. You can also switch between different camera views, such as cockpit, exterior, or top-down. You can also use indicators, headlights, horn, wipers, etc. to communicate with other drivers on the road.
-
The game has realistic physics and graphics that make you feel like you are driving a real truck. You will have to follow the traffic rules, obey the speed limits, pay attention to the signs, signals, and road conditions, etc. You will also have to manage your fuel, cargo weight, damage, fatigue, etc. You will have to park your truck in designated areas and unload your cargo at the end of each delivery.
-
Tips and tricks to master the roads and earn more money
-
If you want to become a successful truck driver in Global Truck Simulator, you will need some tips and tricks to master the roads and earn more money. Here are some of them:
-
-
Plan your route carefully before starting a delivery. Choose the shortest and safest route that avoids tolls, traffic jams, accidents, etc.
-
Drive carefully and avoid collisions, fines, penalties, etc. They will reduce your income and reputation.
-
Upgrade your trucks with better parts and accessories. They will improve your performance, fuel efficiency, durability, etc.
-
Hire other drivers and buy more garages. They will generate passive income for you while you are offline or busy with other deliveries.
-
Join online sessions with other players. You can cooperate with them in convoys or challenge them in races or missions.
-
-
Conclusion
-
Global Truck Simulator is a fun and realistic truck driving game for Android devices. It lets you drive various trucks and deliver different cargoes across the world. It has many features that make it stand out from other truck simulator games, such as various truck models and customization options, diverse and challenging terrains and routes, career mode and multiplayer mode, etc. You can download and install Global Truck Simulator APK from its official website or Google Play Store easily. You can also play the game with simple and intuitive controls and realistic physics and graphics. You can also use some tips and tricks to master the roads and earn more money in the game.
-
If you are looking for a truck simulator game that offers a lot of fun and challenge on your mobile device, you should definitely try Global Truck Simulator. It is one of the best truck simulator games for Android devices that you can find.
-
FAQs
-
Here are some frequently asked questions about Global Truck Simulator:
-
-
Is Global Truck Simulator free?
-
Yes, Global Truck Simulator is free to download and play, but it contains ads and in-app purchases that you can use to buy more trucks, parts, accessories, etc.
-
What are the system requirements for Global Truck Simulator?
-
The game requires Android 4.4 or higher and at least 1 GB of RAM and 500 MB of storage space. It also requires a stable internet connection for online features.
-
How can I contact the developers of Global Truck Simulator?
-
Can I play Global Truck Simulator offline?
-
Yes, you can play Global Truck Simulator offline, but you will not be able to access some features, such as multiplayer mode, leaderboards, achievements, etc.
-
Can I play Global Truck Simulator on PC or other devices?
-
No, Global Truck Simulator is only available for Android devices. However, you can use an Android emulator on your PC or other devices to run the game.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Angry Birds Classic Mod APK - The Best Way to Play the Classic Game with More Features.md b/spaces/1phancelerku/anime-remove-background/Angry Birds Classic Mod APK - The Best Way to Play the Classic Game with More Features.md
deleted file mode 100644
index 064ed7b695de49b67e74f235bf88c1cb919994f2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Angry Birds Classic Mod APK - The Best Way to Play the Classic Game with More Features.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Download Game Angry Birds Classic Mod Apk
-
If you are looking for a fun and addictive game to play on your Android device, you should definitely try Angry Birds Classic. This is one of the most popular and successful games ever created, with millions of fans around the world. However, if you want to enjoy the game to the fullest, you should download Angry Birds Classic Mod Apk, which is a modified version of the original game that offers many benefits and advantages. In this article, we will tell you everything you need to know about Angry Birds Classic Mod Apk, including how to download and install it, what features it has, and why you should get it.
-
Introduction
-
Angry Birds Classic is a game that was released in 2009 by Rovio Entertainment, a Finnish company. The game is based on a simple but brilliant idea: you have to use a slingshot to launch birds at pigs who have stolen their eggs. The pigs are hiding in various structures made of wood, stone, ice, and other materials, and you have to destroy them all to complete each level. The game has hundreds of levels, each with different challenges and objectives. The game also has different types of birds, each with their own abilities and characteristics. For example, some birds can explode, some can split into multiple birds, some can boomerang, and some can drop eggs.
Angry Birds Classic Mod Apk is a modified version of the original game that has been created by third-party developers. The mod apk file is an installation file that contains the game data and some changes that alter the gameplay. The mod apk file allows you to access features and options that are not available in the official version of the game. For example, you can get unlimited money and power-ups, unlock all levels and episodes, remove ads and pop-ups, and enjoy high-quality graphics and sound effects.
-
Why download Angry Birds Classic Mod Apk?
-
There are many reasons why you should download Angry Birds Classic Mod Apk instead of playing the official version of the game. Here are some of them:
-
-
You can save your time and money by getting unlimited money and power-ups. You don't have to spend real money to buy them or wait for them to recharge. You can use them as much as you want without any limitations.
-
You can enjoy the game without any interruptions or distractions by removing ads and pop-ups. You don't have to watch annoying videos or banners that take up your screen space and slow down your device.
-
You can explore the game without any restrictions by unlocking all levels and episodes. You don't have to complete previous levels or earn stars to access new ones. You can play any level you want at any time.
-
You can enhance your gaming experience by enjoying high-quality graphics and sound effects. You don't have to compromise on the visual and audio quality of the game. You can see every detail and hear every sound clearly.
-
-
How to download and install Angry Birds Classic Mod Apk
-
If you are interested in downloading Angry Birds Classic Mod Apk, you should follow these simple steps:
-
Step 1: Enable unknown sources
-
Since Angry Birds Classic Mod Apk is not available on the Google Play Store, you have to enable unknown sources on your device. This will allow you to install apps from sources other than the official store. To do this, go to your device settings, then security, then enable unknown sources. You will see a warning message, but you can ignore it and tap OK.
-
Step 2: Download the mod apk file
-
Next, you have to download the mod apk file from a reliable source. You can use the link below to download the latest version of Angry Birds Classic Mod Apk. The file size is about 100 MB, so make sure you have enough storage space and a stable internet connection.
-
Step 3: Install the mod apk file
-
After downloading the mod apk file, you have to install it on your device. To do this, locate the file in your downloads folder and tap on it. You will see a confirmation message, but you can ignore it and tap Install. The installation process will take a few seconds, depending on your device performance.
-
Step 4: Launch the game and enjoy
-
Finally, you can launch the game and enjoy all the features and benefits of Angry Birds Classic Mod Apk. You will see a new icon on your home screen or app drawer with the name Angry Birds Classic Mod. Tap on it and start playing the game. You will notice that you have unlimited money and power-ups, all levels and episodes unlocked, no ads and pop-ups, and high-quality graphics and sound effects.
-
Features of Angry Birds Classic Mod Apk
-
Angry Birds Classic Mod Apk has many features that make it better than the original game. Here are some of them:
-
Unlimited money and power-ups
-
With Angry Birds Classic Mod Apk, you don't have to worry about running out of money or power-ups. You can use them as much as you want without any limitations. Money is used to buy power-ups, such as slingshot upgrades, mighty eagles, shockwaves, and more. Power-ups are used to boost your performance and help you complete difficult levels. You can also use money to customize your birds with different hats, glasses, and accessories.
-
All levels and episodes unlocked
-
With Angry Birds Classic Mod Apk, you don't have to complete previous levels or earn stars to access new ones. You can play any level you want at any time. The game has hundreds of levels, divided into different episodes, such as Poached Eggs, Mighty Hoax, Danger Above, The Big Setup, Ham 'Em High, Mine and Dine, Birdday Party, Bad Piggies, Surf and Turf, Red's Mighty Feathers, Short Fuse, Flock Favorites, BirdDay 5, Bird Island, Piggy Farm, Jurassic Pork, Golden Eggs, and more. Each episode has its own theme, setting, and challenges.
-
No ads and pop-ups
-
With Angry Birds Classic Mod Apk, you don't have to watch annoying videos or banners that take up your screen space and slow down your device. You can enjoy the game without any interruptions or distractions. You can also save your data and battery by not loading unnecessary ads.
-
High-quality graphics and sound effects
-
With Angry Birds Classic Mod Apk, you don't have to compromise on the visual and audio quality of the game. You can see every detail and hear every sound clearly. The game has high-quality graphics that are colorful and vibrant. The game also has sound effects that are realistic and fun. You can hear the birds' voices, the pigs' grunts, the explosions' booms, and the music's tunes.
-
Conclusion
-
Angry Birds Classic is a game that everyone should try at least once in their life. It is a game that is fun and addictive, but also challenging and rewarding. However, if you want to enjoy the game to the fullest, you should download Angry Birds Classic Mod Apk, which is a modified version of the original game that offers many benefits and advantages. You can get unlimited money and power-ups, unlock all levels and episodes, remove ads and pop-ups, and enjoy high-quality graphics and sound effects. You can download Angry Birds Classic Mod Apk from the link below and follow the simple steps to install it on your device. You will be amazed by how much fun you can have with this game. So, what are you waiting for? Download Angry Birds Classic Mod Apk now and start slinging those birds at those pigs!
-
FAQs
-
Here are some frequently asked questions about Angry Birds Classic Mod Apk:
-
Is Angry Birds Classic Mod Apk safe to download and install?
-
Yes, Angry Birds Classic Mod Apk is safe to download and install, as long as you use a reliable source. The mod apk file does not contain any viruses or malware that can harm your device or data. However, you should always scan the file before installing it, just to be sure.
-
Is Angry Birds Classic Mod Apk compatible with my device?
-
Angry Birds Classic Mod Apk is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may not support the game or the mod apk file due to different specifications or settings. If you encounter any problems or errors while playing the game, you can try to update your device software, clear your cache, or reinstall the game.
-
Will I get banned for using Angry Birds Classic Mod Apk?
-
No, you will not get banned for using Angry Birds Classic Mod Apk, as the game does not have any online features or modes that require verification or authentication. The game is offline and does not connect to any servers or databases. Therefore, you can play the game without any worries or risks.
-
Can I play Angry Birds Classic Mod Apk with my friends?
-
Yes, you can play Angry Birds Classic Mod Apk with your friends, as the game has a local multiplayer mode that allows you to compete with up to four players on the same device. You can also share your scores and achievements with your friends on social media platforms, such as Facebook and Twitter.
-
Can I update Angry Birds Classic Mod Apk?
-
Yes, you can update Angry Birds Classic Mod Apk whenever there is a new version available. However, you should always backup your game data before updating, as some updates may overwrite or delete your progress. You should also check if the new version of the mod apk file is compatible with your device and has the same features and benefits as the previous one.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Discover the Magic of AI Image Generator.md b/spaces/1phancelerku/anime-remove-background/Discover the Magic of AI Image Generator.md
deleted file mode 100644
index 4e4d969982179ffa82802162ec549fb99c632a75..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Discover the Magic of AI Image Generator.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
AI image generator apk: What is it and how to use it?
-
Have you ever wondered how to create realistic or artistic images using artificial intelligence? Do you want to transform your photos into amazing artworks or funny animations? If yes, then you might be interested in learning more about AI image generator apk. In this article, we will explain what an AI image generator is, how it works, and how to use it on your Android device. We will also introduce you to some of the best AI image generator apps that you can download and install on your phone or tablet.
-
What is an AI image generator?
-
An AI image generator is a software program that uses artificial intelligence techniques to generate new images from existing ones or from scratch. It can manipulate, enhance, or modify images in various ways, such as changing colors, adding filters, applying effects, swapping faces, or creating animations. An AI image generator can also create realistic or stylized images based on text descriptions or sketches.
An AI image generator works by using deep learning algorithms that learn from large datasets of images. These algorithms are called neural networks, and they consist of multiple layers of artificial neurons that process information and produce outputs. Depending on the task, an AI image generator can use different types of neural networks, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), or variational autoencoders (VAEs). These networks can learn to recognize patterns, features, and styles from images and generate new images that resemble them.
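To make the GAN idea above concrete, here is a toy sketch of a generator and discriminator pair. It assumes PyTorch; the layer sizes and the 28x28 image size are arbitrary choices for illustration, not the architecture of any particular app:

import torch
import torch.nn as nn

# Generator: turns a random latent vector into a flattened 28x28 grayscale "image".
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: scores how likely a flattened image is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(16, 64)                # a batch of 16 random latent vectors
fake_images = generator(z)             # 16 generated samples with values in [-1, 1]
realness = discriminator(fake_images)  # discriminator's score for each sample
print(fake_images.shape, realness.shape)  # torch.Size([16, 784]) torch.Size([16, 1])

In a real system the two networks are trained against each other over a large image dataset, which is the learning process the paragraph above describes.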
-
What are some applications of AI image generation?
-
AI image generation has many applications in various fields and industries, such as entertainment, education, art, design, marketing, medicine, and more. Some examples of how AI image generation can be used are:
-
-
Creating realistic or artistic portraits of people or animals
-
Generating landscapes or scenes based on text descriptions or sketches
-
Enhancing or restoring old or damaged photos
-
Changing facial expressions or emotions
-
Making cartoons or memes
-
Creating logos or icons
-
Designing clothes or accessories
-
Generating medical images for diagnosis or treatment
-
- How to use an AI image generator apk?
-
If you want to use an AI image generator on your Android device, you will need to download and install an apk file. An apk file is a package file format that contains the installation files and data for an Android app. You can find many AI image generator apk files on the internet, but you need to be careful about the source and the security of the file. Here are some steps to follow to use an AI image generator apk:
-
What is an apk file?
-
An apk file is a compressed file that contains the code, resources, and metadata of an Android app. It stands for Android Package Kit, and it is the standard format for distributing and installing apps on Android devices. An apk file can be downloaded from various sources, such as the Google Play Store, third-party websites, or directly from the app developer. However, not all apk files are safe or compatible with your device, so you need to check the file before installing it.
-
How to download and install an AI image generator apk?
-
To download and install an AI image generator apk, you need to follow these steps:
-
-
Find a reliable source for the apk file. You can search for AI image generator apk on Google or other search engines, or visit some reputable websites that offer apk downloads, such as APKPure, APKMirror, or APKMonk. Make sure to read the reviews and ratings of the app and the file before downloading it.
-
Enable unknown sources on your device. Since you are downloading an apk file from outside the Google Play Store, you need to allow your device to install apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to grant permission for your browser or file manager to install apps.
-
Download the apk file to your device. You can either use your browser or a file manager app to download the apk file. Once the download is complete, you will see a notification or a pop-up window asking you to install the app.
-
Install the app on your device. Tap on the notification or the pop-up window and follow the instructions to install the app. You may need to accept some permissions and terms of service before completing the installation.
-
Launch the app and enjoy using it. Once the installation is done, you will see an icon for the app on your home screen or app drawer. Tap on it and start using the AI image generator app on your device.
-
-
How to use an AI image generator app on your device?
-
To use an AI image generator app on your device, you need to follow these steps:
-
-
Select an image source. Depending on the app, you can either choose an image from your gallery, take a photo with your camera, or use a built-in image library.
-
Select an image style or effect. Depending on the app, you can either choose from a variety of styles or effects, such as realistic, artistic, cartoon, meme, animation, etc., or enter a text description or a sketch of what you want to generate.
-
Generate and edit the image. Depending on the app, you can either wait for a few seconds or minutes for the app to generate the image using its AI algorithm, or adjust some parameters or settings to customize the output. You can also edit the image by cropping, rotating, resizing, adding text, stickers, filters, etc.
-
Save and share the image. Depending on the app, you can either save the image to your device or cloud storage, or share it directly with your friends or social media platforms.
-
-
What are some examples of AI image generator apps?
-
There are many AI image generator apps available for Android devices, but here are some of the most popular and interesting ones that you can try:
-
WOMBO Dream AI Mirror
-
This app lets you create funny animations of yourself or anyone else by using AI technology. You can make yourself sing, dance, smile, wink, or make funny faces by using various songs and effects. You can also swap faces with celebrities or animals and see what you look like in different scenarios.
-
FaceApp
-
This app lets you transform your face in various ways by using AI technology. You can change your age, gender, hairstyle, beard, glasses, makeup, expression, etc., by using different filters and options. You can also create collages or GIFs of yourself or others and see how they change over time.
-
Prisma Photo Editor
-
This app lets you turn your photos into artworks by using AI technology. You can choose from over 300 artistic styles and effects, such as painting, sketching, graffiti, pop art, etc., and apply them to your photos. You can also adjust the intensity and quality of the effects and create your own unique style.
-
Artisto
-
This app lets you turn your videos into artworks by using AI technology. You can choose from over 50 artistic styles and effects, such as painting, sketching, cartoon, etc., and apply them to your videos. You can also edit the duration, speed, and sound of your videos and create stunning animations.
-
Deep Art Effects
-
This app lets you create realistic or abstract images by using AI technology. You can choose from over 100 artistic styles and effects, such as painting, sketching, watercolor, oil, etc., and apply them to your images. You can also create your own style by uploading an image of your choice and letting the app learn from it.
-
Conclusion
-
In this article, we have learned what an AI image generator is, how it works, and how to use it on your Android device. We have also introduced you to some of the best AI image generator apps that you can download and install on your phone or tablet. AI image generation is a fascinating and fun way to create amazing images using artificial intelligence. Whether you want to make yourself look different, create artworks, or have some laughs, you can find an AI image generator app that suits your needs and preferences.
-
Summary of the main points
-
-
An AI image generator is a software program that uses artificial intelligence techniques to generate new images from existing ones or from scratch.
-
An AI image generator works by using deep learning algorithms that learn from large datasets of images and generate new images that resemble them.
-
An AI image generator has many applications in various fields and industries, such as entertainment, education, art, design, marketing, medicine, and more.
-
To use an AI image generator on your Android device, you need to download and install an apk file from a reliable source and enable unknown sources on your device.
-
Some of the best AI image generator apps for Android devices are WOMBO Dream AI Mirror, FaceApp, Prisma Photo Editor, Artisto, and Deep Art Effects.
-
-
Call to action
-
If you are interested in trying out some of the AI image generator apps that we have mentioned in this article, you can click on the links below to download them from their official websites or the Google Play Store. You can also search for other AI image generator apps on the internet or the Google Play Store and see what they can do for you. Have fun creating amazing images with AI!
-
FAQs
-
-
What is the difference between an AI image generator and a photo editor?
-
An AI image generator is a software program that uses artificial intelligence techniques to generate new images from existing ones or from scratch. A photo editor is a software program that allows you to edit or enhance existing images by using various tools and features.
-
Is AI image generation safe and legal?
-
AI image generation is generally safe and legal as long as you use it for personal or educational purposes and do not violate any intellectual property rights or privacy laws. However, you should be careful about the source and the security of the apk file that you download and install on your device. You should also avoid using AI image generation for malicious or fraudulent purposes, such as impersonating someone else or creating fake news or evidence.
-
How can I improve the quality of the images generated by AI?
-
The quality of the images generated by AI depends on several factors, such as the quality of the input image, the type of neural network used, the size of the dataset used for training, and the parameters or settings used for generating. To improve the quality of the images generated by AI, you can try to use high-quality input images, choose a suitable neural network type, use a large and diverse dataset for training, and adjust some parameters or settings for generating.
-
Can I use AI image generation for commercial purposes?
-
It depends on the terms and conditions of the app that you use and the license of the images that you generate. Some apps may allow you to use their services for commercial purposes as long as you give credit to them or pay a fee. Some apps may not allow you to use their services for commercial purposes at all. Some images may be free to use for commercial purposes as long as you follow some rules or guidelines. Some images may not be free to use for commercial purposes at all. You should always check the terms and conditions of the app that you use and the license of the images that you generate for commercial purposes. You should always respect the rights and interests of the original creators and owners of the images.
-
What are some of the challenges or limitations of AI image generation?
-
AI image generation is a rapidly developing and evolving field, but it still faces some challenges or limitations, such as:
-
-
Lack of diversity or representation in the datasets used for training, which may result in biased or inaccurate outputs.
-
Difficulty in generating high-resolution or detailed images, which may result in blurry or pixelated outputs.
-
Difficulty in generating realistic or consistent images, which may result in unnatural or distorted outputs.
-
Difficulty in controlling or customizing the outputs, which may result in unpredictable or undesired outputs.
-
Potential ethical or social issues, such as privacy, consent, authenticity, accountability, etc., which may result in misuse or abuse of the technology.
-
-
I hope you have enjoyed reading this article and learned something new about AI image generation. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention!
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Farm Heroes Saga Hile Apk and Enjoy Unlimited Lives and Boosters.md b/spaces/1phancelerku/anime-remove-background/Download Farm Heroes Saga Hile Apk and Enjoy Unlimited Lives and Boosters.md
deleted file mode 100644
index 56da1bd19a54f1f021b2644e2cebdfad969a0fe1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Farm Heroes Saga Hile Apk and Enjoy Unlimited Lives and Boosters.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
Farm Heroes Saga Hile Apk: How to Get Unlimited Lives, Boosters, and Gold Bars
- Do you love playing Farm Heroes Saga, but find it frustrating to run out of lives, boosters, or gold bars? Do you wish you could play the game without any limitations or interruptions? If so, you might be interested in Farm Heroes Saga Hile Apk, a modified version of the game that gives you unlimited resources and access to all the levels. In this article, we will tell you what Farm Heroes Saga is, what Farm Heroes Saga Hile Apk is, how to download and install it, and how to use it. Let's get started!
What is Farm Heroes Saga?
-
A fun and addictive puzzle game
- Farm Heroes Saga is a social puzzle game developed by King.com, the creators of the super popular Candy Crush Saga. The game was released in 2014 and has since gained millions of fans around the world. The game is available for free on Android, iOS, Windows Phone, and Facebook. The game is set in a farm where you have to help the Farm Heroes stop the evil Rancid the Raccoon from spoiling the crops. You do this by matching three or more fruits or vegetables of the same kind on a grid. Each level has a different goal and a limited number of moves. You can also use boosters, such as shovels, tractors, or water buckets, to help you clear the board faster.
The main features and gameplay
- Farm Heroes Saga has over 3000 levels to play, each with different challenges and surprises. You can also play with your friends and compete for the highest score on the leaderboard. The game also has various events and quests that give you extra rewards and bonuses. Some of the features of Farm Heroes Saga are:
- Bright and colorful graphics
- Cute and funny characters
- Easy and fun to play, but challenging to master
- Various game modes, such as Hero Mode, Treasure Mill, Fireworks Night, and more
- Daily rewards and free spins
- Social features that let you connect with your friends
What is Farm Heroes Saga Hile Apk?
-
A modified version of the original game
- Farm Heroes Saga Hile Apk is a hacked or modified version of the original game that gives you unlimited lives, boosters, and gold bars. This means that you can play the game as much as you want without waiting for your lives to refill or spending real money on in-app purchases. You can also unlock all the levels and enjoy all the features of the game without any restrictions.
The benefits and risks of using it
- Using Farm Heroes Saga Hile Apk has some benefits and some risks. The benefits are:
- You can have more fun and excitement playing the game without any limitations
- You can save your money and time by not buying or earning resources
- You can explore all the levels and modes of the game without any difficulty
The risks are:
- You might lose your progress or data if the game updates or crashes
- You might get banned or suspended from the game if you are detected by the developers
- You might expose your device to malware or viruses if you download from an untrusted source
How to download and install Farm Heroes Saga Hile Apk?
-
The steps to follow
- If you want to download and install Farm Heroes Saga Hile Apk on your Android device, you need to follow these steps:
1. Go to a reliable website that offers Farm Heroes Saga Hile Apk for free download and download the apk file.
2. Before installing the apk file, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device settings, then security, and then toggle on the option that says "Unknown sources".
3. Locate the downloaded apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
4. Once the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Farm Heroes Saga Hile Apk.
The precautions to take
- Before downloading and installing Farm Heroes Saga Hile Apk, you should take some precautions to avoid any problems or risks. Here are some tips to follow:
- Make sure you have enough storage space on your device for the apk file and the game data
- Make sure you have a stable internet connection for the download and installation process
- Make sure you download the apk file from a trusted and secure source that does not contain any malware or viruses
- Make sure you backup your original game data before installing the modified version, in case you want to restore it later
- Make sure you do not log in with your Facebook account or any other social media account while playing the modified version, as this might get you banned or suspended from the game
How to use Farm Heroes Saga Hile Apk?
-
How to get unlimited lives, boosters, and gold bars
- Once you have installed Farm Heroes Saga Hile Apk on your device, you can start playing the game with unlimited resources. You will notice that your lives, boosters, and gold bars are always full and never decrease. You can use them as much as you want without any limitations or costs. To get unlimited lives, boosters, and gold bars, you do not need to do anything special or complicated. You just need to play the game normally and enjoy the benefits of the modified version. You can also access all the levels and modes of the game without any difficulty.
How to enjoy the game with more fun and ease
- Using Farm Heroes Saga Hile Apk can make the game more fun and easy for you. You can play the game without any stress or frustration of running out of resources or being stuck on a hard level. You can also experiment with different boosters and strategies to clear the board faster and get higher scores. Some of the ways to enjoy the game with more fun and ease are:
- Try different combinations of fruits and vegetables to create bigger matches and more cascades
- Use boosters wisely and strategically to clear obstacles, collect cropsies, or create special effects
- Play with your friends and challenge them to beat your scores or help them with lives or boosters
- Participate in events and quests to earn extra rewards and bonuses
- Explore all the levels and modes of the game and discover new features and surprises
Conclusion
- Farm Heroes Saga is a fun and addictive puzzle game that can keep you entertained for hours. However, if you want to play the game without any limitations or interruptions, you might want to try Farm Heroes Saga Hile Apk, a modified version of the game that gives you unlimited lives, boosters, and gold bars. In this article, we have told you what Farm Heroes Saga is, what Farm Heroes Saga Hile Apk is, how to download and install it, and how to use it. We hope you found this article helpful and informative. Now go ahead and enjoy playing Farm Heroes Saga Hile Apk!
FAQs
- Here are some frequently asked questions about Farm Heroes Saga Hile Apk:
- Q: Is Farm Heroes Saga Hile Apk safe to use?
- A: Farm Heroes Saga Hile Apk is generally safe to use if you download it from a reliable source and follow the precautions we have mentioned above. However, there is always a risk of losing your progress or data, getting banned or suspended from the game, or exposing your device to malware or viruses when using a modified version of a game. Therefore, use it at your own risk and discretion.
- Q: Is Farm Heroes Saga Hile Apk legal to use?
- A: Farm Heroes Saga Hile Apk is not legal to use as it violates the terms and conditions of the original game. It also infringes on the intellectual property rights of the developers. Therefore, using it might get you in trouble with the law or the developers.
- Q: Can I update Farm Heroes Saga Hile Apk?
- A: Farm Heroes Saga Hile Apk is not compatible with updates from the original game. If you update it, you might lose all the features and benefits of the modified version. Therefore, it is better to avoid updating it unless there is a new version of Farm Heroes Saga Hile Apk that has the same features and benefits as the previous one.
- Q: Can I play Farm Heroes Saga Hile Apk offline?
- A: Farm Heroes Saga Hile Apk requires an internet connection to play, as it is a social game that connects with your friends and other players. However, you can play some levels offline if you have already downloaded them on your device.
- Q: Can I restore my original game data after using Farm Heroes Saga Hile Apk?
- A: If you have backed up your original game data before installing Farm Heroes Saga Hile Apk, you can restore it by uninstalling the modified version and reinstalling the original version from the official app store. However, if you have not backed up your data, you might lose it permanently after using Farm Heroes Saga Hile Apk.
- Q: Where can I find more information about Farm Heroes Saga Hile Apk?
- A: You can find more information about Farm Heroes Saga Hile Apk on the website where you downloaded it from, or on other websites or forums that discuss the game and its modifications. You can also contact the developers or the users of Farm Heroes Saga Hile Apk for any questions or feedback.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download One Piece Bounty Rush APK and Enjoy Pirate Action Offline.md b/spaces/1phancelerku/anime-remove-background/Download One Piece Bounty Rush APK and Enjoy Pirate Action Offline.md
deleted file mode 100644
index d9cdcc80ec9c2dac3a682b14dbcedd4efc34b5ff..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download One Piece Bounty Rush APK and Enjoy Pirate Action Offline.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
Download One Piece APK Offline: How to Play the Popular Anime Game on Your Android Device
-
One Piece is one of the most popular anime series in the world, with millions of fans who love the adventures of Monkey D. Luffy and his crew of pirates. If you are one of them, you might be interested in playing a game based on the anime on your Android device. But what if you don't have an internet connection or you want to save your data? Don't worry, there is a solution for you: download One Piece APK offline.
One Piece APK offline is a modified version of the original One Piece game that allows you to play it without an internet connection. It is a 2D fighting game that features characters from the anime, such as Luffy, Zoro, Nami, Sanji, Usopp, Chopper, Robin, Franky, Brook, and more. You can choose your favorite character and fight against enemies in various stages and scenarios inspired by the anime.
-
Features of One Piece APK Offline
-
Some of the features that make One Piece APK offline a fun and exciting game are:
-
-
It has high-quality graphics and sound effects that capture the essence of the anime.
-
It has simple and intuitive controls that make it easy to play.
-
It has a variety of characters, each with their own unique skills and abilities.
-
It has different modes, such as story mode, arcade mode, survival mode, and training mode.
-
It has a lot of challenges and missions that test your skills and strategy.
-
It has an online mode that lets you battle with other players around the world.
-
-
Requirements for One Piece APK Offline
-
To play One Piece APK offline, you need to have an Android device that meets the following requirements:
-
-
It has Android version 4.0 or higher.
-
It has at least 1 GB of RAM.
-
It has at least 500 MB of free storage space.
-
-
How to Download and Install One Piece APK Offline
-
If you want to download and install One Piece APK offline on your Android device, you need to follow these steps:
-
-
Step 1: Find a reliable source for the APK file
-
The first thing you need to do is to find a trustworthy website that offers the APK file for One Piece APK offline. You can use a search engine like Google or Bing to look for it.
Make sure that the website is safe and secure before downloading anything from it. You can check the reviews and ratings of other users, or use antivirus software to scan the file.
-
Step 2: Enable unknown sources on your device
-
The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might see a warning message that says installing apps from unknown sources may harm your device. Tap OK to proceed.
-
Step 3: Download and install the APK file
-
The third thing you need to do is to download and install the APK file for One Piece APK offline. To do this, go to the website where you found the file and tap on the download button. Wait for the file to be downloaded on your device, then open it. You might see a pop-up message that asks you to confirm the installation. Tap Install and wait for the process to finish.
-
Step 4: Launch the game and enjoy
-
The last thing you need to do is to launch the game and enjoy playing it. To do this, go to your app drawer and tap on the One Piece icon. You might see a loading screen that shows the game's logo and some information. Wait for the game to load, then choose your language and accept the terms and conditions. You can now start playing One Piece APK offline on your Android device.
-
Tips and Tricks for Playing One Piece APK Offline
-
Now that you have downloaded and installed One Piece APK offline, you might want to know some tips and tricks that can help you play better and have more fun. Here are some of them:
-
Choose your favorite character and customize their skills
-
One of the best things about One Piece APK offline is that you can choose your favorite character from the anime and customize their skills according to your preference. You can unlock more characters as you progress in the game, and you can also upgrade their skills with coins and items. You can access the character menu by tapping on the character icon on the top left corner of the screen. There, you can see your character's stats, skills, equipment, and costumes. You can also switch characters by tapping on the change button.
-
Explore the different modes and challenges
-
One Piece APK offline has different modes and challenges that offer different gameplay experiences and rewards. You can access them by tapping on the mode icon on the top right corner of the screen. There, you can see four options: story mode, arcade mode, survival mode, and training mode. Here is a brief description of each mode:
-
-
Story mode: This is where you follow the main storyline of the anime and fight against various enemies and bosses. You can also unlock new characters and stages as you complete each chapter.
-
Arcade mode: This is where you fight against random opponents in a series of battles. You can choose your difficulty level and earn coins and items as you win.
-
Survival mode: This is where you test your endurance and skills by fighting against endless waves of enemies. You can see how long you can last and how many enemies you can defeat.
-
Training mode: This is where you practice your moves and combos without any pressure or interruption. You can also adjust the settings of your opponent, such as their level, health, defense, and attack.
-
-
Collect coins and items to upgrade your equipment
-
One Piece APK offline has a lot of coins and items that you can collect by playing the game. You can use them to upgrade your equipment, such as your weapons, armor, accessories, and costumes. You can access the shop menu by tapping on the shop icon on the bottom right corner of the screen. There, you can see four options: weapon shop, armor shop, accessory shop, and costume shop. Here is a brief description of each shop:
-
-
Weapon shop: This is where you can buy new weapons or upgrade your existing ones. Weapons have different attributes, such as power, speed, range, and special effects.
-
Armor shop: This is where you can buy new armor or upgrade what you already have. Armor has different attributes, such as defense, health, resistance, and special effects.
-
Accessory shop: This is where you can buy new accessories or upgrade your existing ones. Accessories have different attributes, such as attack, critical, combo, and special effects.
-
Costume shop: This is where you can buy new costumes or change your existing ones. Costumes have different appearances, but they do not affect your stats or skills.
-
-
Join online battles and tournaments with other players
-
One Piece APK offline has an online mode that lets you battle with other players around the world. You can access it by tapping on the online icon on the bottom left corner of the screen. There, you can see two options: battle mode and tournament mode. Here is a brief description of each mode:
-
-
Battle mode: This is where you can join or create a room with up to four players and fight against each other in real time. You can choose your character, stage, and rules before the battle. You can also chat with other players and see their profiles and rankings.
-
Tournament mode: This is where you can join or create a tournament with up to 16 players and compete for the championship. You can choose your character, stage, and rules before the tournament. You can also chat with other players and see their profiles and rankings.
-
-
Conclusion
-
One Piece APK offline is a great game for fans of the anime who want to play it on their Android devices without an internet connection. It has a lot of features, modes, challenges, and characters that make it fun and exciting. It also has an online mode that lets you battle with other players around the world. If you want to download and install One Piece APK offline, you can follow the steps in this article and enjoy playing it.
-
FAQs
-
Here are some frequently asked questions about One Piece APK offline:
-
Is One Piece APK offline safe to download and install?
-
One Piece APK offline is safe to download and install as long as you get it from a reliable source. Always check the reviews and ratings of other users, or scan the file with antivirus software before installing it. You should also enable installation from unknown sources on your device only while you install the APK file, and disable it again afterwards.
-
Is One Piece APK offline free to play?
-
One Piece APK offline is free to play, but it may contain some in-app purchases that require real money. You can buy coins and items to upgrade your equipment, or unlock new characters and stages. However, you can also earn coins and items by playing the game, so you don't have to spend any money if you don't want to.
-
How can I update One Piece APK offline?
-
One Piece APK offline may not update automatically like the original One Piece game from the Google Play Store. You may need to download and install the latest version of the APK file from the same source where you got it before. You should also back up your data before updating, in case something goes wrong.
-
How can I contact the developer of One Piece APK offline?
-
One Piece APK offline is not an official game from the original developer of One Piece, which is Bandai Namco Entertainment. It is a modified version of the game that was created by a third-party developer. Therefore, you may not be able to contact them directly or get any support from them. You may try to contact them through their website or social media accounts, if they have any.
-
What are some alternatives to One Piece APK offline?
-
If you are looking for some alternatives to One Piece APK offline, you may want to try these games:
-
-
[One Piece Bounty Rush]: This is an official game from Bandai Namco Entertainment that lets you play as one of the characters from the anime and join 4 vs 4 real-time battles with other players.
-
[One Piece Treasure Cruise]: This is another official game from Bandai Namco Entertainment that lets you play as one of the characters from the anime and relive their stories in a turn-based RPG.
-
[One Piece Fighting Path]: This is a new game from Nuverse that lets you play as one of the characters from the anime and fight against enemies in a 3D action RPG.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
deleted file mode 100644
index ded1ddc59edaa6c42e360335ad5feecada3c337e..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
+++ /dev/null
@@ -1,174 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-import PIL
-
-from ...models import UNet2DModel, VQModel
-from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-from ...schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from ...utils import PIL_INTERPOLATION
-
-
-def preprocess(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = paddle.to_tensor(image)
- return 2.0 * image - 1.0
-
-
-class LDMSuperResolutionPipeline(DiffusionPipeline):
- r"""
- A pipeline for image super-resolution using latent diffusion.
-
- This class inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Parameters:
- vqvae ([`VQModel`]):
- Vector-quantized (VQ) VAE Model to encode and decode images to and from latent representations.
- unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`],
- [`EulerAncestralDiscreteScheduler`], [`DPMSolverMultistepScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vqvae: VQModel,
- unet: UNet2DModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- ):
- super().__init__()
- self.register_modules(vqvae=vqvae, unet=unet, scheduler=scheduler)
-
- @paddle.no_grad()
- def __call__(
- self,
- image: Union[paddle.Tensor, PIL.Image.Image],
- batch_size: Optional[int] = 1,
- num_inference_steps: Optional[int] = 100,
- eta: Optional[float] = 0.0,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- **kwargs,
- ) -> Union[Tuple, ImagePipelineOutput]:
- r"""
- Args:
- image (`paddle.Tensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process.
- batch_size (`int`, *optional*, defaults to 1):
- Number of images to generate.
- num_inference_steps (`int`, *optional*, defaults to 100):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`paddle.Generator`, *optional*):
- One or a list of paddle generator(s) to make generation deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
-
- Returns:
- [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
- generated images.
- """
-
- if isinstance(image, PIL.Image.Image):
- batch_size = 1
- elif isinstance(image, paddle.Tensor):
- batch_size = image.shape[0]
- else:
- raise ValueError(f"`image` has to be of type `PIL.Image.Image` or `paddle.Tensor` but is {type(image)}")
-
- if isinstance(image, PIL.Image.Image):
- image = preprocess(image)
-
- height, width = image.shape[-2:]
-
- # in_channels should be 6: 3 for latents, 3 for low resolution image
- latents_shape = (batch_size, self.unet.in_channels // 2, height, width)
- latents_dtype = next(self.unet.named_parameters())[1].dtype
-
- latents = paddle.randn(latents_shape, generator=generator, dtype=latents_dtype)
-
- image = image.cast(latents_dtype)
-
- self.scheduler.set_timesteps(num_inference_steps)
- timesteps_tensor = self.scheduler.timesteps
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature.
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_kwargs = {}
- if accepts_eta:
- extra_kwargs["eta"] = eta
-
- for t in self.progress_bar(timesteps_tensor):
- # concat latents and low resolution image in the channel dimension.
- latents_input = paddle.concat([latents, image], axis=1)
- latents_input = self.scheduler.scale_model_input(latents_input, t)
- # predict the noise residual
- noise_pred = self.unet(latents_input, t).sample
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_kwargs).prev_sample
-
- # decode the image latents with the VQVAE
- image = self.vqvae.decode(latents).sample
- image = paddle.clip(image, -1.0, 1.0)
- image = image / 2 + 0.5
- image = image.transpose([0, 2, 3, 1]).cast("float32").numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
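For orientation, here is a minimal usage sketch of the pipeline deleted above. It is not part of the repository: the checkpoint id and file names are illustrative, and it assumes a ppdiffusers installation that exposes this pipeline class.

import PIL.Image
from ppdiffusers import LDMSuperResolutionPipeline

# load vqvae + unet + scheduler from a pretrained checkpoint (the id is an example)
pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")

# preprocess() above crops the input to a multiple of 32 and maps pixel values to [-1, 1]
low_res = PIL.Image.open("low_res.png").convert("RGB")

# run the denoising loop; with output_type="pil" the result is a list of PIL images
upscaled = pipeline(low_res, num_inference_steps=100, eta=1.0).images[0]
upscaled.save("upscaled.png")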
diff --git a/spaces/4Taps/SadTalker/src/gradio_demo.py b/spaces/4Taps/SadTalker/src/gradio_demo.py
deleted file mode 100644
index 4f78c97349652e23cf463c49527191fcec795564..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/gradio_demo.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import torch, uuid
-from time import gmtime, strftime
-import os, sys, shutil
-from src.utils.preprocess import CropAndExtract
-from src.test_audio2coeff import Audio2Coeff
-from src.facerender.animate import AnimateFromCoeff
-from src.generate_batch import get_data
-from src.generate_facerender_batch import get_facerender_data
-from src.utils.text2speech import text2speech
-
-from pydub import AudioSegment
-
-def mp3_to_wav(mp3_filename,wav_filename,frame_rate):
- mp3_file = AudioSegment.from_file(file=mp3_filename)
- mp3_file.set_frame_rate(frame_rate).export(wav_filename,format="wav")
-
-
-class SadTalker():
-
- def __init__(self, checkpoint_path='checkpoints', config_path='src/config'):
-
- if torch.cuda.is_available():
- device = "cuda"
- else:
- device = "cpu"
-
- os.environ['TORCH_HOME']= checkpoint_path
-
- path_of_lm_croper = os.path.join( checkpoint_path, 'shape_predictor_68_face_landmarks.dat')
- path_of_net_recon_model = os.path.join( checkpoint_path, 'epoch_20.pth')
- dir_of_BFM_fitting = os.path.join( checkpoint_path, 'BFM_Fitting')
- wav2lip_checkpoint = os.path.join( checkpoint_path, 'wav2lip.pth')
-
- audio2pose_checkpoint = os.path.join( checkpoint_path, 'auido2pose_00140-model.pth')
- audio2pose_yaml_path = os.path.join( config_path, 'auido2pose.yaml')
-
- audio2exp_checkpoint = os.path.join( checkpoint_path, 'auido2exp_00300-model.pth')
- audio2exp_yaml_path = os.path.join( config_path, 'auido2exp.yaml')
-
- free_view_checkpoint = os.path.join( checkpoint_path, 'facevid2vid_00189-model.pth.tar')
- mapping_checkpoint = os.path.join( checkpoint_path, 'mapping_00229-model.pth.tar')
- facerender_yaml_path = os.path.join( config_path, 'facerender.yaml')
-
- #init model
- print(path_of_lm_croper)
- self.preprocess_model = CropAndExtract(path_of_lm_croper, path_of_net_recon_model, dir_of_BFM_fitting, device)
-
- print(audio2pose_checkpoint)
- self.audio_to_coeff = Audio2Coeff(audio2pose_checkpoint, audio2pose_yaml_path,
- audio2exp_checkpoint, audio2exp_yaml_path, wav2lip_checkpoint, device)
- print(free_view_checkpoint)
- self.animate_from_coeff = AnimateFromCoeff(free_view_checkpoint, mapping_checkpoint,
- facerender_yaml_path, device)
- self.device = device
-
- def test(self, source_image, driven_audio, still_mode, use_enhancer, result_dir='./'):
-
- time_tag = str(uuid.uuid4())
- save_dir = os.path.join(result_dir, time_tag)
- os.makedirs(save_dir, exist_ok=True)
-
- input_dir = os.path.join(save_dir, 'input')
- os.makedirs(input_dir, exist_ok=True)
-
- print(source_image)
- pic_path = os.path.join(input_dir, os.path.basename(source_image))
- shutil.move(source_image, input_dir)
-
- if os.path.isfile(driven_audio):
- audio_path = os.path.join(input_dir, os.path.basename(driven_audio))
-
- #### mp3 to wav
- if '.mp3' in audio_path:
- mp3_to_wav(driven_audio, audio_path.replace('.mp3', '.wav'), 16000)
- audio_path = audio_path.replace('.mp3', '.wav')
- else:
- shutil.move(driven_audio, input_dir)
- else:
- # the original code only referenced `text2speech` here without calling it, which left
- # `audio_path` undefined further down; fail explicitly until TTS input is wired up
- raise ValueError('driven_audio must point to an existing audio file')
-
-
- os.makedirs(save_dir, exist_ok=True)
- pose_style = 0
- #crop image and extract 3dmm from image
- first_frame_dir = os.path.join(save_dir, 'first_frame_dir')
- os.makedirs(first_frame_dir, exist_ok=True)
- first_coeff_path, crop_pic_path, original_size = self.preprocess_model.generate(pic_path, first_frame_dir)
-
- if first_coeff_path is None:
- raise AttributeError("No face is detected")
-
- #audio2ceoff
- batch = get_data(first_coeff_path, audio_path, self.device) # longer audio?
- coeff_path = self.audio_to_coeff.generate(batch, save_dir, pose_style)
- #coeff2video
- batch_size = 4
- data = get_facerender_data(coeff_path, crop_pic_path, first_coeff_path, audio_path, batch_size, still_mode=still_mode)
- self.animate_from_coeff.generate(data, save_dir, enhancer='gfpgan' if use_enhancer else None, original_size=original_size)
- video_name = data['video_name']
- print(f'The generated video is named {video_name} in {save_dir}')
-
- if torch.cuda.is_available():
- # only touch the CUDA allocator when a GPU is in use; these calls fail on CPU-only machines
- torch.cuda.empty_cache()
- torch.cuda.synchronize()
- import gc; gc.collect()
-
- if use_enhancer:
- return os.path.join(save_dir, video_name+'_enhanced.mp4'), os.path.join(save_dir, video_name+'_enhanced.mp4')
-
- else:
- return os.path.join(save_dir, video_name+'.mp4'), os.path.join(save_dir, video_name+'.mp4')
-
-
-
\ No newline at end of file
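As a rough illustration, the SadTalker wrapper deleted above could be driven like this; the import path and file names are placeholders and this snippet is not part of the original Space.

from src.gradio_demo import SadTalker

talker = SadTalker(checkpoint_path='checkpoints', config_path='src/config')
# animate a portrait with a driving audio clip; still_mode keeps the head pose mostly fixed
video_path, _ = talker.test(
    source_image='examples/portrait.png',
    driven_audio='examples/speech.wav',
    still_mode=True,
    use_enhancer=False,
    result_dir='./results',
)
print('generated video:', video_path)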
diff --git a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/__init__.py b/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/model6_inference.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/model6_inference.py
deleted file mode 100644
index 4b2fd561e6f07e09b7cb9d6a962c60df7fe43e0d..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/model6_inference.py
+++ /dev/null
@@ -1,270 +0,0 @@
-"""old name: test_runtime_model6.py"""
-
-import json
-import os
-import subprocess
-import sys
-import warnings
-from time import time
-from typing import Union, Tuple, Any
-
-import pandas as pd
-from mmdet.apis import inference_detector
-from mmdet.apis import init_detector as det_init_detector
-from mmpose.apis import inference_topdown
-from mmpose.apis import init_model as pose_init_model
-from mmpretrain import ImageClassificationInferencer
-from mmpretrain.utils import register_all_modules
-from .extensions.vis_pred_save import save_result
-
-register_all_modules()
-
-st = ist = time()
-# irt = time() - st
-# print(f'==Packages importing time is {irt}s==\n')
-
-print('==Start==')
-
-# DEVICE = 'cuda:0,1,2,3'
-DEVICE = 'cpu'
-abs_path = os.path.dirname(os.path.abspath(__file__))
-yolo_config = os.path.join(abs_path, 'Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov6_s_fast.py')
-yolo_checkpoint = os.path.join(abs_path, 'Model6_0_ClothesDetection/mmyolo/work_dirs/yolov6_s_df2_0.4/epoch_64.pth')
-pretrain_config = os.path.join(abs_path, 'Model6_2_ProfileRecogition/mmpretrain/configs/resnext101_4xb32_2048e_3c_noF.py')
-pretrain_checkpoint = os.path.join(abs_path, 'Model6_2_ProfileRecogition/mmpretrain/work_dirs/'
- 'resnext101_4xb32_2048e_3c_noF/best_accuracy_top1_epoch_1520.pth')
-pose_configs = {
- 'short_sleeved_shirt': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb32-60e_deepfashion2_short_sleeved_shirt_256x192.py',
- 'long_sleeved_shirt': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_long_sleeved_shirt_256x192.py',
- 'short_sleeved_outwear': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb8-150e_deepfashion2_short_sleeved_outwear_256x192.py',
- 'long_sleeved_outwear': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py',
- 'vest': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192.py',
- 'sling': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_sling_256x192.py',
- 'shorts': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192.py',
- 'trousers': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-60e_deepfashion2_trousers_256x192.py',
- 'skirt': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py',
- 'short_sleeved_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py',
- 'long_sleeved_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-150e_deepfashion2_long_sleeved_dress_256x192.py',
- 'vest_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py',
- 'sling_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-210e_deepfashion2_sling_dress_256x192.py',
-}
-
-pose_checkpoints = {
- 'short_sleeved_shirt': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb32-60e_deepfashion2_short_sleeved_shirt_256x192/best_PCK_epoch_50.pth',
- 'long_sleeved_shirt': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_long_sleeved_shirt_256x192/best_PCK_epoch_60.pth',
- 'short_sleeved_outwear': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb8-150e_deepfashion2_short_sleeved_outwear_256x192/best_PCK_epoch_120.pth',
- 'long_sleeved_outwear': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/best_PCK_epoch_100.pth',
- 'vest': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192/best_PCK_epoch_90.pth',
- 'sling': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_sling_256x192/best_PCK_epoch_60.pth',
- 'shorts': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192/best_PCK_epoch_160.pth',
- 'trousers': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-60e_deepfashion2_trousers_256x192/best_PCK_epoch_30.pth',
- 'skirt': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/best_PCK_epoch_110.pth',
- 'short_sleeved_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/best_PCK_epoch_100.pth',
- 'long_sleeved_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-150e_deepfashion2_long_sleeved_dress_256x192/best_PCK_epoch_120.pth',
- 'vest_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192/best_PCK_epoch_80.pth',
- 'sling_dress': 'Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_sling_dress_256x192/best_PCK_epoch_140.pth',
-}
-
-start_load = time()
-yolo_inferencer = det_init_detector(yolo_config, yolo_checkpoint, device=DEVICE)
-print('=' * 2 + 'The model loading time of MMYolo is {}s'.format(time() - start_load) + '=' * 2)
-
-start_load = time()
-pretrain_inferencer = ImageClassificationInferencer(model=pretrain_config,
- pretrained=pretrain_checkpoint,
- device=DEVICE)
-print('=' * 2 + 'The model loading time of MMPretrain is {}s'.format(time() - start_load) + '=' * 2)
-
-
-def get_bbox_results_by_classes(result) -> dict:
- """
- :param result: the result of mmyolo inference
- :return: a dict of bbox results by classes
- """
- bbox_results_by_classes = {
- 'short_sleeved_shirt': [],
- 'long_sleeved_shirt': [],
- 'short_sleeved_outwear': [],
- 'long_sleeved_outwear': [],
- 'vest': [],
- 'sling': [],
- 'shorts': [],
- 'trousers': [],
- 'skirt': [],
- 'short_sleeved_dress': [],
- 'long_sleeved_dress': [],
- 'vest_dress': [],
- 'sling_dress': [],
- }
- pred_instances = result.pred_instances
- _bboxes = pred_instances.bboxes
- _labels = pred_instances.labels
- _scores = pred_instances.scores
- labels = _labels[_scores > 0.3]  # keep predictions with confidence above 0.3
- bboxes = _bboxes[_scores > 0.3]
- # use enumerate to get index and value
- for idx, value in enumerate(labels):
- class_name = list(bbox_results_by_classes.keys())[value]
- x1 = bboxes[idx][0]
- y1 = bboxes[idx][1]
- x2 = bboxes[idx][2]
- y2 = bboxes[idx][3]
- bbox_results_by_classes[class_name].append([x1, y1, x2, y2])
- return bbox_results_by_classes
-
-
-def mmyolo_inference(img: Union[str, list], model) -> tuple:
- mmyolo_st = time()
- result = inference_detector(model, img)
- mmyolo_et = time()
-
- return result, (mmyolo_et - mmyolo_st)
-
-
-def mmpose_inference(person_results: dict, use_bbox: bool,
- mmyolo_cfg_path: str, mmyolo_ckf_path: str,
- img: str, output_path_root: str, save=True, device='cpu') -> float:
- """
- :param person_results: the result of mmyolo inference
- :param use_bbox: whether to use bbox to inference the pose results
- :param mmyolo_cfg_path: the file path of mmyolo config
- :param mmyolo_ckf_path: the file path of mmyolo checkpoint
- :param img: the path of the image to inference
- :param output_path_root: the root path of the output
- :param save: whether to save the inference result, including the image and the predicted json file.
- If `save` is False, `output_path_root` is ignored.
- :param device: the device to inference
- """
- mmpose_st = time()
- poses = {
- 'short_sleeved_shirt': {},
- 'long_sleeved_shirt': {},
- 'short_sleeved_outwear': {},
- 'long_sleeved_outwear': {},
- 'vest': {},
- 'sling': {},
- 'shorts': {},
- 'trousers': {},
- 'skirt': {},
- 'short_sleeved_dress': {},
- 'long_sleeved_dress': {},
- 'vest_dress': {},
- 'sling_dress': {}
- }
- for label, person_result in person_results.items():
- if len(person_result) == 0:
- continue
- pose_config = pose_configs[label]
- pose_checkpoint = pose_checkpoints[label]
- if not use_bbox:
- from mmpose.apis import MMPoseInferencer
-
- warnings.warn('use_bbox is False: falling back to MMPoseInferencer to infer poses '
- 'without the detected bounding boxes; results may be unreliable')
- inferencer = MMPoseInferencer(
- pose2d=pose_config,
- pose2d_weights=pose_checkpoint,
- det_model=mmyolo_cfg_path,
- det_weights=mmyolo_ckf_path
- )
- result_generator = inferencer(img, out_dir='upload_to_web_tmp', return_vis=True)
- result = next(result_generator)
- # print(result)
- else:
- pose_model = pose_init_model(
- pose_config,
- pose_checkpoint,
- device=device
- )
- pose_results = inference_topdown(pose_model, img, person_result, bbox_format='xyxy')
- poses[label]['pose_results'] = pose_results
- poses[label]['pose_model'] = pose_model
- mmpose_et = time()
- if save:
-
- save_result(img, poses, out_dir=output_path_root)
-
- return mmpose_et - mmpose_st
-
-
-def mmpretrain_inference(img: Union[str, list], model) -> tuple:
- mmpretain_st = time()
- cls_result = model(img)
- mmpretain_et = time()
- return cls_result, (mmpretain_et - mmpretain_st)
-
-
-def main(img_path: str, output_path_root='upload_to_web_tmp', use_bbox=True, device='cpu', test_runtime=False) -> dict:
- """
- :param img_path: the path of the image or the folder of images
- :param output_path_root: the root path of the output
- :param use_bbox: whether to use bbox to inference the pose results
- :param device: the device to inference
- :param test_runtime: whether to test the runtime
-
- :return: the results of model6_2 in form of dictionary
- """
- if os.path.isdir(img_path):
- img_names = os.listdir(img_path)
- img_paths = [os.path.join(img_path, img_name) for img_name in img_names]
- elif os.path.isfile(img_path):
- img_paths = [img_path]
- else:
- print('==img_path must be the path of an image or a folder!==')
- raise ValueError()
-
- runtimes = [['img_name',
- 'runtime_mmyolo', 'percent1',
- 'runtime_mmpose', 'percent2',
- 'runtime_mmpretrain', 'percent3',
- 'runtime_total']]
-
- cls_results = {}
-
- for img in img_paths:
- print(f'==Start to inference {img}==')
- yolo_result, runtime_mmyolo = mmyolo_inference(img, yolo_inferencer)
- print(f'==mmyolo running time is {runtime_mmyolo}s==')
-
- person_results = get_bbox_results_by_classes(yolo_result)
-
- runtime_mmpose = mmpose_inference(
- person_results=person_results,
- use_bbox=use_bbox,
- mmyolo_cfg_path=yolo_config,
- mmyolo_ckf_path=yolo_checkpoint,
- img=img,
- output_path_root=output_path_root,
- save=True,
- device=device
- )
- print(f'mmpose running time is {runtime_mmpose}s')
-
- cls_result, runtime_mmpretrain = mmpretrain_inference(img, pretrain_inferencer)
- print(f'mmpretrain running time is {runtime_mmpretrain}s')
- cls_results[os.path.basename(img)] = cls_result
- if test_runtime:
- runtime_total = runtime_mmyolo + runtime_mmpose + runtime_mmpretrain
- percent1 = str(round(runtime_mmyolo / runtime_total * 100, 2)) + '%'
- percent2 = str(round(runtime_mmpose / runtime_total * 100, 2)) + '%'
- percent3 = str(round(runtime_mmpretrain / runtime_total * 100, 2)) + '%'
- img_name = os.path.basename(img)
- runtimes.append([img_name,
- runtime_mmyolo, percent1,
- runtime_mmpose, percent2,
- runtime_mmpretrain, percent3,
- runtime_total])
- if test_runtime:
- df = pd.DataFrame(runtimes[1:], columns=runtimes[0])  # skip the header row when building the frame
- df.to_csv('runtimes.csv', index=False)
-
- return cls_results
-
-
-if __name__ == "__main__":
- # main(1)
- main('data-test/')
- # main('data-test/000002.jpg')
- rt = time() - st
- print(f'==Total time cost is {rt}s==')
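A hypothetical way to call this entry point from another script follows; the module path and data folder are placeholders.

from Model.Model6.model6_inference import main

# runs detection (MMYOLO), keypoints (MMPose) and profile classification (MMPretrain) on a folder of images
results = main('data-test/', output_path_root='upload_to_web_tmp',
               use_bbox=True, device='cpu', test_runtime=True)
for img_name, cls_result in results.items():
    # each entry is the raw output of mmpretrain's ImageClassificationInferencer
    print(img_name, cls_result)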
diff --git a/spaces/Albertha/qwe123/Dockerfile b/spaces/Albertha/qwe123/Dockerfile
deleted file mode 100644
index a905ef711861706570e25829b42e8f567c0e4d40..0000000000000000000000000000000000000000
--- a/spaces/Albertha/qwe123/Dockerfile
+++ /dev/null
@@ -1,13 +0,0 @@
-FROM node:slim
-
-WORKDIR /app
-
-COPY . .
-
-EXPOSE 7860
-
-RUN apt-get update && \
- chmod 775 server index.js package.json start.sh /app &&\
- npm install
-
-CMD ["node", "index.js"]
diff --git a/spaces/Alpaca233/LangchainPDF/README.md b/spaces/Alpaca233/LangchainPDF/README.md
deleted file mode 100644
index 0d57bbab306835700ef362b77c4f5c3b8862647a..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/LangchainPDF/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LangchainPDF
-emoji: 🏆
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_condition.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_condition.py
deleted file mode 100644
index 4eeb1b926bec972f1c5c94e80f7fcf984dcfd181..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_condition.py
+++ /dev/null
@@ -1,1107 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import copy
-import gc
-import os
-import tempfile
-import unittest
-
-import torch
-from parameterized import parameterized
-from pytest import mark
-
-from diffusers import UNet2DConditionModel
-from diffusers.models.attention_processor import CustomDiffusionAttnProcessor, LoRAAttnProcessor
-from diffusers.utils import (
- floats_tensor,
- load_hf_numpy,
- logging,
- require_torch_gpu,
- slow,
- torch_all_close,
- torch_device,
-)
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
-
-from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
-
-
-logger = logging.get_logger(__name__)
-
-enable_full_determinism()
-
-
-def create_lora_layers(model, mock_weights: bool = True):
- lora_attn_procs = {}
- for name in model.attn_processors.keys():
- cross_attention_dim = None if name.endswith("attn1.processor") else model.config.cross_attention_dim
- if name.startswith("mid_block"):
- hidden_size = model.config.block_out_channels[-1]
- elif name.startswith("up_blocks"):
- block_id = int(name[len("up_blocks.")])
- hidden_size = list(reversed(model.config.block_out_channels))[block_id]
- elif name.startswith("down_blocks"):
- block_id = int(name[len("down_blocks.")])
- hidden_size = model.config.block_out_channels[block_id]
-
- lora_attn_procs[name] = LoRAAttnProcessor(hidden_size=hidden_size, cross_attention_dim=cross_attention_dim)
- lora_attn_procs[name] = lora_attn_procs[name].to(model.device)
-
- if mock_weights:
- # add 1 to weights to mock trained weights
- with torch.no_grad():
- lora_attn_procs[name].to_q_lora.up.weight += 1
- lora_attn_procs[name].to_k_lora.up.weight += 1
- lora_attn_procs[name].to_v_lora.up.weight += 1
- lora_attn_procs[name].to_out_lora.up.weight += 1
-
- return lora_attn_procs
-
-
-def create_custom_diffusion_layers(model, mock_weights: bool = True):
- train_kv = True
- train_q_out = True
- custom_diffusion_attn_procs = {}
-
- st = model.state_dict()
- for name, _ in model.attn_processors.items():
- cross_attention_dim = None if name.endswith("attn1.processor") else model.config.cross_attention_dim
- if name.startswith("mid_block"):
- hidden_size = model.config.block_out_channels[-1]
- elif name.startswith("up_blocks"):
- block_id = int(name[len("up_blocks.")])
- hidden_size = list(reversed(model.config.block_out_channels))[block_id]
- elif name.startswith("down_blocks"):
- block_id = int(name[len("down_blocks.")])
- hidden_size = model.config.block_out_channels[block_id]
- layer_name = name.split(".processor")[0]
- weights = {
- "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"],
- "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"],
- }
- if train_q_out:
- weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"]
- weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"]
- weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"]
- if cross_attention_dim is not None:
- custom_diffusion_attn_procs[name] = CustomDiffusionAttnProcessor(
- train_kv=train_kv,
- train_q_out=train_q_out,
- hidden_size=hidden_size,
- cross_attention_dim=cross_attention_dim,
- ).to(model.device)
- custom_diffusion_attn_procs[name].load_state_dict(weights)
- if mock_weights:
- # add 1 to weights to mock trained weights
- with torch.no_grad():
- custom_diffusion_attn_procs[name].to_k_custom_diffusion.weight += 1
- custom_diffusion_attn_procs[name].to_v_custom_diffusion.weight += 1
- else:
- custom_diffusion_attn_procs[name] = CustomDiffusionAttnProcessor(
- train_kv=False,
- train_q_out=False,
- hidden_size=hidden_size,
- cross_attention_dim=cross_attention_dim,
- )
- del st
- return custom_diffusion_attn_procs
-
-
-class UNet2DConditionModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
- model_class = UNet2DConditionModel
- main_input_name = "sample"
-
- @property
- def dummy_input(self):
- batch_size = 4
- num_channels = 4
- sizes = (32, 32)
-
- noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
- time_step = torch.tensor([10]).to(torch_device)
- encoder_hidden_states = floats_tensor((batch_size, 4, 32)).to(torch_device)
-
- return {"sample": noise, "timestep": time_step, "encoder_hidden_states": encoder_hidden_states}
-
- @property
- def input_shape(self):
- return (4, 32, 32)
-
- @property
- def output_shape(self):
- return (4, 32, 32)
-
- def prepare_init_args_and_inputs_for_common(self):
- init_dict = {
- "block_out_channels": (32, 64),
- "down_block_types": ("CrossAttnDownBlock2D", "DownBlock2D"),
- "up_block_types": ("UpBlock2D", "CrossAttnUpBlock2D"),
- "cross_attention_dim": 32,
- "attention_head_dim": 8,
- "out_channels": 4,
- "in_channels": 4,
- "layers_per_block": 2,
- "sample_size": 32,
- }
- inputs_dict = self.dummy_input
- return init_dict, inputs_dict
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_enable_works(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
-
- model.enable_xformers_memory_efficient_attention()
-
- assert (
- model.mid_block.attentions[0].transformer_blocks[0].attn1.processor.__class__.__name__
- == "XFormersAttnProcessor"
- ), "xformers is not enabled"
-
- @unittest.skipIf(torch_device == "mps", "Gradient checkpointing skipped on MPS")
- def test_gradient_checkpointing(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- assert not model.is_gradient_checkpointing and model.training
-
- out = model(**inputs_dict).sample
- # run the backward pass on the model. For simplicity we skip a real loss function
- # and backprop on the mean difference from random targets instead
- model.zero_grad()
-
- labels = torch.randn_like(out)
- loss = (out - labels).mean()
- loss.backward()
-
- # re-instantiate the model now enabling gradient checkpointing
- model_2 = self.model_class(**init_dict)
- # clone model
- model_2.load_state_dict(model.state_dict())
- model_2.to(torch_device)
- model_2.enable_gradient_checkpointing()
-
- assert model_2.is_gradient_checkpointing and model_2.training
-
- out_2 = model_2(**inputs_dict).sample
- # run the backward pass on the model. For simplicity we skip a real loss function
- # and backprop on the mean difference from the same random targets
- model_2.zero_grad()
- loss_2 = (out_2 - labels).mean()
- loss_2.backward()
-
- # compare the output and parameters gradients
- self.assertTrue((loss - loss_2).abs() < 1e-5)
- named_params = dict(model.named_parameters())
- named_params_2 = dict(model_2.named_parameters())
- for name, param in named_params.items():
- self.assertTrue(torch_all_close(param.grad.data, named_params_2[name].grad.data, atol=5e-5))
-
- def test_model_with_attention_head_dim_tuple(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.sample
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_with_use_linear_projection(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["use_linear_projection"] = True
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.sample
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_with_cross_attention_dim_tuple(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["cross_attention_dim"] = (32, 32)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.sample
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_with_simple_projection(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- batch_size, _, _, sample_size = inputs_dict["sample"].shape
-
- init_dict["class_embed_type"] = "simple_projection"
- init_dict["projection_class_embeddings_input_dim"] = sample_size
-
- inputs_dict["class_labels"] = floats_tensor((batch_size, sample_size)).to(torch_device)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.sample
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_with_class_embeddings_concat(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- batch_size, _, _, sample_size = inputs_dict["sample"].shape
-
- init_dict["class_embed_type"] = "simple_projection"
- init_dict["projection_class_embeddings_input_dim"] = sample_size
- init_dict["class_embeddings_concat"] = True
-
- inputs_dict["class_labels"] = floats_tensor((batch_size, sample_size)).to(torch_device)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.sample
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_attention_slicing(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- model.set_attention_slice("auto")
- with torch.no_grad():
- output = model(**inputs_dict)
- assert output is not None
-
- model.set_attention_slice("max")
- with torch.no_grad():
- output = model(**inputs_dict)
- assert output is not None
-
- model.set_attention_slice(2)
- with torch.no_grad():
- output = model(**inputs_dict)
- assert output is not None
-
- def test_model_sliceable_head_dim(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
-
- def check_sliceable_dim_attr(module: torch.nn.Module):
- if hasattr(module, "set_attention_slice"):
- assert isinstance(module.sliceable_head_dim, int)
-
- for child in module.children():
- check_sliceable_dim_attr(child)
-
- # retrieve number of attention layers
- for module in model.children():
- check_sliceable_dim_attr(module)
-
- def test_special_attn_proc(self):
- class AttnEasyProc(torch.nn.Module):
- def __init__(self, num):
- super().__init__()
- self.weight = torch.nn.Parameter(torch.tensor(num))
- self.is_run = False
- self.number = 0
- self.counter = 0
-
- def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None, number=None):
- batch_size, sequence_length, _ = hidden_states.shape
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
-
- query = attn.to_q(hidden_states)
-
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
- key = attn.to_k(encoder_hidden_states)
- value = attn.to_v(encoder_hidden_states)
-
- query = attn.head_to_batch_dim(query)
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
- hidden_states = torch.bmm(attention_probs, value)
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- hidden_states += self.weight
-
- self.is_run = True
- self.counter += 1
- self.number = number
-
- return hidden_states
-
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- processor = AttnEasyProc(5.0)
-
- model.set_attn_processor(processor)
- model(**inputs_dict, cross_attention_kwargs={"number": 123}).sample
-
- assert processor.counter == 12
- assert processor.is_run
- assert processor.number == 123
-
- @parameterized.expand(
- [
- # fmt: off
- [torch.bool],
- [torch.long],
- [torch.float],
- # fmt: on
- ]
- )
- def test_model_xattn_mask(self, mask_dtype):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**{**init_dict, "attention_head_dim": (8, 16)})
- model.to(torch_device)
- model.eval()
-
- cond = inputs_dict["encoder_hidden_states"]
- with torch.no_grad():
- full_cond_out = model(**inputs_dict).sample
- assert full_cond_out is not None
-
- keepall_mask = torch.ones(*cond.shape[:-1], device=cond.device, dtype=mask_dtype)
- full_cond_keepallmask_out = model(**{**inputs_dict, "encoder_attention_mask": keepall_mask}).sample
- assert full_cond_keepallmask_out.allclose(
- full_cond_out
- ), "a 'keep all' mask should give the same result as no mask"
-
- trunc_cond = cond[:, :-1, :]
- trunc_cond_out = model(**{**inputs_dict, "encoder_hidden_states": trunc_cond}).sample
- assert not trunc_cond_out.allclose(
- full_cond_out
- ), "discarding the last token from our cond should change the result"
-
- batch, tokens, _ = cond.shape
- mask_last = (torch.arange(tokens) < tokens - 1).expand(batch, -1).to(cond.device, mask_dtype)
- masked_cond_out = model(**{**inputs_dict, "encoder_attention_mask": mask_last}).sample
- assert masked_cond_out.allclose(
- trunc_cond_out
- ), "masking the last token from our cond should be equivalent to truncating that token out of the condition"
-
- # see diffusers.models.attention_processor::Attention#prepare_attention_mask
- # note: we may not need to fix mask padding to work for stable-diffusion cross-attn masks.
- # since the use-case (somebody passes in a too-short cross-attn mask) is pretty esoteric.
- # maybe it's fine that this only works for the unclip use-case.
- @mark.skip(
- reason="we currently pad mask by target_length tokens (what unclip needs), whereas stable-diffusion's cross-attn needs to instead pad by remaining_length."
- )
- def test_model_xattn_padding(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**{**init_dict, "attention_head_dim": (8, 16)})
- model.to(torch_device)
- model.eval()
-
- cond = inputs_dict["encoder_hidden_states"]
- with torch.no_grad():
- full_cond_out = model(**inputs_dict).sample
- assert full_cond_out is not None
-
- batch, tokens, _ = cond.shape
- keeplast_mask = (torch.arange(tokens) == tokens - 1).expand(batch, -1).to(cond.device, torch.bool)
- keeplast_out = model(**{**inputs_dict, "encoder_attention_mask": keeplast_mask}).sample
- assert not keeplast_out.allclose(full_cond_out), "a 'keep last token' mask should change the result"
-
- trunc_mask = torch.zeros(batch, tokens - 1, device=cond.device, dtype=torch.bool)
- trunc_mask_out = model(**{**inputs_dict, "encoder_attention_mask": trunc_mask}).sample
- assert trunc_mask_out.allclose(
- keeplast_out
- ), "a mask with fewer tokens than condition, will be padded with 'keep' tokens. a 'discard-all' mask missing the final token is thus equivalent to a 'keep last' mask."
-
- def test_lora_processors(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- sample1 = model(**inputs_dict).sample
-
- lora_attn_procs = create_lora_layers(model)
-
- # make sure we can set a list of attention processors
- model.set_attn_processor(lora_attn_procs)
- model.to(torch_device)
-
- # test that attn processors can be set to itself
- model.set_attn_processor(model.attn_processors)
-
- with torch.no_grad():
- sample2 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.0}).sample
- sample3 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
- sample4 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
-
- assert (sample1 - sample2).abs().max() < 3e-3
- assert (sample3 - sample4).abs().max() < 3e-3
-
- # sample 2 and sample 3 should be different
- assert (sample2 - sample3).abs().max() > 1e-4
-
- def test_lora_save_load(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- old_sample = model(**inputs_dict).sample
-
- lora_attn_procs = create_lora_layers(model)
- model.set_attn_processor(lora_attn_procs)
-
- with torch.no_grad():
- sample = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_attn_procs(tmpdirname)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- torch.manual_seed(0)
- new_model = self.model_class(**init_dict)
- new_model.to(torch_device)
- new_model.load_attn_procs(tmpdirname)
-
- with torch.no_grad():
- new_sample = new_model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
-
- assert (sample - new_sample).abs().max() < 1e-4
-
- # LoRA and no LoRA should NOT be the same
- assert (sample - old_sample).abs().max() > 1e-4
-
- def test_lora_save_load_safetensors(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- old_sample = model(**inputs_dict).sample
-
- lora_attn_procs = create_lora_layers(model)
- model.set_attn_processor(lora_attn_procs)
-
- with torch.no_grad():
- sample = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_attn_procs(tmpdirname, safe_serialization=True)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.safetensors")))
- torch.manual_seed(0)
- new_model = self.model_class(**init_dict)
- new_model.to(torch_device)
- new_model.load_attn_procs(tmpdirname)
-
- with torch.no_grad():
- new_sample = new_model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
-
- assert (sample - new_sample).abs().max() < 1e-4
-
- # LoRA and no LoRA should NOT be the same
- assert (sample - old_sample).abs().max() > 1e-4
-
- def test_lora_save_safetensors_load_torch(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- lora_attn_procs = create_lora_layers(model, mock_weights=False)
- model.set_attn_processor(lora_attn_procs)
- # Saving as torch, properly reloads with directly filename
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_attn_procs(tmpdirname)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- torch.manual_seed(0)
- new_model = self.model_class(**init_dict)
- new_model.to(torch_device)
- new_model.load_attn_procs(tmpdirname, weight_name="pytorch_lora_weights.bin")
-
- def test_lora_save_torch_force_load_safetensors_error(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- lora_attn_procs = create_lora_layers(model, mock_weights=False)
- model.set_attn_processor(lora_attn_procs)
- # Saving as torch, properly reloads with directly filename
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_attn_procs(tmpdirname)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin")))
- torch.manual_seed(0)
- new_model = self.model_class(**init_dict)
- new_model.to(torch_device)
- with self.assertRaises(IOError) as e:
- new_model.load_attn_procs(tmpdirname, use_safetensors=True)
- self.assertIn("Error no file named pytorch_lora_weights.safetensors", str(e.exception))
-
- def test_lora_on_off(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- old_sample = model(**inputs_dict).sample
-
- lora_attn_procs = create_lora_layers(model)
- model.set_attn_processor(lora_attn_procs)
-
- with torch.no_grad():
- sample = model(**inputs_dict, cross_attention_kwargs={"scale": 0.0}).sample
-
- model.set_default_attn_processor()
-
- with torch.no_grad():
- new_sample = model(**inputs_dict).sample
-
- assert (sample - new_sample).abs().max() < 1e-4
- assert (sample - old_sample).abs().max() < 3e-3
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_lora_xformers_on_off(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
- lora_attn_procs = create_lora_layers(model)
- model.set_attn_processor(lora_attn_procs)
-
- # default
- with torch.no_grad():
- sample = model(**inputs_dict).sample
-
- model.enable_xformers_memory_efficient_attention()
- on_sample = model(**inputs_dict).sample
-
- model.disable_xformers_memory_efficient_attention()
- off_sample = model(**inputs_dict).sample
-
- assert (sample - on_sample).abs().max() < 1e-4
- assert (sample - off_sample).abs().max() < 1e-4
-
- def test_custom_diffusion_processors(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- sample1 = model(**inputs_dict).sample
-
- custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False)
-
- # make sure we can set a list of attention processors
- model.set_attn_processor(custom_diffusion_attn_procs)
- model.to(torch_device)
-
- # test that attn processors can be set to itself
- model.set_attn_processor(model.attn_processors)
-
- with torch.no_grad():
- sample2 = model(**inputs_dict).sample
-
- assert (sample1 - sample2).abs().max() < 3e-3
-
- def test_custom_diffusion_save_load(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- old_sample = model(**inputs_dict).sample
-
- custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False)
- model.set_attn_processor(custom_diffusion_attn_procs)
-
- with torch.no_grad():
- sample = model(**inputs_dict).sample
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_attn_procs(tmpdirname)
- self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_custom_diffusion_weights.bin")))
- torch.manual_seed(0)
- new_model = self.model_class(**init_dict)
- new_model.to(torch_device)
- new_model.load_attn_procs(tmpdirname, weight_name="pytorch_custom_diffusion_weights.bin")
-
- with torch.no_grad():
- new_sample = new_model(**inputs_dict).sample
-
- assert (sample - new_sample).abs().max() < 1e-4
-
- # custom diffusion and no custom diffusion should be the same
- assert (sample - old_sample).abs().max() < 3e-3
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_custom_diffusion_xformers_on_off(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- torch.manual_seed(0)
- model = self.model_class(**init_dict)
- model.to(torch_device)
- custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False)
- model.set_attn_processor(custom_diffusion_attn_procs)
-
- # default
- with torch.no_grad():
- sample = model(**inputs_dict).sample
-
- model.enable_xformers_memory_efficient_attention()
- on_sample = model(**inputs_dict).sample
-
- model.disable_xformers_memory_efficient_attention()
- off_sample = model(**inputs_dict).sample
-
- assert (sample - on_sample).abs().max() < 1e-4
- assert (sample - off_sample).abs().max() < 1e-4
-
- def test_pickle(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["attention_head_dim"] = (8, 16)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- with torch.no_grad():
- sample = model(**inputs_dict).sample
-
- sample_copy = copy.copy(sample)
-
- assert (sample - sample_copy).abs().max() < 1e-4
-
-
-@slow
-class UNet2DConditionModelIntegrationTests(unittest.TestCase):
- def get_file_format(self, seed, shape):
- return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy"
-
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_latents(self, seed=0, shape=(4, 4, 64, 64), fp16=False):
- dtype = torch.float16 if fp16 else torch.float32
- image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
- return image
-
- def get_unet_model(self, fp16=False, model_id="CompVis/stable-diffusion-v1-4"):
- revision = "fp16" if fp16 else None
- torch_dtype = torch.float16 if fp16 else torch.float32
-
- model = UNet2DConditionModel.from_pretrained(
- model_id, subfolder="unet", torch_dtype=torch_dtype, revision=revision
- )
- model.to(torch_device).eval()
-
- return model
-
- def test_set_attention_slice_auto(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- unet = self.get_unet_model()
- unet.set_attention_slice("auto")
-
- latents = self.get_latents(33)
- encoder_hidden_states = self.get_encoder_hidden_states(33)
- timestep = 1
-
- with torch.no_grad():
- _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- mem_bytes = torch.cuda.max_memory_allocated()
-
- assert mem_bytes < 5 * 10**9
-
- def test_set_attention_slice_max(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- unet = self.get_unet_model()
- unet.set_attention_slice("max")
-
- latents = self.get_latents(33)
- encoder_hidden_states = self.get_encoder_hidden_states(33)
- timestep = 1
-
- with torch.no_grad():
- _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- mem_bytes = torch.cuda.max_memory_allocated()
-
- assert mem_bytes < 5 * 10**9
-
- def test_set_attention_slice_int(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- unet = self.get_unet_model()
- unet.set_attention_slice(2)
-
- latents = self.get_latents(33)
- encoder_hidden_states = self.get_encoder_hidden_states(33)
- timestep = 1
-
- with torch.no_grad():
- _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- mem_bytes = torch.cuda.max_memory_allocated()
-
- assert mem_bytes < 5 * 10**9
-
- def test_set_attention_slice_list(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- # there are 32 sliceable layers
- slice_list = 16 * [2, 3]
- unet = self.get_unet_model()
- unet.set_attention_slice(slice_list)
-
- latents = self.get_latents(33)
- encoder_hidden_states = self.get_encoder_hidden_states(33)
- timestep = 1
-
- with torch.no_grad():
- _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- mem_bytes = torch.cuda.max_memory_allocated()
-
- assert mem_bytes < 5 * 10**9
-
- def get_encoder_hidden_states(self, seed=0, shape=(4, 77, 768), fp16=False):
- dtype = torch.float16 if fp16 else torch.float32
- hidden_states = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
- return hidden_states
-
- @parameterized.expand(
- [
- # fmt: off
- [33, 4, [-0.4424, 0.1510, -0.1937, 0.2118, 0.3746, -0.3957, 0.0160, -0.0435]],
- [47, 0.55, [-0.1508, 0.0379, -0.3075, 0.2540, 0.3633, -0.0821, 0.1719, -0.0207]],
- [21, 0.89, [-0.6479, 0.6364, -0.3464, 0.8697, 0.4443, -0.6289, -0.0091, 0.1778]],
- [9, 1000, [0.8888, -0.5659, 0.5834, -0.7469, 1.1912, -0.3923, 1.1241, -0.4424]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_v1_4(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="CompVis/stable-diffusion-v1-4")
- latents = self.get_latents(seed)
- encoder_hidden_states = self.get_encoder_hidden_states(seed)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == latents.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [83, 4, [-0.2323, -0.1304, 0.0813, -0.3093, -0.0919, -0.1571, -0.1125, -0.5806]],
- [17, 0.55, [-0.0831, -0.2443, 0.0901, -0.0919, 0.3396, 0.0103, -0.3743, 0.0701]],
- [8, 0.89, [-0.4863, 0.0859, 0.0875, -0.1658, 0.9199, -0.0114, 0.4839, 0.4639]],
- [3, 1000, [-0.5649, 0.2402, -0.5518, 0.1248, 1.1328, -0.2443, -0.0325, -1.0078]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_v1_4_fp16(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="CompVis/stable-diffusion-v1-4", fp16=True)
- latents = self.get_latents(seed, fp16=True)
- encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == latents.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, 4, [-0.4430, 0.1570, -0.1867, 0.2376, 0.3205, -0.3681, 0.0525, -0.0722]],
- [47, 0.55, [-0.1415, 0.0129, -0.3136, 0.2257, 0.3430, -0.0536, 0.2114, -0.0436]],
- [21, 0.89, [-0.7091, 0.6664, -0.3643, 0.9032, 0.4499, -0.6541, 0.0139, 0.1750]],
- [9, 1000, [0.8878, -0.5659, 0.5844, -0.7442, 1.1883, -0.3927, 1.1192, -0.4423]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_v1_5(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="runwayml/stable-diffusion-v1-5")
- latents = self.get_latents(seed)
- encoder_hidden_states = self.get_encoder_hidden_states(seed)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == latents.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [83, 4, [-0.2695, -0.1669, 0.0073, -0.3181, -0.1187, -0.1676, -0.1395, -0.5972]],
- [17, 0.55, [-0.1290, -0.2588, 0.0551, -0.0916, 0.3286, 0.0238, -0.3669, 0.0322]],
- [8, 0.89, [-0.5283, 0.1198, 0.0870, -0.1141, 0.9189, -0.0150, 0.5474, 0.4319]],
- [3, 1000, [-0.5601, 0.2411, -0.5435, 0.1268, 1.1338, -0.2427, -0.0280, -1.0020]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_v1_5_fp16(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="runwayml/stable-diffusion-v1-5", fp16=True)
- latents = self.get_latents(seed, fp16=True)
- encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == latents.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, 4, [-0.7639, 0.0106, -0.1615, -0.3487, -0.0423, -0.7972, 0.0085, -0.4858]],
- [47, 0.55, [-0.6564, 0.0795, -1.9026, -0.6258, 1.8235, 1.2056, 1.2169, 0.9073]],
- [21, 0.89, [0.0327, 0.4399, -0.6358, 0.3417, 0.4120, -0.5621, -0.0397, -1.0430]],
- [9, 1000, [0.1600, 0.7303, -1.0556, -0.3515, -0.7440, -1.2037, -1.8149, -1.8931]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_inpaint(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="runwayml/stable-diffusion-inpainting")
- latents = self.get_latents(seed, shape=(4, 9, 64, 64))
- encoder_hidden_states = self.get_encoder_hidden_states(seed)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == (4, 4, 64, 64)
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [83, 4, [-0.1047, -1.7227, 0.1067, 0.0164, -0.5698, -0.4172, -0.1388, 1.1387]],
- [17, 0.55, [0.0975, -0.2856, -0.3508, -0.4600, 0.3376, 0.2930, -0.2747, -0.7026]],
- [8, 0.89, [-0.0952, 0.0183, -0.5825, -0.1981, 0.1131, 0.4668, -0.0395, -0.3486]],
- [3, 1000, [0.4790, 0.4949, -1.0732, -0.7158, 0.7959, -0.9478, 0.1105, -0.9741]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_compvis_sd_inpaint_fp16(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="runwayml/stable-diffusion-inpainting", fp16=True)
- latents = self.get_latents(seed, shape=(4, 9, 64, 64), fp16=True)
- encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == (4, 4, 64, 64)
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [83, 4, [0.1514, 0.0807, 0.1624, 0.1016, -0.1896, 0.0263, 0.0677, 0.2310]],
- [17, 0.55, [0.1164, -0.0216, 0.0170, 0.1589, -0.3120, 0.1005, -0.0581, -0.1458]],
- [8, 0.89, [-0.1758, -0.0169, 0.1004, -0.1411, 0.1312, 0.1103, -0.1996, 0.2139]],
- [3, 1000, [0.1214, 0.0352, -0.0731, -0.1562, -0.0994, -0.0906, -0.2340, -0.0539]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_stabilityai_sd_v2_fp16(self, seed, timestep, expected_slice):
- model = self.get_unet_model(model_id="stabilityai/stable-diffusion-2", fp16=True)
- latents = self.get_latents(seed, shape=(4, 4, 96, 96), fp16=True)
- encoder_hidden_states = self.get_encoder_hidden_states(seed, shape=(4, 77, 1024), fp16=True)
-
- timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device)
-
- with torch.no_grad():
- sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample
-
- assert sample.shape == latents.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky/test_kandinsky_prior.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky/test_kandinsky_prior.py
deleted file mode 100644
index 7b1acc9fc03e06fe8fbc4dbb93ad465e54201a77..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky/test_kandinsky_prior.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import torch
-from torch import nn
-from transformers import (
- CLIPImageProcessor,
- CLIPTextConfig,
- CLIPTextModelWithProjection,
- CLIPTokenizer,
- CLIPVisionConfig,
- CLIPVisionModelWithProjection,
-)
-
-from diffusers import KandinskyPriorPipeline, PriorTransformer, UnCLIPScheduler
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
-
-from ..test_pipelines_common import PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-class Dummies:
- @property
- def text_embedder_hidden_size(self):
- return 32
-
- @property
- def time_input_dim(self):
- return 32
-
- @property
- def block_out_channels_0(self):
- return self.time_input_dim
-
- @property
- def time_embed_dim(self):
- return self.time_input_dim * 4
-
- @property
- def cross_attention_dim(self):
- return 100
-
- @property
- def dummy_tokenizer(self):
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
- return tokenizer
-
- @property
- def dummy_text_encoder(self):
- torch.manual_seed(0)
- config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=self.text_embedder_hidden_size,
- projection_dim=self.text_embedder_hidden_size,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- return CLIPTextModelWithProjection(config)
-
- @property
- def dummy_prior(self):
- torch.manual_seed(0)
-
- model_kwargs = {
- "num_attention_heads": 2,
- "attention_head_dim": 12,
- "embedding_dim": self.text_embedder_hidden_size,
- "num_layers": 1,
- }
-
- model = PriorTransformer(**model_kwargs)
- # clip_std and clip_mean are initialized to 0, so PriorTransformer.post_process_latents would always return 0 - set clip_std to 1 so it does not return 0
- model.clip_std = nn.Parameter(torch.ones(model.clip_std.shape))
- return model
-
- @property
- def dummy_image_encoder(self):
- torch.manual_seed(0)
- config = CLIPVisionConfig(
- hidden_size=self.text_embedder_hidden_size,
- image_size=224,
- projection_dim=self.text_embedder_hidden_size,
- intermediate_size=37,
- num_attention_heads=4,
- num_channels=3,
- num_hidden_layers=5,
- patch_size=14,
- )
-
- model = CLIPVisionModelWithProjection(config)
- return model
-
- @property
- def dummy_image_processor(self):
- image_processor = CLIPImageProcessor(
- crop_size=224,
- do_center_crop=True,
- do_normalize=True,
- do_resize=True,
- image_mean=[0.48145466, 0.4578275, 0.40821073],
- image_std=[0.26862954, 0.26130258, 0.27577711],
- resample=3,
- size=224,
- )
-
- return image_processor
-
- def get_dummy_components(self):
- prior = self.dummy_prior
- image_encoder = self.dummy_image_encoder
- text_encoder = self.dummy_text_encoder
- tokenizer = self.dummy_tokenizer
- image_processor = self.dummy_image_processor
-
- scheduler = UnCLIPScheduler(
- variance_type="fixed_small_log",
- prediction_type="sample",
- num_train_timesteps=1000,
- clip_sample=True,
- clip_sample_range=10.0,
- )
-
- components = {
- "prior": prior,
- "image_encoder": image_encoder,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "scheduler": scheduler,
- "image_processor": image_processor,
- }
-
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "prompt": "horse",
- "generator": generator,
- "guidance_scale": 4.0,
- "num_inference_steps": 2,
- "output_type": "np",
- }
- return inputs
-
-
-class KandinskyPriorPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = KandinskyPriorPipeline
- params = ["prompt"]
- batch_params = ["prompt", "negative_prompt"]
- required_optional_params = [
- "num_images_per_prompt",
- "generator",
- "num_inference_steps",
- "latents",
- "negative_prompt",
- "guidance_scale",
- "output_type",
- "return_dict",
- ]
- test_xformers_attention = False
-
- def get_dummy_components(self):
- dummy = Dummies()
- return dummy.get_dummy_components()
-
- def get_dummy_inputs(self, device, seed=0):
- dummy = Dummies()
- return dummy.get_dummy_inputs(device=device, seed=seed)
-
- def test_kandinsky_prior(self):
- device = "cpu"
-
- components = self.get_dummy_components()
-
- pipe = self.pipeline_class(**components)
- pipe = pipe.to(device)
-
- pipe.set_progress_bar_config(disable=None)
-
- output = pipe(**self.get_dummy_inputs(device))
- image = output.image_embeds
-
- image_from_tuple = pipe(
- **self.get_dummy_inputs(device),
- return_dict=False,
- )[0]
-
- image_slice = image[0, -10:]
- image_from_tuple_slice = image_from_tuple[0, -10:]
-
- assert image.shape == (1, 32)
-
- expected_slice = np.array(
- [-0.0532, 1.7120, 0.3656, -1.0852, -0.8946, -1.1756, 0.4348, 0.2482, 0.5146, -0.1156]
- )
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
- @skip_mps
- def test_inference_batch_single_identical(self):
- test_max_difference = torch_device == "cpu"
- relax_max_difference = True
- test_mean_pixel_difference = False
-
- self._test_inference_batch_single_identical(
- test_max_difference=test_max_difference,
- relax_max_difference=relax_max_difference,
- test_mean_pixel_difference=test_mean_pixel_difference,
- )
-
- @skip_mps
- def test_attention_slicing_forward_pass(self):
- test_max_difference = torch_device == "cpu"
- test_mean_pixel_difference = False
-
- self._test_attention_slicing_forward_pass(
- test_max_difference=test_max_difference,
- test_mean_pixel_difference=test_mean_pixel_difference,
- )
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py
deleted file mode 100644
index 17f27d0d7804dc7d05e0be440306b749fcaf61d6..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import random
-import unittest
-
-import numpy as np
-import torch
-from PIL import Image
-
-from diffusers import (
- DDIMScheduler,
- KandinskyV22Img2ImgPipeline,
- KandinskyV22PriorPipeline,
- UNet2DConditionModel,
- VQModel,
-)
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
-from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
-
-
-enable_full_determinism()
-
-
-class Dummies:
- @property
- def text_embedder_hidden_size(self):
- return 32
-
- @property
- def time_input_dim(self):
- return 32
-
- @property
- def block_out_channels_0(self):
- return self.time_input_dim
-
- @property
- def time_embed_dim(self):
- return self.time_input_dim * 4
-
- @property
- def cross_attention_dim(self):
- return 32
-
- @property
- def dummy_unet(self):
- torch.manual_seed(0)
-
- model_kwargs = {
- "in_channels": 4,
- # Out channels is double the in channels because the model predicts both mean and variance
- "out_channels": 8,
- "addition_embed_type": "image",
- "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"),
- "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"),
- "mid_block_type": "UNetMidBlock2DSimpleCrossAttn",
- "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2),
- "layers_per_block": 1,
- "encoder_hid_dim": self.text_embedder_hidden_size,
- "encoder_hid_dim_type": "image_proj",
- "cross_attention_dim": self.cross_attention_dim,
- "attention_head_dim": 4,
- "resnet_time_scale_shift": "scale_shift",
- "class_embed_type": None,
- }
-
- model = UNet2DConditionModel(**model_kwargs)
- return model
-
- @property
- def dummy_movq_kwargs(self):
- return {
- "block_out_channels": [32, 64],
- "down_block_types": ["DownEncoderBlock2D", "AttnDownEncoderBlock2D"],
- "in_channels": 3,
- "latent_channels": 4,
- "layers_per_block": 1,
- "norm_num_groups": 8,
- "norm_type": "spatial",
- "num_vq_embeddings": 12,
- "out_channels": 3,
- "up_block_types": [
- "AttnUpDecoderBlock2D",
- "UpDecoderBlock2D",
- ],
- "vq_embed_dim": 4,
- }
-
- @property
- def dummy_movq(self):
- torch.manual_seed(0)
- model = VQModel(**self.dummy_movq_kwargs)
- return model
-
- def get_dummy_components(self):
- unet = self.dummy_unet
- movq = self.dummy_movq
-
- ddim_config = {
- "num_train_timesteps": 1000,
- "beta_schedule": "linear",
- "beta_start": 0.00085,
- "beta_end": 0.012,
- "clip_sample": False,
- "set_alpha_to_one": False,
- "steps_offset": 0,
- "prediction_type": "epsilon",
- "thresholding": False,
- }
-
- scheduler = DDIMScheduler(**ddim_config)
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "movq": movq,
- }
-
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed)).to(device)
- negative_image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed + 1)).to(
- device
- )
- # create init_image
- image = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device)
- image = image.cpu().permute(0, 2, 3, 1)[0]
- init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((256, 256))
-
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "image": init_image,
- "image_embeds": image_embeds,
- "negative_image_embeds": negative_image_embeds,
- "generator": generator,
- "height": 64,
- "width": 64,
- "num_inference_steps": 10,
- "guidance_scale": 7.0,
- "strength": 0.2,
- "output_type": "np",
- }
- return inputs
-
-
-class KandinskyV22Img2ImgPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = KandinskyV22Img2ImgPipeline
- params = ["image_embeds", "negative_image_embeds", "image"]
- batch_params = [
- "image_embeds",
- "negative_image_embeds",
- "image",
- ]
- required_optional_params = [
- "generator",
- "height",
- "width",
- "strength",
- "guidance_scale",
- "num_inference_steps",
- "return_dict",
- "guidance_scale",
- "num_images_per_prompt",
- "output_type",
- "return_dict",
- ]
- test_xformers_attention = False
-
- def get_dummy_components(self):
- dummies = Dummies()
- return dummies.get_dummy_components()
-
- def get_dummy_inputs(self, device, seed=0):
- dummies = Dummies()
- return dummies.get_dummy_inputs(device=device, seed=seed)
-
- def test_kandinsky_img2img(self):
- device = "cpu"
-
- components = self.get_dummy_components()
-
- pipe = self.pipeline_class(**components)
- pipe = pipe.to(device)
-
- pipe.set_progress_bar_config(disable=None)
-
- output = pipe(**self.get_dummy_inputs(device))
- image = output.images
-
- image_from_tuple = pipe(
- **self.get_dummy_inputs(device),
- return_dict=False,
- )[0]
-
- image_slice = image[0, -3:, -3:, -1]
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.5712, 0.5443, 0.4725, 0.6195, 0.5184, 0.4651, 0.4473, 0.4590, 0.5016])
- assert (
- np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- ), f" expected_slice {expected_slice}, but got {image_slice.flatten()}"
- assert (
- np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
- ), f" expected_slice {expected_slice}, but got {image_from_tuple_slice.flatten()}"
-
-
-@slow
-@require_torch_gpu
-class KandinskyV22Img2ImgPipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_kandinsky_img2img(self):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- "/kandinskyv22/kandinskyv22_img2img_frog.npy"
- )
-
- init_image = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
- )
- prompt = "A red cartoon frog, 4k"
-
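- # the prior pipeline maps the text prompt to image embeddings that condition the img2img decoder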
- pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
- "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
- )
- pipe_prior.to(torch_device)
-
- pipeline = KandinskyV22Img2ImgPipeline.from_pretrained(
- "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
- )
- pipeline = pipeline.to(torch_device)
-
- pipeline.set_progress_bar_config(disable=None)
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image_emb, zero_image_emb = pipe_prior(
- prompt,
- generator=generator,
- num_inference_steps=5,
- negative_prompt="",
- ).to_tuple()
-
- output = pipeline(
- image=init_image,
- image_embeds=image_emb,
- negative_image_embeds=zero_image_emb,
- generator=generator,
- num_inference_steps=100,
- height=768,
- width=768,
- strength=0.2,
- output_type="np",
- )
-
- image = output.images[0]
-
- assert image.shape == (768, 768, 3)
-
- assert_mean_pixel_difference(image, expected_image)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py
deleted file mode 100644
index 81d1baed5df65dcc0ee6a0848b559ce94761f489..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py
+++ /dev/null
@@ -1,256 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- EulerAncestralDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionModelEditingPipeline,
- UNet2DConditionModel,
-)
-from diffusers.utils import slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
-
-from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
-from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-@skip_mps
-class StableDiffusionModelEditingPipelineFastTests(
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
-):
- pipeline_class = StableDiffusionModelEditingPipeline
- params = TEXT_TO_IMAGE_PARAMS
- batch_params = TEXT_TO_IMAGE_BATCH_PARAMS
- image_params = TEXT_TO_IMAGE_IMAGE_PARAMS
- image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- scheduler = DDIMScheduler()
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- generator = torch.manual_seed(seed)
- inputs = {
- "prompt": "A field of roses",
- "generator": generator,
- # Setting height and width to None to prevent OOMs on CPU.
- "height": None,
- "width": None,
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_model_editing_default_case(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionModelEditingPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.4755, 0.5132, 0.4976, 0.3904, 0.3554, 0.4765, 0.5139, 0.5158, 0.4889])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_model_editing_negative_prompt(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionModelEditingPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- negative_prompt = "french fries"
- output = sd_pipe(**inputs, negative_prompt=negative_prompt)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.4992, 0.5101, 0.5004, 0.3949, 0.3604, 0.4735, 0.5216, 0.5204, 0.4913])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_model_editing_euler(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = EulerAncestralDiscreteScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
- )
- sd_pipe = StableDiffusionModelEditingPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.4747, 0.5372, 0.4779, 0.4982, 0.5543, 0.4816, 0.5238, 0.4904, 0.5027])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_model_editing_pndm(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = PNDMScheduler()
- sd_pipe = StableDiffusionModelEditingPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- # the pipeline does not support PNDM, so check that it raises an error.
- with self.assertRaises(ValueError):
- _ = sd_pipe(**inputs).images
-
- def test_inference_batch_single_identical(self):
- super().test_inference_batch_single_identical(expected_max_diff=5e-3)
-
- def test_attention_slicing_forward_pass(self):
- super().test_attention_slicing_forward_pass(expected_max_diff=5e-3)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusionModelEditingSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, seed=0):
- generator = torch.manual_seed(seed)
- inputs = {
- "prompt": "A field of roses",
- "generator": generator,
- "num_inference_steps": 3,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_model_editing_default(self):
- model_ckpt = "CompVis/stable-diffusion-v1-4"
- pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt, safety_checker=None)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
-
- expected_slice = np.array(
- [0.6749496, 0.6386453, 0.51443267, 0.66094905, 0.61921215, 0.5491332, 0.5744417, 0.58075106, 0.5174658]
- )
-
- assert np.abs(expected_slice - image_slice).max() < 1e-2
-
- # make sure image changes after editing
- pipe.edit_model("A pack of roses", "A pack of blue roses")
-
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
-
- assert np.abs(expected_slice - image_slice).max() > 1e-1
-
- def test_stable_diffusion_model_editing_pipeline_with_sequential_cpu_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- model_ckpt = "CompVis/stable-diffusion-v1-4"
- scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
- pipe = StableDiffusionModelEditingPipeline.from_pretrained(
- model_ckpt, scheduler=scheduler, safety_checker=None
- )
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing(1)
- pipe.enable_sequential_cpu_offload()
-
- inputs = self.get_inputs()
- _ = pipe(**inputs)
-
- mem_bytes = torch.cuda.max_memory_allocated()
- # make sure that less than 4.4 GB is allocated
- assert mem_bytes < 4.4 * 10**9
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py
deleted file mode 100644
index 131e9402c7eb73f795bb5f260a1c8ae7e8a0d7f9..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py
+++ /dev/null
@@ -1,409 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- EulerAncestralDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionPanoramaPipeline,
- UNet2DConditionModel,
-)
-from diffusers.utils import slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
-
-from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
-from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-@skip_mps
-class StableDiffusionPanoramaPipelineFastTests(PipelineLatentTesterMixin, PipelineTesterMixin, unittest.TestCase):
- pipeline_class = StableDiffusionPanoramaPipeline
- params = TEXT_TO_IMAGE_PARAMS
- batch_params = TEXT_TO_IMAGE_BATCH_PARAMS
- image_params = TEXT_TO_IMAGE_IMAGE_PARAMS
- image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=1,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- scheduler = DDIMScheduler()
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- generator = torch.manual_seed(seed)
- inputs = {
- "prompt": "a photo of the dolomites",
- "generator": generator,
- # Setting height and width to None to prevent OOMs on CPU.
- "height": None,
- "width": None,
- "num_inference_steps": 1,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_panorama_default_case(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6186, 0.5374, 0.4915, 0.4135, 0.4114, 0.4563, 0.5128, 0.4977, 0.4757])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_circular_padding_case(self):
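- # circular_padding wraps the panorama horizontally so the left and right edges stay seamless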
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs, circular_padding=True).images
- image_slice = image[0, -3:, -3:, -1]
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6127, 0.6299, 0.4595, 0.4051, 0.4543, 0.3925, 0.5510, 0.5693, 0.5031])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- # override to speed the overall test timing up.
- def test_inference_batch_consistent(self):
- super().test_inference_batch_consistent(batch_sizes=[1, 2])
-
- # override to speed the overall test timing up.
- def test_inference_batch_single_identical(self):
- super().test_inference_batch_single_identical(batch_size=2, expected_max_diff=3.25e-3)
-
- def test_stable_diffusion_panorama_negative_prompt(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- negative_prompt = "french fries"
- output = sd_pipe(**inputs, negative_prompt=negative_prompt)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6187, 0.5375, 0.4915, 0.4136, 0.4114, 0.4563, 0.5128, 0.4976, 0.4757])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_views_batch(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs, view_batch_size=2)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6187, 0.5375, 0.4915, 0.4136, 0.4114, 0.4563, 0.5128, 0.4976, 0.4757])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_views_batch_circular_padding(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs, circular_padding=True, view_batch_size=2)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6127, 0.6299, 0.4595, 0.4051, 0.4543, 0.3925, 0.5510, 0.5693, 0.5031])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_euler(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = EulerAncestralDiscreteScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
- )
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.4024, 0.6510, 0.4901, 0.5378, 0.5813, 0.5622, 0.4795, 0.4467, 0.4952])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_pndm(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = PNDMScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True
- )
- sd_pipe = StableDiffusionPanoramaPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
-
- expected_slice = np.array([0.6391, 0.6291, 0.4861, 0.5134, 0.5552, 0.4578, 0.5032, 0.5023, 0.4539])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
-
-@slow
-@require_torch_gpu
-class StableDiffusionPanoramaSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, seed=0):
- generator = torch.manual_seed(seed)
- inputs = {
- "prompt": "a photo of the dolomites",
- "generator": generator,
- "num_inference_steps": 3,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_panorama_default(self):
- model_ckpt = "stabilityai/stable-diffusion-2-base"
- scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
- pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, safety_checker=None)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 2048, 3)
-
- expected_slice = np.array(
- [
- 0.36968392,
- 0.27025372,
- 0.32446766,
- 0.28379387,
- 0.36363274,
- 0.30733347,
- 0.27100027,
- 0.27054125,
- 0.25536096,
- ]
- )
-
- assert np.abs(expected_slice - image_slice).max() < 1e-2
-
- def test_stable_diffusion_panorama_k_lms(self):
- pipe = StableDiffusionPanoramaPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base", safety_checker=None
- )
- pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 2048, 3)
-
- expected_slice = np.array(
- [
- [
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- 0.0,
- ]
- ]
- )
-
- assert np.abs(expected_slice - image_slice).max() < 1e-3
-
- def test_stable_diffusion_panorama_intermediate_state(self):
- number_of_steps = 0
-
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
- callback_fn.has_been_called = True
- nonlocal number_of_steps
- number_of_steps += 1
- if step == 1:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 256)
- latents_slice = latents[0, -3:, -3:, -1]
-
- expected_slice = np.array(
- [
- 0.18681869,
- 0.33907816,
- 0.5361276,
- 0.14432865,
- -0.02856611,
- -0.73941123,
- 0.23397987,
- 0.47322682,
- -0.37823164,
- ]
- )
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
- elif step == 2:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 256)
- latents_slice = latents[0, -3:, -3:, -1]
-
- expected_slice = np.array(
- [
- 0.18539645,
- 0.33987248,
- 0.5378559,
- 0.14437142,
- -0.02455261,
- -0.7338317,
- 0.23990755,
- 0.47356272,
- -0.3786505,
- ]
- )
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
-
- callback_fn.has_been_called = False
-
- model_ckpt = "stabilityai/stable-diffusion-2-base"
- scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
- pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, safety_checker=None)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- pipe(**inputs, callback=callback_fn, callback_steps=1)
- assert callback_fn.has_been_called
- assert number_of_steps == 3
-
- def test_stable_diffusion_panorama_pipeline_with_sequential_cpu_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- model_ckpt = "stabilityai/stable-diffusion-2-base"
- scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
- pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, safety_checker=None)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing(1)
- pipe.enable_sequential_cpu_offload()
-
- inputs = self.get_inputs()
- _ = pipe(**inputs)
-
- mem_bytes = torch.cuda.max_memory_allocated()
- # make sure that less than 5.5 GB is allocated
- assert mem_bytes < 5.5 * 10**9
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipeline_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipeline_utils.py
deleted file mode 100644
index 51d987d8bb1151862f910822eb2c173ce4ff313c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipeline_utils.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import unittest
-
-from diffusers.pipelines.pipeline_utils import is_safetensors_compatible
-
-
-class IsSafetensorsCompatibleTests(unittest.TestCase):
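- # the tests below expect True only when every .bin weight file has a matching .safetensors counterpart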
- def test_all_is_compatible(self):
- filenames = [
- "safety_checker/pytorch_model.bin",
- "safety_checker/model.safetensors",
- "vae/diffusion_pytorch_model.bin",
- "vae/diffusion_pytorch_model.safetensors",
- "text_encoder/pytorch_model.bin",
- "text_encoder/model.safetensors",
- "unet/diffusion_pytorch_model.bin",
- "unet/diffusion_pytorch_model.safetensors",
- ]
- self.assertTrue(is_safetensors_compatible(filenames))
-
- def test_diffusers_model_is_compatible(self):
- filenames = [
- "unet/diffusion_pytorch_model.bin",
- "unet/diffusion_pytorch_model.safetensors",
- ]
- self.assertTrue(is_safetensors_compatible(filenames))
-
- def test_diffusers_model_is_not_compatible(self):
- filenames = [
- "safety_checker/pytorch_model.bin",
- "safety_checker/model.safetensors",
- "vae/diffusion_pytorch_model.bin",
- "vae/diffusion_pytorch_model.safetensors",
- "text_encoder/pytorch_model.bin",
- "text_encoder/model.safetensors",
- "unet/diffusion_pytorch_model.bin",
- # Removed: 'unet/diffusion_pytorch_model.safetensors',
- ]
- self.assertFalse(is_safetensors_compatible(filenames))
-
- def test_transformer_model_is_compatible(self):
- filenames = [
- "text_encoder/pytorch_model.bin",
- "text_encoder/model.safetensors",
- ]
- self.assertTrue(is_safetensors_compatible(filenames))
-
- def test_transformer_model_is_not_compatible(self):
- filenames = [
- "safety_checker/pytorch_model.bin",
- "safety_checker/model.safetensors",
- "vae/diffusion_pytorch_model.bin",
- "vae/diffusion_pytorch_model.safetensors",
- "text_encoder/pytorch_model.bin",
- # Removed: 'text_encoder/model.safetensors',
- "unet/diffusion_pytorch_model.bin",
- "unet/diffusion_pytorch_model.safetensors",
- ]
- self.assertFalse(is_safetensors_compatible(filenames))
-
- def test_all_is_compatible_variant(self):
- filenames = [
- "safety_checker/pytorch_model.fp16.bin",
- "safety_checker/model.fp16.safetensors",
- "vae/diffusion_pytorch_model.fp16.bin",
- "vae/diffusion_pytorch_model.fp16.safetensors",
- "text_encoder/pytorch_model.fp16.bin",
- "text_encoder/model.fp16.safetensors",
- "unet/diffusion_pytorch_model.fp16.bin",
- "unet/diffusion_pytorch_model.fp16.safetensors",
- ]
- variant = "fp16"
- self.assertTrue(is_safetensors_compatible(filenames, variant=variant))
-
- def test_diffusers_model_is_compatible_variant(self):
- filenames = [
- "unet/diffusion_pytorch_model.fp16.bin",
- "unet/diffusion_pytorch_model.fp16.safetensors",
- ]
- variant = "fp16"
- self.assertTrue(is_safetensors_compatible(filenames, variant=variant))
-
- def test_diffusers_model_is_compatible_variant_partial(self):
- # pass variant but use the non-variant filenames
- filenames = [
- "unet/diffusion_pytorch_model.bin",
- "unet/diffusion_pytorch_model.safetensors",
- ]
- variant = "fp16"
- self.assertTrue(is_safetensors_compatible(filenames, variant=variant))
-
- def test_diffusers_model_is_not_compatible_variant(self):
- filenames = [
- "safety_checker/pytorch_model.fp16.bin",
- "safety_checker/model.fp16.safetensors",
- "vae/diffusion_pytorch_model.fp16.bin",
- "vae/diffusion_pytorch_model.fp16.safetensors",
- "text_encoder/pytorch_model.fp16.bin",
- "text_encoder/model.fp16.safetensors",
- "unet/diffusion_pytorch_model.fp16.bin",
- # Removed: 'unet/diffusion_pytorch_model.fp16.safetensors',
- ]
- variant = "fp16"
- self.assertFalse(is_safetensors_compatible(filenames, variant=variant))
-
- def test_transformer_model_is_compatible_variant(self):
- filenames = [
- "text_encoder/pytorch_model.fp16.bin",
- "text_encoder/model.fp16.safetensors",
- ]
- variant = "fp16"
- self.assertTrue(is_safetensors_compatible(filenames, variant=variant))
-
- def test_transformer_model_is_compatible_variant_partial(self):
- # pass variant but use the non-variant filenames
- filenames = [
- "text_encoder/pytorch_model.bin",
- "text_encoder/model.safetensors",
- ]
- variant = "fp16"
- self.assertTrue(is_safetensors_compatible(filenames, variant=variant))
-
- def test_transformer_model_is_not_compatible_variant(self):
- filenames = [
- "safety_checker/pytorch_model.fp16.bin",
- "safety_checker/model.fp16.safetensors",
- "vae/diffusion_pytorch_model.fp16.bin",
- "vae/diffusion_pytorch_model.fp16.safetensors",
- "text_encoder/pytorch_model.fp16.bin",
- # Removed: 'text_encoder/model.fp16.safetensors',
- "unet/diffusion_pytorch_model.fp16.bin",
- "unet/diffusion_pytorch_model.fp16.safetensors",
- ]
- variant = "fp16"
- self.assertFalse(is_safetensors_compatible(filenames, variant=variant))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py b/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py
deleted file mode 100644
index fb2f2d1e13b8c97dbf5f785dadebcccf874ff7be..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py
+++ /dev/null
@@ -1,37 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- type='FasterRCNN',
- pretrained='torchvision://resnet50',
- rpn_head=dict(
- type='RPNHead',
- anchor_generator=dict(
- type='LegacyAnchorGenerator',
- center_offset=0.5,
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
- roi_head=dict(
- type='StandardRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(
- type='RoIAlign',
- output_size=7,
- sampling_ratio=2,
- aligned=False),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn_proposal=dict(max_per_img=2000),
- rcnn=dict(assigner=dict(match_low_quality=True))))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index 16edd99de295161a3c246243e8c482ede4e5bdae..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,30 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco.py'
-
-model = dict(
- roi_head=dict(
- type='PISARoIHead',
- bbox_head=dict(
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
- train_cfg=dict(
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- sampler=dict(
- type='ScoreHLRSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- k=0.5,
- bias=0.),
- isr=dict(k=2, bias=0),
- carl=dict(k=1, bias=0.2))),
- test_cfg=dict(
- rpn=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0)))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/psanet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/psanet_r50-d8.py
deleted file mode 100644
index 689513fa9d2a40f14bf0ae4ae61f38f0dcc1b3da..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/psanet_r50-d8.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='PSAHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- mask_size=(97, 97),
- psa_type='bi-direction',
- compact=False,
- shrink_factor=2,
- normalization_factor=1.0,
- psa_softmax=True,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Apex-X/ROOPOK/roop/__init__.py b/spaces/Apex-X/ROOPOK/roop/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Apex-X/Tm/roop/processors/frame/face_enhancer.py b/spaces/Apex-X/Tm/roop/processors/frame/face_enhancer.py
deleted file mode 100644
index 3ff92ce9d38420e273970c0777a108b14e7fd26b..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/Tm/roop/processors/frame/face_enhancer.py
+++ /dev/null
@@ -1,81 +0,0 @@
-from typing import Any, List, Callable
-import cv2
-import threading
-import gfpgan
-
-import roop.globals
-import roop.processors.frame.core
-from roop.core import update_status
-from roop.face_analyser import get_one_face
-from roop.typing import Frame, Face
-from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video
-
-FACE_ENHANCER = None
-THREAD_SEMAPHORE = threading.Semaphore()
-THREAD_LOCK = threading.Lock()
-NAME = 'ROOP.FACE-ENHANCER'
-
-
-def get_face_enhancer() -> Any:
- global FACE_ENHANCER
-
- with THREAD_LOCK:
- if FACE_ENHANCER is None:
- model_path = resolve_relative_path('../models/GFPGANv1.4.pth')
- # todo: set models path https://github.com/TencentARC/GFPGAN/issues/399
- FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
- return FACE_ENHANCER
-
-
-def pre_check() -> bool:
- download_directory_path = resolve_relative_path('../models')
- conditional_download(download_directory_path, ['https://huggingface.co/henryruhs/roop/resolve/main/GFPGANv1.4.pth'])
- return True
-
-
-def pre_start() -> bool:
- if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path):
- update_status('Select an image or video for target path.', NAME)
- return False
- return True
-
-
-def post_process() -> None:
- global FACE_ENHANCER
-
- FACE_ENHANCER = None
-
-
-def enhance_face(temp_frame: Frame) -> Frame:
- with THREAD_SEMAPHORE:
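-        # GFPGANer.enhance returns (cropped_faces, restored_faces, restored_img); keep only the restored frame.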
- _, _, temp_frame = get_face_enhancer().enhance(
- temp_frame,
- paste_back=True
- )
- return temp_frame
-
-
-def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
- target_face = get_one_face(temp_frame)
- if target_face:
- temp_frame = enhance_face(temp_frame)
- return temp_frame
-
-
-def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None:
- for temp_frame_path in temp_frame_paths:
- temp_frame = cv2.imread(temp_frame_path)
- result = process_frame(None, temp_frame)
- cv2.imwrite(temp_frame_path, result)
- if update:
- update()
-
-
-def process_image(source_path: str, target_path: str, output_path: str) -> None:
- target_frame = cv2.imread(target_path)
- result = process_frame(None, target_frame)
- cv2.imwrite(output_path, result)
-
-
-def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
- roop.processors.frame.core.process_video(None, temp_frame_paths, process_frames)
diff --git a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/app_utils.py b/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/app_utils.py
deleted file mode 100644
index 1dca9f1b020b274e6c2596cd4052bb797f59becf..0000000000000000000000000000000000000000
--- a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/app_utils.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import os
-import requests
-import random
-import _thread as thread
-from uuid import uuid4
-import urllib
-
-import numpy as np
-import skimage
-from skimage.filters import gaussian
-from PIL import Image
-
-def compress_image(image, path_original):
- size = 1920, 1080
- width = 1920
- height = 1080
-
- name = os.path.basename(path_original).split('.')
- first_name = os.path.join(os.path.dirname(path_original), name[0] + '.jpg')
-
- if image.size[0] > width and image.size[1] > height:
- image.thumbnail(size, Image.ANTIALIAS)
- image.save(first_name, quality=85)
- elif image.size[0] > width:
- wpercent = (width/float(image.size[0]))
- height = int((float(image.size[1])*float(wpercent)))
- image = image.resize((width,height), Image.ANTIALIAS)
- image.save(first_name,quality=85)
- elif image.size[1] > height:
- wpercent = (height/float(image.size[1]))
- width = int((float(image.size[0])*float(wpercent)))
- image = image.resize((width,height), Image.ANTIALIAS)
- image.save(first_name, quality=85)
- else:
- image.save(first_name, quality=85)
-
-
-def convertToJPG(path_original):
- img = Image.open(path_original)
- name = os.path.basename(path_original).split('.')
- first_name = os.path.join(os.path.dirname(path_original), name[0] + '.jpg')
-
- if img.format == "JPEG":
- image = img.convert('RGB')
- compress_image(image, path_original)
- img.close()
-
- elif img.format == "GIF":
- i = img.convert("RGBA")
- bg = Image.new("RGBA", i.size)
- image = Image.composite(i, bg, i)
- compress_image(image, path_original)
- img.close()
-
- elif img.format == "PNG":
- try:
- image = Image.new("RGB", img.size, (255,255,255))
- image.paste(img,img)
- compress_image(image, path_original)
- except ValueError:
- image = img.convert('RGB')
- compress_image(image, path_original)
-
- img.close()
-
- elif img.format == "BMP":
- image = img.convert('RGB')
- compress_image(image, path_original)
- img.close()
-
-
-
-def blur(image, x0, x1, y0, y1, sigma=1, multichannel=True):
- y0, y1 = min(y0, y1), max(y0, y1)
- x0, x1 = min(x0, x1), max(x0, x1)
- im = image.copy()
- sub_im = im[y0:y1,x0:x1].copy()
- blur_sub_im = gaussian(sub_im, sigma=sigma, multichannel=multichannel)
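-    # gaussian() returns a float image in [0, 1]; scale back to the 0-255 range before writing it into the original frame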
- blur_sub_im = np.round(255 * blur_sub_im)
- im[y0:y1,x0:x1] = blur_sub_im
- return im
-
-
-
-def download(url, filename):
- data = requests.get(url).content
- with open(filename, 'wb') as handler:
- handler.write(data)
-
- return filename
-
-
-def generate_random_filename(upload_directory, extension):
- filename = str(uuid4())
- filename = os.path.join(upload_directory, filename + "." + extension)
- return filename
-
-
-def clean_me(filename):
- if os.path.exists(filename):
- os.remove(filename)
-
-
-def clean_all(files):
- for me in files:
- clean_me(me)
-
-
-def create_directory(path):
- os.makedirs(os.path.dirname(path), exist_ok=True)
-
-
-def get_model_bin(url, output_path):
- # print('Getting model dir: ', output_path)
- if not os.path.exists(output_path):
- create_directory(output_path)
-
- urllib.request.urlretrieve(url, output_path)
-
- # cmd = "wget -O %s %s" % (output_path, url)
- # print(cmd)
- # os.system(cmd)
-
- return output_path
-
-
-#model_list = [(url, output_path), (url, output_path)]
-def get_multi_model_bin(model_list):
- for m in model_list:
- thread.start_new_thread(get_model_bin, m)
-
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/data/audio.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/data/audio.py
deleted file mode 100644
index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/data/audio.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write).
-We rely on the av library for faster reads when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
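-            # drop any samples decoded before the requested start (the seek above lands slightly early on purpose)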
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
-        loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
-        log_clipping (bool): If True, basic logging on stderr when clipping still
-            occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
diff --git a/spaces/Audio-AGI/AudioSep/models/base.py b/spaces/Audio-AGI/AudioSep/models/base.py
deleted file mode 100644
index 6b70dd804dcf9b9cf3a9aacd84c707852bab2d7c..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/models/base.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import torch.nn as nn
-import torch
-import numpy as np
-import torch.nn.functional as F
-import math
-from torchlibrosa.stft import magphase
-
-
-def init_layer(layer):
- """Initialize a Linear or Convolutional layer. """
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, "bias"):
- if layer.bias is not None:
- layer.bias.data.fill_(0.0)
-
-
-def init_bn(bn):
- """Initialize a Batchnorm layer. """
- bn.bias.data.fill_(0.0)
- bn.weight.data.fill_(1.0)
-
-
-def init_embedding(layer):
-    """Initialize an Embedding layer. """
- nn.init.uniform_(layer.weight, -1., 1.)
-
- if hasattr(layer, 'bias'):
- if layer.bias is not None:
- layer.bias.data.fill_(0.)
-
-
-def init_gru(rnn):
- """Initialize a GRU layer. """
-
- def _concat_init(tensor, init_funcs):
- (length, fan_out) = tensor.shape
- fan_in = length // len(init_funcs)
-
- for (i, init_func) in enumerate(init_funcs):
- init_func(tensor[i * fan_in : (i + 1) * fan_in, :])
-
- def _inner_uniform(tensor):
- fan_in = nn.init._calculate_correct_fan(tensor, "fan_in")
- nn.init.uniform_(tensor, -math.sqrt(3 / fan_in), math.sqrt(3 / fan_in))
-
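-    # PyTorch GRU weights stack the reset, update and new gates along dim 0; initialize each gate's chunk separately.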
- for i in range(rnn.num_layers):
- _concat_init(
- getattr(rnn, "weight_ih_l{}".format(i)),
- [_inner_uniform, _inner_uniform, _inner_uniform],
- )
- torch.nn.init.constant_(getattr(rnn, "bias_ih_l{}".format(i)), 0)
-
- _concat_init(
- getattr(rnn, "weight_hh_l{}".format(i)),
- [_inner_uniform, _inner_uniform, nn.init.orthogonal_],
- )
- torch.nn.init.constant_(getattr(rnn, "bias_hh_l{}".format(i)), 0)
-
-
-def act(x, activation):
- if activation == "relu":
- return F.relu_(x)
-
- elif activation == "leaky_relu":
- return F.leaky_relu_(x, negative_slope=0.01)
-
- elif activation == "swish":
- return x * torch.sigmoid(x)
-
- else:
- raise Exception("Incorrect activation!")
-
-
-class Base:
- def __init__(self):
- pass
-
- def spectrogram(self, input, eps=0.):
- (real, imag) = self.stft(input)
- return torch.clamp(real ** 2 + imag ** 2, eps, np.inf) ** 0.5
-
- def spectrogram_phase(self, input, eps=0.):
- (real, imag) = self.stft(input)
- mag = torch.clamp(real ** 2 + imag ** 2, eps, np.inf) ** 0.5
- cos = real / mag
- sin = imag / mag
- return mag, cos, sin
-
-
- def wav_to_spectrogram_phase(self, input, eps=1e-10):
- """Waveform to spectrogram.
-
- Args:
- input: (batch_size, segment_samples, channels_num)
-
- Outputs:
- output: (batch_size, channels_num, time_steps, freq_bins)
- """
- sp_list = []
- cos_list = []
- sin_list = []
- channels_num = input.shape[1]
- for channel in range(channels_num):
- mag, cos, sin = self.spectrogram_phase(input[:, channel, :], eps=eps)
- sp_list.append(mag)
- cos_list.append(cos)
- sin_list.append(sin)
-
- sps = torch.cat(sp_list, dim=1)
- coss = torch.cat(cos_list, dim=1)
- sins = torch.cat(sin_list, dim=1)
- return sps, coss, sins
-
- def wav_to_spectrogram(self, input, eps=0.):
- """Waveform to spectrogram.
-
- Args:
- input: (batch_size, segment_samples, channels_num)
-
- Outputs:
- output: (batch_size, channels_num, time_steps, freq_bins)
- """
- sp_list = []
- channels_num = input.shape[1]
- for channel in range(channels_num):
- sp_list.append(self.spectrogram(input[:, channel, :], eps=eps))
-
- output = torch.cat(sp_list, dim=1)
- return output
-
-
- def spectrogram_to_wav(self, input, spectrogram, length=None):
- """Spectrogram to waveform.
-
- Args:
- input: (batch_size, segment_samples, channels_num)
- spectrogram: (batch_size, channels_num, time_steps, freq_bins)
-
- Outputs:
- output: (batch_size, segment_samples, channels_num)
- """
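-        # Reuse the mixture's phase (cos, sin) with the estimated magnitude, then invert with the ISTFT.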
- channels_num = input.shape[1]
- wav_list = []
- for channel in range(channels_num):
- (real, imag) = self.stft(input[:, channel, :])
- (_, cos, sin) = magphase(real, imag)
- wav_list.append(self.istft(spectrogram[:, channel : channel + 1, :, :] * cos,
- spectrogram[:, channel : channel + 1, :, :] * sin, length))
-
- output = torch.stack(wav_list, dim=1)
- return output
diff --git a/spaces/AvaterClasher/Food_Classifier_Refined_MONI/model.py b/spaces/AvaterClasher/Food_Classifier_Refined_MONI/model.py
deleted file mode 100644
index 2060a8a6ae4f6692cc634c067a876cb1daea285b..0000000000000000000000000000000000000000
--- a/spaces/AvaterClasher/Food_Classifier_Refined_MONI/model.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import torch
-import torchvision
-
-from torch import nn
-
-def create_effnetb2_model(num_classes:int=3, # default output classes = 3 (pizza, steak, sushi)
- seed:int=42):
- # 1, 2, 3 Create EffNetB2 pretrained weights, transforms and model
- weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
- transforms = weights.transforms()
- model = torchvision.models.efficientnet_b2(weights=weights)
-
- # 4. Freeze all layers in the base model
- for param in model.parameters():
- param.requires_grad = False
-
- # 5. Change classifier head with random seed for reproducibility
- torch.manual_seed(seed)
- model.classifier = nn.Sequential(
- nn.Dropout(p=0.3, inplace=True),
- nn.Linear(in_features=1408, out_features=num_classes)
- )
-
- return model, transforms
diff --git a/spaces/Awesimo/jojogan/e4e/models/encoders/model_irse.py b/spaces/Awesimo/jojogan/e4e/models/encoders/model_irse.py
deleted file mode 100644
index 6a94d67542f961ff6533f0335cf4cb0fa54024fb..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/models/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from e4e.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
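-        # The IR backbone downsamples by 16x, so the final feature map is 7x7 for 112 input and 14x14 for 224.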
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py
deleted file mode 100644
index 652a34a9aef2d4004f46ad7814befe6d1c230bc4..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py
+++ /dev/null
@@ -1,614 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Implement many useful :class:`Augmentation`.
-"""
-import numpy as np
-import sys
-from typing import Tuple
-import torch
-from fvcore.transforms.transform import (
- BlendTransform,
- CropTransform,
- HFlipTransform,
- NoOpTransform,
- PadTransform,
- Transform,
- TransformList,
- VFlipTransform,
-)
-from PIL import Image
-
-from .augmentation import Augmentation, _transform_to_aug
-from .transform import ExtentTransform, ResizeTransform, RotationTransform
-
-__all__ = [
- "FixedSizeCrop",
- "RandomApply",
- "RandomBrightness",
- "RandomContrast",
- "RandomCrop",
- "RandomExtent",
- "RandomFlip",
- "RandomSaturation",
- "RandomLighting",
- "RandomRotation",
- "Resize",
- "ResizeScale",
- "ResizeShortestEdge",
- "RandomCrop_CategoryAreaConstraint",
-]
-
-
-class RandomApply(Augmentation):
- """
- Randomly apply an augmentation with a given probability.
- """
-
- def __init__(self, tfm_or_aug, prob=0.5):
- """
- Args:
- tfm_or_aug (Transform, Augmentation): the transform or augmentation
- to be applied. It can either be a `Transform` or `Augmentation`
- instance.
- prob (float): probability between 0.0 and 1.0 that
- the wrapper transformation is applied
- """
- super().__init__()
- self.aug = _transform_to_aug(tfm_or_aug)
-        assert 0.0 <= prob <= 1.0, f"Probability must be between 0.0 and 1.0 (given: {prob})"
- self.prob = prob
-
- def get_transform(self, *args):
- do = self._rand_range() < self.prob
- if do:
- return self.aug.get_transform(*args)
- else:
- return NoOpTransform()
-
- def __call__(self, aug_input):
- do = self._rand_range() < self.prob
- if do:
- return self.aug(aug_input)
- else:
- return NoOpTransform()
-
-
-class RandomFlip(Augmentation):
- """
- Flip the image horizontally or vertically with the given probability.
- """
-
- def __init__(self, prob=0.5, *, horizontal=True, vertical=False):
- """
- Args:
- prob (float): probability of flip.
- horizontal (boolean): whether to apply horizontal flipping
- vertical (boolean): whether to apply vertical flipping
- """
- super().__init__()
-
- if horizontal and vertical:
- raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.")
- if not horizontal and not vertical:
- raise ValueError("At least one of horiz or vert has to be True!")
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- do = self._rand_range() < self.prob
- if do:
- if self.horizontal:
- return HFlipTransform(w)
- elif self.vertical:
- return VFlipTransform(h)
- else:
- return NoOpTransform()
-
-
-class Resize(Augmentation):
- """Resize image to a fixed target size"""
-
- def __init__(self, shape, interp=Image.BILINEAR):
- """
- Args:
- shape: (h, w) tuple or a int
- interp: PIL interpolation method
- """
- if isinstance(shape, int):
- shape = (shape, shape)
- shape = tuple(shape)
- self._init(locals())
-
- def get_transform(self, image):
- return ResizeTransform(
- image.shape[0], image.shape[1], self.shape[0], self.shape[1], self.interp
- )
-
-
-class ResizeShortestEdge(Augmentation):
- """
- Resize the image while keeping the aspect ratio unchanged.
- It attempts to scale the shorter edge to the given `short_edge_length`,
- as long as the longer edge does not exceed `max_size`.
- If `max_size` is reached, then downscale so that the longer edge does not exceed max_size.
- """
-
- @torch.jit.unused
- def __init__(
- self, short_edge_length, max_size=sys.maxsize, sample_style="range", interp=Image.BILINEAR
- ):
- """
- Args:
- short_edge_length (list[int]): If ``sample_style=="range"``,
- a [min, max] interval from which to sample the shortest edge length.
- If ``sample_style=="choice"``, a list of shortest edge lengths to sample from.
- max_size (int): maximum allowed longest edge length.
- sample_style (str): either "range" or "choice".
- """
- super().__init__()
- assert sample_style in ["range", "choice"], sample_style
-
- self.is_range = sample_style == "range"
- if isinstance(short_edge_length, int):
- short_edge_length = (short_edge_length, short_edge_length)
- if self.is_range:
- assert len(short_edge_length) == 2, (
- "short_edge_length must be two values using 'range' sample style."
- f" Got {short_edge_length}!"
- )
- self._init(locals())
-
- @torch.jit.unused
- def get_transform(self, image):
- h, w = image.shape[:2]
- if self.is_range:
- size = np.random.randint(self.short_edge_length[0], self.short_edge_length[1] + 1)
- else:
- size = np.random.choice(self.short_edge_length)
- if size == 0:
- return NoOpTransform()
-
- newh, neww = ResizeShortestEdge.get_output_shape(h, w, size, self.max_size)
- return ResizeTransform(h, w, newh, neww, self.interp)
-
- @staticmethod
- def get_output_shape(
- oldh: int, oldw: int, short_edge_length: int, max_size: int
- ) -> Tuple[int, int]:
- """
- Compute the output size given input size and target short edge length.
- """
- h, w = oldh, oldw
- size = short_edge_length * 1.0
- scale = size / min(h, w)
- if h < w:
- newh, neww = size, scale * w
- else:
- newh, neww = scale * h, size
- if max(newh, neww) > max_size:
- scale = max_size * 1.0 / max(newh, neww)
- newh = newh * scale
- neww = neww * scale
- neww = int(neww + 0.5)
- newh = int(newh + 0.5)
- return (newh, neww)
-
-
-class ResizeScale(Augmentation):
- """
- Takes target size as input and randomly scales the given target size between `min_scale`
- and `max_scale`. It then scales the input image such that it fits inside the scaled target
- box, keeping the aspect ratio constant.
- This implements the resize part of the Google's 'resize_and_crop' data augmentation:
- https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/input_utils.py#L127
- """
-
- def __init__(
- self,
- min_scale: float,
- max_scale: float,
- target_height: int,
- target_width: int,
- interp: int = Image.BILINEAR,
- ):
- """
- Args:
- min_scale: minimum image scale range.
- max_scale: maximum image scale range.
- target_height: target image height.
- target_width: target image width.
- interp: image interpolation method.
- """
- super().__init__()
- self._init(locals())
-
- def _get_resize(self, image: np.ndarray, scale: float) -> Transform:
- input_size = image.shape[:2]
-
- # Compute new target size given a scale.
- target_size = (self.target_height, self.target_width)
- target_scale_size = np.multiply(target_size, scale)
-
- # Compute actual rescaling applied to input image and output size.
- output_scale = np.minimum(
- target_scale_size[0] / input_size[0], target_scale_size[1] / input_size[1]
- )
- output_size = np.round(np.multiply(input_size, output_scale)).astype(int)
-
- return ResizeTransform(
- input_size[0], input_size[1], output_size[0], output_size[1], self.interp
- )
-
- def get_transform(self, image: np.ndarray) -> Transform:
- random_scale = np.random.uniform(self.min_scale, self.max_scale)
- return self._get_resize(image, random_scale)
-
-
-class RandomRotation(Augmentation):
- """
- This method returns a copy of this image, rotated the given
- number of degrees counter clockwise around the given center.
- """
-
- def __init__(self, angle, expand=True, center=None, sample_style="range", interp=None):
- """
- Args:
- angle (list[float]): If ``sample_style=="range"``,
- a [min, max] interval from which to sample the angle (in degrees).
- If ``sample_style=="choice"``, a list of angles to sample from
- expand (bool): choose if the image should be resized to fit the whole
- rotated image (default), or simply cropped
- center (list[[float, float]]): If ``sample_style=="range"``,
- a [[minx, miny], [maxx, maxy]] relative interval from which to sample the center,
- [0, 0] being the top left of the image and [1, 1] the bottom right.
- If ``sample_style=="choice"``, a list of centers to sample from
- Default: None, which means that the center of rotation is the center of the image
- center has no effect if expand=True because it only affects shifting
- """
- super().__init__()
- assert sample_style in ["range", "choice"], sample_style
- self.is_range = sample_style == "range"
- if isinstance(angle, (float, int)):
- angle = (angle, angle)
- if center is not None and isinstance(center[0], (float, int)):
- center = (center, center)
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- center = None
- if self.is_range:
- angle = np.random.uniform(self.angle[0], self.angle[1])
- if self.center is not None:
- center = (
- np.random.uniform(self.center[0][0], self.center[1][0]),
- np.random.uniform(self.center[0][1], self.center[1][1]),
- )
- else:
- angle = np.random.choice(self.angle)
- if self.center is not None:
- center = np.random.choice(self.center)
-
- if center is not None:
- center = (w * center[0], h * center[1]) # Convert to absolute coordinates
-
- if angle % 360 == 0:
- return NoOpTransform()
-
- return RotationTransform(h, w, angle, expand=self.expand, center=center, interp=self.interp)
-
-
-class FixedSizeCrop(Augmentation):
- """
- If `crop_size` is smaller than the input image size, then it uses a random crop of
- the crop size. If `crop_size` is larger than the input image size, then it pads
- the right and the bottom of the image to the crop size if `pad` is True, otherwise
- it returns the smaller image.
- """
-
- def __init__(self, crop_size: Tuple[int], pad: bool = True, pad_value: float = 128.0):
- """
- Args:
- crop_size: target image (height, width).
- pad: if True, will pad images smaller than `crop_size` up to `crop_size`
- pad_value: the padding value.
- """
- super().__init__()
- self._init(locals())
-
- def _get_crop(self, image: np.ndarray) -> Transform:
- # Compute the image scale and scaled size.
- input_size = image.shape[:2]
- output_size = self.crop_size
-
- # Add random crop if the image is scaled up.
- max_offset = np.subtract(input_size, output_size)
- max_offset = np.maximum(max_offset, 0)
- offset = np.multiply(max_offset, np.random.uniform(0.0, 1.0))
- offset = np.round(offset).astype(int)
- return CropTransform(
- offset[1], offset[0], output_size[1], output_size[0], input_size[1], input_size[0]
- )
-
- def _get_pad(self, image: np.ndarray) -> Transform:
- # Compute the image scale and scaled size.
- input_size = image.shape[:2]
- output_size = self.crop_size
-
- # Add padding if the image is scaled down.
- pad_size = np.subtract(output_size, input_size)
- pad_size = np.maximum(pad_size, 0)
- original_size = np.minimum(input_size, output_size)
- return PadTransform(
- 0, 0, pad_size[1], pad_size[0], original_size[1], original_size[0], self.pad_value
- )
-
- def get_transform(self, image: np.ndarray) -> TransformList:
- transforms = [self._get_crop(image)]
- if self.pad:
- transforms.append(self._get_pad(image))
- return TransformList(transforms)
-
-
-class RandomCrop(Augmentation):
- """
- Randomly crop a rectangle region out of an image.
- """
-
- def __init__(self, crop_type: str, crop_size):
- """
- Args:
- crop_type (str): one of "relative_range", "relative", "absolute", "absolute_range".
- crop_size (tuple[float, float]): two floats, explained below.
-
- - "relative": crop a (H * crop_size[0], W * crop_size[1]) region from an input image of
- size (H, W). crop size should be in (0, 1]
- - "relative_range": uniformly sample two values from [crop_size[0], 1]
-          and [crop_size[1], 1], and use them as in "relative" crop type.
- - "absolute" crop a (crop_size[0], crop_size[1]) region from input image.
- crop_size must be smaller than the input image size.
- - "absolute_range", for an input of size (H, W), uniformly sample H_crop in
- [crop_size[0], min(H, crop_size[1])] and W_crop in [crop_size[0], min(W, crop_size[1])].
- Then crop a region (H_crop, W_crop).
- """
- # TODO style of relative_range and absolute_range are not consistent:
- # one takes (h, w) but another takes (min, max)
- super().__init__()
- assert crop_type in ["relative_range", "relative", "absolute", "absolute_range"]
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- croph, cropw = self.get_crop_size((h, w))
- assert h >= croph and w >= cropw, "Shape computation in {} has bugs.".format(self)
- h0 = np.random.randint(h - croph + 1)
- w0 = np.random.randint(w - cropw + 1)
- return CropTransform(w0, h0, cropw, croph)
-
- def get_crop_size(self, image_size):
- """
- Args:
- image_size (tuple): height, width
-
- Returns:
- crop_size (tuple): height, width in absolute pixels
- """
- h, w = image_size
- if self.crop_type == "relative":
- ch, cw = self.crop_size
- return int(h * ch + 0.5), int(w * cw + 0.5)
- elif self.crop_type == "relative_range":
- crop_size = np.asarray(self.crop_size, dtype=np.float32)
- ch, cw = crop_size + np.random.rand(2) * (1 - crop_size)
- return int(h * ch + 0.5), int(w * cw + 0.5)
- elif self.crop_type == "absolute":
- return (min(self.crop_size[0], h), min(self.crop_size[1], w))
- elif self.crop_type == "absolute_range":
- assert self.crop_size[0] <= self.crop_size[1]
- ch = np.random.randint(min(h, self.crop_size[0]), min(h, self.crop_size[1]) + 1)
- cw = np.random.randint(min(w, self.crop_size[0]), min(w, self.crop_size[1]) + 1)
- return ch, cw
- else:
- raise NotImplementedError("Unknown crop type {}".format(self.crop_type))
-
-
-class RandomCrop_CategoryAreaConstraint(Augmentation):
- """
- Similar to :class:`RandomCrop`, but find a cropping window such that no single category
- occupies a ratio of more than `single_category_max_area` in semantic segmentation ground
-    truth, which can cause instability in training. The function attempts to find such a valid
-    cropping window at most 10 times.
- """
-
- def __init__(
- self,
- crop_type: str,
- crop_size,
- single_category_max_area: float = 1.0,
- ignored_category: int = None,
- ):
- """
- Args:
- crop_type, crop_size: same as in :class:`RandomCrop`
- single_category_max_area: the maximum allowed area ratio of a
- category. Set to 1.0 to disable
- ignored_category: allow this category in the semantic segmentation
- ground truth to exceed the area ratio. Usually set to the category
- that's ignored in training.
- """
- self.crop_aug = RandomCrop(crop_type, crop_size)
- self._init(locals())
-
- def get_transform(self, image, sem_seg):
- if self.single_category_max_area >= 1.0:
- return self.crop_aug.get_transform(image)
- else:
- h, w = sem_seg.shape
- for _ in range(10):
- crop_size = self.crop_aug.get_crop_size((h, w))
- y0 = np.random.randint(h - crop_size[0] + 1)
- x0 = np.random.randint(w - crop_size[1] + 1)
- sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]]
- labels, cnt = np.unique(sem_seg_temp, return_counts=True)
- if self.ignored_category is not None:
- cnt = cnt[labels != self.ignored_category]
- if len(cnt) > 1 and np.max(cnt) < np.sum(cnt) * self.single_category_max_area:
- break
- crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0])
- return crop_tfm
-
-
-class RandomExtent(Augmentation):
- """
- Outputs an image by cropping a random "subrect" of the source image.
-
- The subrect can be parameterized to include pixels outside the source image,
- in which case they will be set to zeros (i.e. black). The size of the output
- image will vary with the size of the random subrect.
- """
-
- def __init__(self, scale_range, shift_range):
- """
- Args:
- scale_range (l, h): Range of input-to-output size scaling factor
- shift_range (x, y): Range of shifts of the cropped subrect. The rect
- is shifted by [w / 2 * Uniform(-x, x), h / 2 * Uniform(-y, y)],
- where (w, h) is the (width, height) of the input image. Set each
- component to zero to crop at the image's center.
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- img_h, img_w = image.shape[:2]
-
- # Initialize src_rect to fit the input image.
- src_rect = np.array([-0.5 * img_w, -0.5 * img_h, 0.5 * img_w, 0.5 * img_h])
-
- # Apply a random scaling to the src_rect.
- src_rect *= np.random.uniform(self.scale_range[0], self.scale_range[1])
-
- # Apply a random shift to the coordinates origin.
- src_rect[0::2] += self.shift_range[0] * img_w * (np.random.rand() - 0.5)
- src_rect[1::2] += self.shift_range[1] * img_h * (np.random.rand() - 0.5)
-
- # Map src_rect coordinates into image coordinates (center at corner).
- src_rect[0::2] += 0.5 * img_w
- src_rect[1::2] += 0.5 * img_h
-
- return ExtentTransform(
- src_rect=(src_rect[0], src_rect[1], src_rect[2], src_rect[3]),
- output_size=(int(src_rect[3] - src_rect[1]), int(src_rect[2] - src_rect[0])),
- )
-
-
-class RandomContrast(Augmentation):
- """
- Randomly transforms image contrast.
-
- Contrast intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce contrast
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase contrast
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation
- intensity_max (float): Maximum augmentation
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- return BlendTransform(src_image=image.mean(), src_weight=1 - w, dst_weight=w)
-
-
-class RandomBrightness(Augmentation):
- """
- Randomly transforms image brightness.
-
- Brightness intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce brightness
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase brightness
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation
- intensity_max (float): Maximum augmentation
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- return BlendTransform(src_image=0, src_weight=1 - w, dst_weight=w)
-
-
-class RandomSaturation(Augmentation):
- """
- Randomly transforms saturation of an RGB image.
- Input images are assumed to have 'RGB' channel order.
-
- Saturation intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce saturation (make the image more grayscale)
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase saturation
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation (1 preserves input).
- intensity_max (float): Maximum augmentation (1 preserves input).
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- assert image.shape[-1] == 3, "RandomSaturation only works on RGB images"
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- grayscale = image.dot([0.299, 0.587, 0.114])[:, :, np.newaxis]
- return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w)
-
-
-class RandomLighting(Augmentation):
- """
- The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet.
- Input images are assumed to have 'RGB' channel order.
-
- The degree of color jittering is randomly sampled via a normal distribution,
- with standard deviation given by the scale parameter.
- """
-
- def __init__(self, scale):
- """
- Args:
- scale (float): Standard deviation of principal component weighting.
- """
- super().__init__()
- self._init(locals())
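-        # Fixed PCA eigenvectors / eigenvalues of ImageNet RGB pixel values, as used in AlexNet's lighting augmentation.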
- self.eigen_vecs = np.array(
- [[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203]]
- )
- self.eigen_vals = np.array([0.2175, 0.0188, 0.0045])
-
- def get_transform(self, image):
- assert image.shape[-1] == 3, "RandomLighting only works on RGB images"
- weights = np.random.normal(scale=self.scale, size=3)
- return BlendTransform(
- src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0
- )
diff --git a/spaces/AyushP/PolicyCompareBot/app.py b/spaces/AyushP/PolicyCompareBot/app.py
deleted file mode 100644
index 5f9c66623bbc18f4991cf9d06969c5875563c903..0000000000000000000000000000000000000000
--- a/spaces/AyushP/PolicyCompareBot/app.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import openai
-import streamlit as st
-import sqlite3
-from PIL import Image
-import pandas as pd
-
-openai.api_key = "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"
-# Database Connection
-
-conn = sqlite3.connect('bank.db')
-c = conn.cursor()
-
-def policyCompare():
- st.title("Compare Two Policy")
-
- with st.container():
- st.header("Select Policy 1")
- question_2 = "Select the Institution from where you want the Insurance"
- options_policy1 = ["Bank of Baroda", "State Bank of India(SBI)", "HDFC Bank", "LIC"]
-
- st.subheader(question_2)
- selected_option_policy1 = st.selectbox("Please enter your option for Policy 1:", options_policy1)
-
-
-
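-        # Look up the policies offered by the selected institution.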
- c.execute('SELECT Policy_Name FROM BANK WHERE Bank_Name= "{}"'.format(selected_option_policy1))
- options_3 = c.fetchall()
-
-
- my_options = []
- for row in options_3:
- my_options.append(row[0])
-
- st.subheader("Select the Policy Name")
- selected_policy1 = st.selectbox("Please enter your option for Policy 1:", my_options)
-
- c.execute('SELECT Policy_doc FROM BANK WHERE Policy_Name = "{}"'.format(selected_policy1))
- policy_doc_link1 = c.fetchone()
-
-
-
-
- with st.container():
- st.header("Select Policy 2")
- question_2 = "Select the Institution from where you want the Insurance"
- options_policy2 = ["Bank of Baroda", "State Bank of India(SBI)", "HDFC Bank", "LIC"]
-
- st.subheader(question_2)
- selected_option_policy2 = st.selectbox("Please enter your option for Policy 2:", options_policy2)
-
-
-
- c.execute('SELECT Policy_Name FROM BANK WHERE Bank_Name= "{}"'.format(selected_option_policy2))
- options_3 = c.fetchall()
-
- # st.write(options_3)
- my_options2 = []
- for row in options_3:
- my_options2.append(row[0])
-
- st.subheader("Select the Policy Name")
- selected_policy2 = st.selectbox("Please enter your option for Policy 2:", my_options2)
-
-        c.execute('SELECT Policy_doc FROM BANK WHERE Policy_Name = "{}"'.format(selected_policy2))
- policy_doc_link2 = c.fetchone()
-
-    if selected_policy2:
- st.header("Comparison")
- st.subheader("Policy 1 : {}".format(selected_policy1))
- st.subheader("Policy 2 : {}".format(selected_policy2))
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt="Compare the two health insurance policy using the policy document\nPolicy 1 Document: {},\nPolicy 2 Document: {}\nStrictly show the answer in tabular format:-".format(policy_doc_link1, policy_doc_link2),
- temperature=0.05,
- max_tokens=300,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0,
- stop=[":-"]
- )
-
- compare_response = response.choices[0].text
- st.write(f"Answer: {compare_response}")
-
-if __name__ == '__main__':
- policyCompare()
\ No newline at end of file
diff --git a/spaces/Banbri/zcvzcv/src/lib/utils.ts b/spaces/Banbri/zcvzcv/src/lib/utils.ts
deleted file mode 100644
index ec79801fe9cdd7711f6dbef26678a134c634a8be..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/lib/utils.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-import { type ClassValue, clsx } from "clsx"
-import { twMerge } from "tailwind-merge"
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
diff --git a/spaces/Bart92/RVC_HF/demucs/raw.py b/spaces/Bart92/RVC_HF/demucs/raw.py
deleted file mode 100644
index d4941ad2d7ed858f490db441f5b46b12bd61ad78..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/demucs/raw.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-from collections import defaultdict, namedtuple
-from pathlib import Path
-
-import musdb
-import numpy as np
-import torch as th
-import tqdm
-from torch.utils.data import DataLoader
-
-from .audio import AudioFile
-
-ChunkInfo = namedtuple("ChunkInfo", ["file_index", "offset", "local_index"])
-
-
-class Rawset:
- """
- Dataset of raw, normalized, float32 audio files
- """
- def __init__(self, path, samples=None, stride=None, channels=2, streams=None):
- self.path = Path(path)
- self.channels = channels
- self.samples = samples
- if stride is None:
- stride = samples if samples is not None else 0
- self.stride = stride
- entries = defaultdict(list)
- for root, folders, files in os.walk(self.path, followlinks=True):
- folders.sort()
- files.sort()
- for file in files:
- if file.endswith(".raw"):
- path = Path(root) / file
- name, stream = path.stem.rsplit('.', 1)
- entries[(path.parent.relative_to(self.path), name)].append(int(stream))
-
- self._entries = list(entries.keys())
-
- sizes = []
- self._lengths = []
- ref_streams = sorted(entries[self._entries[0]])
- assert ref_streams == list(range(len(ref_streams)))
- if streams is None:
- self.streams = ref_streams
- else:
- self.streams = streams
- for entry in sorted(entries.keys()):
- streams = entries[entry]
- assert sorted(streams) == ref_streams
- file = self._path(*entry)
- length = file.stat().st_size // (4 * channels)
- if samples is None:
- sizes.append(1)
- else:
- if length < samples:
- self._entries.remove(entry)
- continue
- sizes.append((length - samples) // stride + 1)
- self._lengths.append(length)
- if not sizes:
- raise ValueError(f"Empty dataset {self.path}")
- self._cumulative_sizes = np.cumsum(sizes)
- self._sizes = sizes
-
- def __len__(self):
- return self._cumulative_sizes[-1]
-
- @property
- def total_length(self):
- return sum(self._lengths)
-
- def chunk_info(self, index):
- file_index = np.searchsorted(self._cumulative_sizes, index, side='right')
- if file_index == 0:
- local_index = index
- else:
- local_index = index - self._cumulative_sizes[file_index - 1]
- return ChunkInfo(offset=local_index * self.stride,
- file_index=file_index,
- local_index=local_index)
-
- def _path(self, folder, name, stream=0):
- return self.path / folder / (name + f'.{stream}.raw')
-
- def __getitem__(self, index):
- chunk = self.chunk_info(index)
- entry = self._entries[chunk.file_index]
-
- length = self.samples or self._lengths[chunk.file_index]
- streams = []
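-        # Samples are 4-byte float32 values, so offsets and read sizes are in units of 4 * channels bytes.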
- to_read = length * self.channels * 4
- for stream_index, stream in enumerate(self.streams):
- offset = chunk.offset * 4 * self.channels
- file = open(self._path(*entry, stream=stream), 'rb')
- file.seek(offset)
- content = file.read(to_read)
- assert len(content) == to_read
- content = np.frombuffer(content, dtype=np.float32)
- content = content.copy() # make writable
- streams.append(th.from_numpy(content).view(length, self.channels).t())
- return th.stack(streams, dim=0)
-
- def name(self, index):
- chunk = self.chunk_info(index)
- folder, name = self._entries[chunk.file_index]
- return folder / name
-
-
-class MusDBSet:
- def __init__(self, mus, streams=slice(None), samplerate=44100, channels=2):
- self.mus = mus
- self.streams = streams
- self.samplerate = samplerate
- self.channels = channels
-
- def __len__(self):
- return len(self.mus.tracks)
-
- def __getitem__(self, index):
- track = self.mus.tracks[index]
- return (track.name, AudioFile(track.path).read(channels=self.channels,
- seek_time=0,
- streams=self.streams,
- samplerate=self.samplerate))
-
-
-def build_raw(mus, destination, normalize, workers, samplerate, channels):
- destination.mkdir(parents=True, exist_ok=True)
- loader = DataLoader(MusDBSet(mus, channels=channels, samplerate=samplerate),
- batch_size=1,
- num_workers=workers,
- collate_fn=lambda x: x[0])
- for name, streams in tqdm.tqdm(loader):
- if normalize:
- ref = streams[0].mean(dim=0) # use mono mixture as reference
- streams = (streams - ref.mean()) / ref.std()
- for index, stream in enumerate(streams):
- open(destination / (name + f'.{index}.raw'), "wb").write(stream.t().numpy().tobytes())
-
-
-def main():
- parser = argparse.ArgumentParser('rawset')
- parser.add_argument('--workers', type=int, default=10)
- parser.add_argument('--samplerate', type=int, default=44100)
- parser.add_argument('--channels', type=int, default=2)
- parser.add_argument('musdb', type=Path)
- parser.add_argument('destination', type=Path)
-
- args = parser.parse_args()
-
- build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="train"),
- args.destination / "train",
- normalize=True,
- channels=args.channels,
- samplerate=args.samplerate,
- workers=args.workers)
- build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="valid"),
- args.destination / "valid",
- normalize=True,
- samplerate=args.samplerate,
- channels=args.channels,
- workers=args.workers)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Benson/text-generation/Examples/9ice Kasa Final Mp3 Descargar.md b/spaces/Benson/text-generation/Examples/9ice Kasa Final Mp3 Descargar.md
deleted file mode 100644
index 3231ef52490ca86fff314652bc2228889b51c176..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/9ice Kasa Final Mp3 Descargar.md
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
9ice Kasa Final Mp3 Download: A Review of the Hit Song by the Nigerian Music Legend
-
If you are a fan of Nigerian music, you have probably heard of 9ice Kasa Final, one of the most popular songs by the legendary singer, songwriter and dancer 9ice. The song, released in 2011 as part of his album Versus/Bashorun Gaa, is a catchy, captivating tune that showcases 9ice's powerful use of the Yoruba language, proverbial lyrics and unique delivery style. In this article, we review the song in detail, exploring its lyrics, meaning, music, production, reception and impact. We also provide information about 9ice himself, his background, achievements and influence on the Nigerian music industry.
9ice Kasa Final is a song that celebrates 9ice's success and dominance on the music scene, as well as his confidence and resilience in overcoming challenges and critics. The song's title translates to "Case Closed" or "End of Discussion" in English, implying that 9ice has nothing left to prove or say to anyone who doubts or opposes him. The song is also a tribute to the fans and supporters who have been loyal to him throughout his career.
-
-
The Lyrics and Meaning of Kasa Final
-
-label name, Alapomeji, which means "one who has many sides or faces". It also implies that 9ice is versatile and adaptable in his music and personality. - "Omo Bashorun Gaa" (Son of Bashorun Gaa): This is a reference to a historical figure in Yoruba history, Bashorun Gaa, a powerful and influential chief in the old Oyo Empire who was known for his cunning and ruthless tactics in politics and war. It also implies that 9ice is powerful and influential in the music industry. - "Omo Aare Ona Kakanfo" (Son of Aare Ona Kakanfo): This is a reference to another historical figure in Yoruba history, Aare Ona Kakanfo, the title given to the supreme military commander of the old Oyo Empire, known for his bravery and loyalty in defending the empire from its enemies. It also implies that 9ice is brave and loyal in defending his music from its enemies.
The Music and Production of Kasa Final
-
9ice Kasa Final blends traditional and modern musical elements and influences to create a unique delivery style. The song features a fast tempo, an upbeat rhythm, a catchy melody and energetic vocals. It also incorporates a range of instruments and sounds, such as drums, keyboards, guitars, horns, flutes, shakers, hand claps, chants, whistles, sirens, gunshots, etc.
-
-
9ice Kasa Final was a major hit and received positive reviews and feedback from fans and critics alike. It was one of the most played and downloaded songs in Nigeria and across Africa in 2011. The song also performed well on various charts, platforms and media outlets, such as MTV Base, Trace TV, Soundcity, Naija FM, etc. It also won several awards and nominations, including Best Collaboration at the Nigeria Music Video Awards (NMVA), Best Afro Pop Song at the City People Entertainment Awards (CPEA) and Best Song of the Year at the Nigerian Entertainment Awards (NEA).
-
The song also contributed to 9ice's career and legacy as a musician, cementing his status as one of the most respected and influential artists in Nigeria and Africa. It showcased his versatility and creativity as a singer, songwriter and dancer, and it inspired many other artists and fans to appreciate and celebrate their own culture and language, as well as their own achievements and challenges.
-
-
Conclusion
-
-
Here are some frequently asked questions and answers about the song, 9ice, or the Nigerian music industry.
-
| Question | Answer |
| --- | --- |
| What does Kasa mean in Yoruba? | Kasa means "case" or "matter" in Yoruba. It can also mean "to close" or "to end". In the context of the song, it means "case closed" or "end of discussion". |
| What is the name of 9ice's wife? | 9ice is married to Olasunkanmi Ajala, an event planner and entrepreneur. They married in 2019 and have a daughter together. 9ice also has three other children from previous relationships. |
| Who is the richest musician in Nigeria? | According to Forbes, the richest musician in Nigeria in 2021 is Wizkid, with an estimated net worth of $30 million. He is followed by Davido, with an estimated net worth of $25 million, and Burna Boy, with an estimated net worth of $20 million. |
| What is the meaning of Gongo Aso? | Gongo Aso is another hit song by 9ice, released in 2008. The title means "Thunder Fire" or "Thunder Strike" in Yoruba. It is a slang expression that can be used to curse someone or something, or to express surprise or shock. |
| What are some of the awards 9ice has won? | Some of the awards 9ice has won include: MOBO Award for Best African Act in 2008; MTV Africa Music Award for Best Hip Hop in 2008; The Headies Award for Artiste of the Year in 2008; The Headies Award for Album of the Year in 2008; The Headies Award for Song of the Year in 2008; The Headies Award for Best Vocal Performance (Male) in 2008; The Headies Award for Best R&B/Pop Album in 2016; City People Entertainment Award for Special Recognition/Hall of Fame in 2016; City People Music Award for Best Collabo of the Year (Song) in 2017; City People Music Award for Rap Album of the Year in 2017; etc. |
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Barco Rampa De Salto Apk Mod.md b/spaces/Benson/text-generation/Examples/Barco Rampa De Salto Apk Mod.md
deleted file mode 100644
index 6774307a8c7bdb9f3881c9e5c80746a6ae11499e..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Barco Rampa De Salto Apk Mod.md
+++ /dev/null
@@ -1,58 +0,0 @@
-
-
Juego Stickman caída Mod APK: Un divertido y loco juego de física
-
¿Te gustan los juegos basados en la física que te permiten dar rienda suelta a tu creatividad e imaginación? ¿Te gusta ver figuras de palo chocar, quemarse y explotar de manera hilarante? Si respondiste sí a estas preguntas, entonces te encantará Game Stickman Falling, un divertido y loco juego de física que te hará reír en voz alta.
-
¿Qué es Game Stickman Falling?
-
Game Stickman Falling es un juego de simulación de física desarrollado por Skygo. En este juego, controlas una figura de palo que puede montar varios vehículos y realizar acrobacias, trucos y accidentes. El juego tiene efectos realistas de física y ragdoll, lo que significa que tu figura de palo reaccionará a cada impacto, colisión y explosión. También puede personalizar su figura de palo con diferentes trajes, accesorios y armas.
El juego de Game Stickman Falling es simple pero adictivo. Puedes elegir entre diferentes modos, como el modo libre, el modo desafío o el modo multijugador. En modo libre, puedes explorar el mundo del juego y probar diferentes vehículos y escenarios. En el modo desafío, tienes que completar tareas y objetivos específicos, como alcanzar cierta velocidad, distancia o puntuación. En el modo multijugador, puedes competir con otros jugadores online y ver quién puede causar más daño y caos.
-
Las características de Game Stickman Falling
-
Game Stickman Falling tiene muchas características que lo convierten en un juego divertido y entretenido. Algunas de estas características son:
-
-
Una variedad de vehículos para elegir, como coches, bicicletas, camiones, aviones, helicópteros, cohetes y más.
-
Un mapa grande con diferentes terrenos, obstáculos, rampas, bucles, puentes y trampas.
-
Un motor de física realista que simula gravedad, fricción, inercia, momento y fuerza.
-
Un sistema ragdoll que hace que tu figura de palo reaccione a cada impacto y lesión.
-
-
Una opción de repetición que te permite ver tus acrobacias y accidentes desde diferentes ángulos y perspectivas.
-
Una tabla de clasificación y sistema de logros que rastrea su progreso y rendimiento.
-
-
¿Por qué descargar juego Stickman caída Mod APK?
-
Game Stickman Falling es un juego gratuito que puedes descargar desde Google Play Store. Sin embargo, si desea disfrutar del juego al máximo, es posible que desee descargar Game Stickman Falling Mod APK lugar. Esta es una versión modificada del juego que te da algunas ventajas y beneficios sobre la versión original. Algunas de estas ventajas son:
-
Dinero ilimitado
-
Con Game Stickman Falling Mod APK, usted tendrá dinero ilimitado en el juego. Esto significa que puede comprar cualquier vehículo o artículo que desee sin preocuparse por el costo. También puede actualizar sus vehículos y artículos para hacerlos más potentes y duraderos.
-
No hay anuncios
-
Con Game Stickman Falling Mod APK, no verá ningún anuncio en el juego. Esto significa que puede jugar el juego sin interrupciones ni distracciones. También puede guardar sus datos y duración de la batería al no cargar ningún anuncio.
-
Más vehículos y niveles
-
Con Game Stickman Falling Mod APK, tendrá acceso a más vehículos y niveles que la versión original. Esto significa que puedes disfrutar de más variedad y diversidad en el juego. También puedes desafiarte con escenarios más difíciles y emocionantes.
-
-
¿Cómo descargar e instalar el juego Stickman Falling Mod APK?
-
Si desea descargar e instalar Game Stickman Falling Mod APK, debe seguir estos sencillos pasos:
-
Paso 1: Descargar el archivo APK
-
El primer paso es descargar el archivo APK de Game Stickman Falling Mod APK desde una fuente confiable. Puede utilizar el siguiente enlace para descargar el archivo directamente a su dispositivo.
El tercer paso es instalar el archivo APK que descargó en el paso 1. Para hacer esto, debe localizar el archivo en el almacenamiento del dispositivo y pulsar en él. Luego, debe seguir las instrucciones en la pantalla y otorgar los permisos necesarios. El proceso de instalación tomará unos segundos o minutos dependiendo del dispositivo.
-
Paso 4: Disfruta del juego
-
El cuarto y último paso es disfrutar del juego. Ahora puede iniciar Game Stickman caída Mod APK desde el cajón de la aplicación o la pantalla de inicio y empezar a jugar. Notarás que tienes dinero ilimitado, sin anuncios, y más vehículos y niveles en el juego.
-
Conclusión
-
Juego Stickman caída Mod APK es un juego de física divertido y loco que te hará reír en voz alta. Puede controlar una figura de palo que puede montar varios vehículos y realizar acrobacias, trucos y accidentes. También puede personalizar su figura de palo con diferentes trajes, accesorios y armas. El juego tiene efectos realistas de física y ragdoll, lo que significa que tu figura de palo reaccionará a cada impacto, colisión y explosión. También puedes elegir entre diferentes modos, como modo libre, modo desafío o modo multijugador.
-
Si desea disfrutar del juego al máximo, usted debe descargar Game Stickman Falling Mod APK en lugar de la versión original. Esta es una versión modificada del juego que le da algunas ventajas y beneficios sobre la versión original. Usted tendrá dinero ilimitado, sin anuncios, y más vehículos y niveles en el juego. También puedes descargar e instalar el juego fácilmente siguiendo los sencillos pasos anteriores.
-
Entonces, ¿qué estás esperando? Descargar Game Stickman Falling Mod APK ahora y divertirse!
-
Preguntas frecuentes
-
-
Q: ¿Es seguro descargar e instalar Game Stickman Falling Mod APK?
-
-
Q: ¿Necesito rootear mi dispositivo para usar Game Stickman Falling Mod APK?
-
A: No, no es necesario rootear el dispositivo para usar Game Stickman Falling Mod APK. El juego funciona bien tanto en dispositivos arraigados y no arraigados.
-
Q: ¿Puedo jugar juego stickman caída mod APK fuera de línea?
-
A: Sí, puedes jugar Game Stickman Falling Mod APK offline. El juego no requiere una conexión a Internet para funcionar. Sin embargo, algunas características como el modo multijugador pueden no funcionar sin conexión.
-
Q: ¿Puedo actualizar Game Stickman Falling Mod APK?
-
A: Sí, puede actualizar Game Stickman Falling Mod APK si hay una nueva versión disponible. Sin embargo, puedes perder algunas de las características del mod si actualizas el juego desde Google Play Store. Para mantener las características mod, debes actualizar el juego desde la misma fuente donde lo descargaste.
-
Q: ¿Puedo jugar juego stickman caída mod APK con mis amigos?
-
A: Sí, usted puede jugar Game Stickman Falling Mod APK con tus amigos. El juego tiene un modo multijugador que te permite competir con otros jugadores en línea. También puedes compartir tus repeticiones y logros con tus amigos en las redes sociales.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Base De La Fuerza Area Inactiva Mod Apk Dinero Ilimitado.md b/spaces/Benson/text-generation/Examples/Base De La Fuerza Area Inactiva Mod Apk Dinero Ilimitado.md
deleted file mode 100644
index d832ffff95d33f1e4eb702e57d9fdb844127f9de..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Base De La Fuerza Area Inactiva Mod Apk Dinero Ilimitado.md
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
Base de la Fuerza Aérea inactiva Mod APK: Construir su propio imperio militar
-
¿Sueñas con convertirte en un poderoso líder militar? ¿Quieres construir y gestionar tu propia base aérea? ¿Quieres entrenar y comandar a los mejores pilotos y aviones del mundo? Si respondiste sí a cualquiera de estas preguntas, entonces deberías probar Idle Air Force Base, un divertido y adictivo juego de ocio que te permite crear tu propio imperio militar. Y si desea hacer su juego aún más emocionante y gratificante, usted debe descargar Idle Air Force Base Mod APK, que le da dinero ilimitado y sin anuncios. En este artículo, le diremos todo lo que necesita saber acerca de este increíble apk mod, incluyendo lo que es, por qué debe descargarlo, y cómo instalarlo en su dispositivo.
-
¿Qué es la base aérea inactiva?
-
Idle Air Force Base es un juego inactivo que simula el funcionamiento y la gestión de una base de la fuerza aérea. Empiezas con una base pequeña y básica, y tu objetivo es expandirla y hacerla la más poderosa y avanzada del mundo. Puedes hacerlo mejorando tus instalaciones, entrenando a tus pilotos, investigando nuevas tecnologías y lanzando misiones para ganar dinero y prestigio. A medida que avances en el juego, desbloquearás nuevos aviones, como cazas, bombarderos, aviones furtivos, drones, helicópteros y más. También enfrentará diferentes desafíos y escenarios, como guerras, desastres, invasiones y emergencias. Tendrás que usar tu estrategia y habilidades para superarlas y proteger tu base.
-
base de la fuerza aérea inactiva mod apk dinero ilimitado
Otra gran característica de Idle Air Force Base es que es muy realista y envolvente. El juego simula el funcionamiento real y la gestión de una base de la fuerza aérea, con todos sus aspectos y detalles. Tendrás que lidiar con varios factores, como las condiciones climáticas, el consumo de combustible, los costos de mantenimiento, los riesgos de seguridad, las amenazas enemigas y más. También tendrá que seguir las reglas y regulaciones de los militares, tales como rangos, medallas, honores, protocolos y códigos. El juego también cuenta con modelos de aviones de la vida real, como F-16, F-22, B-2, C-130, helicópteros Apache, drones Predator, y más. Te sentirás como si estuvieras realmente a cargo de una base real de la fuerza aérea.
-
Una gestión estratégica y gratificante
-
La última pero no menos importante característica de Idle Air Force Base es que es muy estratégico y gratificante. El juego requiere que uses tu cerebro y habilidades para tomar decisiones inteligentes y optimizar tu rendimiento base. Tendrás que equilibrar tu presupuesto, asignar tus recursos, priorizar tus mejoras, planificar tus misiones, elegir tu avión, asignar tus pilotos y más. También tendrás que enfrentarte a diferentes retos y escenarios que pondrán a prueba tus habilidades y creatividad. El juego te recompensa por tus esfuerzos con dinero, puntos de prestigio, logros y trofeos. Puedes usarlos para mejorar tu base, desbloquear nuevas características y posicionarte en la jerarquía militar. También puede comparar su progreso y logros con otros jugadores de todo el mundo a través de la clasificación en línea y el sistema de chat. El juego te ofrece interminables horas de diversión y satisfacción.
-
¿Por qué descargar Idle Air Force Base Mod APK?
-
-
Dinero ilimitado para actualizar tu base
-
Una de las principales características de Idle Air Force Base Mod APK es que le da dinero ilimitado para gastar en su base. El dinero es la moneda principal en el juego, y lo necesitas para mejorar tus instalaciones, entrenar a tus pilotos, investigar nuevas tecnologías y lanzar misiones. Sin embargo, el dinero no es fácil de conseguir en el juego, ya que tienes que esperar a que tus ganancias se acumulen con el tiempo, o ver anuncios para obtener algo de dinero extra. Esto puede ser frustrante y consume mucho tiempo, especialmente si desea progresar más rápido y desbloquear más funciones. Con Idle Air Force Base Mod APK, usted no tiene que preocuparse por el dinero más, ya que tendrá una cantidad infinita de ella a su disposición. Puede actualizar su base tanto como desee, sin limitaciones ni restricciones. También puedes saltarte los anuncios y disfrutar de un juego más fluido e ininterrumpido.
-
No hay anuncios para interrumpir tu juego
-
Otra característica de Idle Air Force Base Mod APK es que elimina todos los anuncios del juego. Los anuncios son una característica común en la mayoría de los juegos gratuitos, y se utilizan para generar ingresos para los desarrolladores y editores. Sin embargo, los anuncios también pueden ser molestos e intrusivos, ya que pueden aparecer en cualquier momento e interrumpir el juego. También pueden afectar el rendimiento del dispositivo y la duración de la batería, ya que consumen datos y recursos. Con Idle Air Force Base Mod APK, no tienes que lidiar con los anuncios más, ya que son completamente eliminados del juego. Puedes jugar sin distracciones ni interrupciones y disfrutar de un juego más rápido y fluido.
-
Fácil instalación y compatibilidad
-
-
¿Cómo descargar e instalar Idle Air Force Base Mod APK?
-
Si está interesado en descargar e instalar Idle Air Force Base Mod APK en su dispositivo, puede seguir estos sencillos pasos:
-
Paso 1: Descargar el archivo apk mod de una fuente de confianza
-
El primer paso es descargar el archivo apk mod de una fuente de confianza. Usted puede encontrar muchos sitios web que ofrecen este archivo apk mod de forma gratuita, pero hay que tener cuidado y evitar cualquier enlaces maliciosos o falsos que pueden dañar su dispositivo o robar sus datos. Le recomendamos utilizar este enlace para descargar el archivo apk mod de forma segura.
-
-
Paso 2: Habilitar fuentes desconocidas en la configuración del dispositivo
-
El segundo paso es habilitar fuentes desconocidas en la configuración del dispositivo. Esto es necesario porque este archivo apk mod no es de la tienda oficial de Google Play, y por lo tanto su dispositivo puede bloquear su instalación por defecto. Para habilitar fuentes desconocidas, debe ir a la configuración del dispositivo, luego a la configuración de seguridad o privacidad, y luego buscar y activar la opción que permite la instalación desde fuentes desconocidas.
-
Paso 3: Instalar el archivo apk mod y lanzar el juego
-
El tercer y último paso es instalar el archivo apk mod y lanzar el juego. Para instalar el archivo apk mod, usted tiene que localizar en su dispositivo de almacenamiento o descargas carpeta, a continuación, toque en él y siga las instrucciones en la pantalla. El proceso de instalación debe tomar solo unos segundos o minutos, dependiendo de la velocidad del dispositivo y la memoria. Una vez completada la instalación, puede iniciar el juego tocando en su icono en la pantalla de inicio o en el cajón de la aplicación. Ahora puede disfrutar de Idle Air Force Base Mod APK con dinero ilimitado y sin anuncios.
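Como alternativa hipotética a la instalación manual descrita arriba (no forma parte de la guía original), quien ya tenga ADB configurado en su PC y la depuración USB activada puede instalar el APK desde la línea de comandos; el nombre del archivo es solo un ejemplo ilustrativo:

```python
# Boceto hipotético: instala el APK por ADB en lugar de tocar el instalador en pantalla.
# Supone que "adb" está en el PATH del sistema y que la depuración USB está habilitada.
import subprocess

def instalar_apk(ruta_apk: str) -> None:
    # "adb install -r" reinstala la aplicación si ya existe, conservando sus datos.
    subprocess.run(["adb", "install", "-r", ruta_apk], check=True)

if __name__ == "__main__":
    instalar_apk("idle-air-force-base-mod.apk")  # nombre de archivo de ejemplo
```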
-
Conclusión
-
Idle Air Force Base Mod APK es un imprescindible para los aficionados a los juegos idle
-
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas más frecuentes sobre Idle Air Force Base Mod APK:
-
-
Q: ¿Es seguro usar Idle Air Force Base Mod APK?
-
A: Sí, Idle Air Force Base Mod APK es seguro de usar, siempre y cuando lo descargue de una fuente de confianza. Hemos probado el archivo apk mod y no encontramos virus o malware en él. Sin embargo, siempre debe tener cuidado al descargar cualquier archivo de Internet y escanearlo con un software antivirus antes de instalarlo en su dispositivo.
-
Q: ¿Es Idle Air Force Base Mod APK legal de usar?
-
A: Idle Air Force Base Mod APK no es legal de usar, ya que viola los términos y condiciones del juego original. Mediante el uso de este mod apk, que está modificando los archivos del juego y el acceso a las características que no están autorizados por los desarrolladores y editores. Esto puede resultar en que su cuenta sea prohibida o suspendida, o que su dispositivo esté en la lista negra. Por lo tanto, no recomendamos ni apoyamos el uso de este mod apk, y no somos responsables de las consecuencias que puedan surgir de su uso.
-
Q: ¿Requiere una conexión a Internet la Base de la Fuerza Aérea Inactiva Mod APK?
-
A: No, Idle Air Force Base Mod APK no requiere una conexión a Internet para jugar. Puede jugar el juego sin conexión sin ningún problema. Sin embargo, es posible que necesite una conexión a Internet para acceder a algunas funciones en línea, como la clasificación y el sistema de chat.
-
Q: ¿Puedo actualizar Idle Air Force Base Mod APK?
-
A: No, no se puede actualizar Idle Air Force Base Mod APK, ya que no es de la tienda oficial de Google Play. Si intenta actualizar el juego desde la fuente original, puede perder todo su progreso y características de mod. Por lo tanto, siempre debe comprobar si hay nuevas versiones de la apk mod de la misma fuente donde lo descargó.
-
Q: ¿Puedo jugar Idle Air Force Base Mod APK con mis amigos?
-
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Campeonato De Cricket Mundial 2 Juego De Ordenador.md b/spaces/Benson/text-generation/Examples/Campeonato De Cricket Mundial 2 Juego De Ordenador.md
deleted file mode 100644
index 5b459b50eddf2fd1a3b64d593ad4495e2bb35bbd..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Campeonato De Cricket Mundial 2 Juego De Ordenador.md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
World Cricket Championship 2: Cómo descargar y jugar en PC
-
Si eres un amante del cricket, debes haber oído hablar del Campeonato Mundial de Cricket 2, uno de los juegos de cricket más populares y realistas para dispositivos móviles. ¿Pero sabías que también puedes jugar a este increíble juego en tu PC? En este artículo, le mostraremos cómo descargar y jugar World Cricket Championship 2 en PC usando diferentes emuladores. Pero primero, veamos de qué se trata este juego y por qué deberías jugarlo en PC.
-
Introducción
-
¿Qué es el Campeonato Mundial de Cricket 2?
-
World Cricket Championship 2, o WCC2 para abreviar, es un juego de deportes desarrollado por Nextwave Multimedia. Es la secuela del aclamado juego del Campeonato Mundial de Cricket, que fue lanzado en 2015. WCC2 está diseñado para proporcionar a los amantes del cricket una experiencia de juego inmersiva y emocionante. Cuenta con gráficos avanzados, física realista, dinámica de juego y una variedad de modos y opciones para adaptarse a las preferencias de cada fanático del cricket.
-
campeonato de cricket mundial 2 juego de ordenador
Algunos de los aspectos más destacados de WCC2 son:
-
-
Más de 150 animaciones de bateo diferentes y 28 acciones de bolos
-
18 equipos internacionales, 10 equipos nacionales, 42 estadios y más de 11 torneos
-
Reproductores personalizables, jerseys, banners, logotipos y accesorios
-
Comentarios profesionales en inglés e hindi
-
Modo nocturno con tocones led y condiciones climáticas realistas
-
Modo desafío, Pandillas de modo de cricket, Blitz modo de torneo, y el modo multijugador en línea
-
Tablas de clasificación, logros, recompensas y perfiles de jugadores
-
-
Con tantas características y opciones, WCC2 es sin duda uno de los mejores juegos de cricket disponibles para dispositivos móviles. Pero ¿qué pasa si quieres jugar en una pantalla más grande con mejores controles? Ahí es donde jugar WCC2 en PC es muy útil.
-
¿Por qué jugar World Cricket Championship 2 en PC?
-
-
-
Tamaño de pantalla más grande: Puedes disfrutar de los impresionantes gráficos y animaciones de WCC2 en una pantalla más grande, lo que mejora la calidad visual y la inmersión del juego.
-
Mejores controles: Puedes usar tu ratón, teclado o gamepad para controlar a tus jugadores y ejecutar disparos con más precisión y precisión. También puede personalizar sus controles para adaptarse a sus preferencias.
-
Rendimiento más rápido: Puede ejecutar WCC2 sin problemas en su PC sin ningún retraso o problemas técnicos. También puede ajustar la configuración de gráficos para optimizar el rendimiento del juego.
-
Más espacio de almacenamiento: Puede ahorrar más datos y progreso de WCC2 en su PC sin preocuparse por quedarse sin espacio de almacenamiento o perder sus datos.
-
No hay problemas de drenaje de la batería o sobrecalentamiento: Puede jugar WCC2 durante horas en su PC sin drenar la batería o sobrecalentar el dispositivo.
-
-
Como puedes ver, jugar WCC2 en PC tiene muchos beneficios que lo hacen una experiencia de juego más agradable y satisfactoria. Pero, ¿cómo se puede jugar WCC2 en PC? Hay tres métodos que se pueden utilizar para descargar y jugar WCC2 en PC utilizando diferentes emuladores. Veamos qué son y cómo funcionan.
-
Cómo descargar World Cricket Championship 2 en PC
-
Un emulador es un software que le permite ejecutar aplicaciones y juegos para Android en su PC. Hay muchos emuladores disponibles para PC, pero no todos son compatibles con WCC2. Aquí hay tres de los mejores emuladores que puedes usar para descargar y jugar WCC2 en PC:
-
Método 1: Usando el emulador de BlueStacks
-
BlueStacks es uno de los emuladores más populares y ampliamente utilizados para PC. Tiene una interfaz fácil de usar, una gran tienda de aplicaciones y una alta compatibilidad con la mayoría de las aplicaciones y juegos de Android. Estos son los pasos para descargar y jugar WCC2 en PC usando BlueStacks:
-
Paso 1: Descargar e instalar BlueStacks en su PC
-
-
Paso 2: Lanza BlueStacks y busca el Campeonato Mundial de Cricket 2 en la tienda de aplicaciones
-
Después de instalar BlueStacks, iniciarlo e iniciar sesión con su cuenta de Google. Luego, vaya a la tienda de aplicaciones y busque el Campeonato Mundial de Cricket 2 en la barra de búsqueda. Verá el icono del juego en los resultados. Haga clic en él para ir a la página del juego.
-
-
Paso 3: Instalar World Cricket Championship 2 y disfrutar jugando en PC
-
En la página del juego, haga clic en el botón de instalación para comenzar a descargar e instalar WCC2 en su PC. El proceso puede tomar algún tiempo dependiendo de su velocidad de Internet y el rendimiento del PC. Una vez que la instalación se hace, puede iniciar el juego desde la pantalla de inicio o el cajón de aplicaciones de BlueStacks. Ahora puedes disfrutar jugando WCC2 en PC usando BlueStacks.
-
Método 2: Usando emulador LDPlayer
-
LDPlayer es otro gran emulador para PC que está diseñado para juegos. Tiene un rendimiento suave, una alta compatibilidad con la mayoría de los juegos de Android, y un montón de características y ajustes para mejorar su experiencia de juego. Estos son los pasos para descargar y jugar WCC2 en PC usando LDPlayer:
-
Paso 1: Descargar e instalar LDPlayer en su PC
-
Puede descargar LDPlayer desde su sitio web oficial aquí. El proceso de instalación es similar a BlueStacks. Solo tienes que seguir las instrucciones en la pantalla y esperar a que la instalación se complete.
-
Paso 2: Lanza LDPlayer y busca el Campeonato Mundial de Cricket 2 en el centro del juego
-
Después de instalar LDPlayer, inicie e inicie sesión con su cuenta de Google. Luego, ve al centro del juego y busca el Campeonato Mundial de Cricket 2 en la barra de búsqueda. Verás el icono del juego en los resultados. Haga clic en él para ir a la página del juego.
-
Paso 3: Instalar World Cricket Championship 2 y disfrutar jugando en PC
-
-
Método 3: Usando el emulador de GameLoop
-
GameLoop es otro excelente emulador para PC que está especialmente diseñado para juegos de Tencent. Tiene un rendimiento rápido, una alta compatibilidad con la mayoría de los juegos de Tencent y muchas características y configuraciones para optimizar su experiencia de juego. Estos son los pasos para descargar y jugar WCC2 en PC usando GameLoop:
-
Paso 1: Descarga e instala GameLoop en tu PC
-
Puede descargar GameLoop desde su sitio web oficial aquí. También puedes seguirlos en sus plataformas de redes sociales, como Facebook, Twitter, Instagram, YouTube y Discord. También puede enviarles un correo electrónico a support@nextwavemultimedia.com.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Choque Royale Mod Apk Nuevas Tarjetas.md b/spaces/Benson/text-generation/Examples/Choque Royale Mod Apk Nuevas Tarjetas.md
deleted file mode 100644
index 6d5b27524764d202ebc0834c46f40717ea55a27f..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Choque Royale Mod Apk Nuevas Tarjetas.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Choque Royale Mod APK nuevas tarjetas: Todo lo que necesita saber
-
¿Eres fan de Clash Royale, el popular juego de estrategia en tiempo real de Supercell? ¿Quieres darle vida a tu juego con algunas cartas nuevas y emocionantes que no están disponibles en la versión oficial del juego? Si es así, entonces usted podría estar interesado en probar Clash Royale Mod APK, una versión modificada del juego que le permite acceder a nuevas tarjetas, recursos ilimitados, y otras características. En este artículo, le diremos todo lo que necesita saber sobre Clash Royale Mod APK, incluyendo lo que es, ¿cuáles son las nuevas cartas, cómo descargarlo e instalarlo, y cómo jugar en línea con otros jugadores. ¡Vamos a empezar!
-
¿Qué es Clash Royale Mod APK?
-
Antes de sumergirnos en los detalles de las nuevas cartas, primero vamos a entender lo que es Clash Royale Mod APK y cómo se diferencia del juego original.
Una breve introducción a Clash Royale y su mecánica de juego
-
Clash Royale es un juego de torre de defensa, en el que se puede atacar a la torre del enemigo mediante el uso de personajes que se pueden recoger y subir de nivel (la mecánica de tarjetas de colección). Un jugador gana un juego si destruyó toda la torre del enemigo o destruyó más torre que el enemigo.
-
El juego cuenta con dos conjuntos de torres frente a frente en un campo de batalla de una sola pantalla. Los jugadores usan un elixir para desplegar tropas, edificios y hechizos desde una baraja de ocho cartas (extraídas de una colección de más de 90 cartas) en cualquier lugar de su territorio en el campo. Más cartas se recogen desbloqueando cofres ganados en la batalla o comprados en la tienda, que a su vez desbloqueará nuevas cartas que los jugadores pueden agregar a sus mazos y/ o subir de nivel las cartas que ya tienen. Cada carta requiere una cierta cantidad de elixir para desplegarse, pero el elixir de los jugadores se regenera con el tiempo. El juego también tiene varios modos de juego, como batallas de escalera, torneos, guerras de clanes, eventos especiales y más.
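Como ilustración de esa mecánica de elixir y coste de cartas, aquí hay un pequeño modelo de juguete en Python (los valores numéricos son supuestos para el ejemplo, no cifras oficiales del juego):

```python
# Modelo de juguete (valores supuestos): el elixir se regenera con el tiempo,
# tiene un tope máximo y cada carta consume su coste al desplegarse.
from dataclasses import dataclass

@dataclass
class Jugador:
    elixir: float = 5.0
    elixir_maximo: float = 10.0
    regeneracion_por_segundo: float = 0.35  # valor supuesto

    def avanzar(self, segundos: float) -> None:
        # El elixir se regenera de forma continua hasta el tope máximo.
        self.elixir = min(self.elixir_maximo,
                          self.elixir + self.regeneracion_por_segundo * segundos)

    def desplegar(self, coste: int) -> bool:
        # Solo se puede jugar la carta si hay elixir suficiente para su coste.
        if self.elixir >= coste:
            self.elixir -= coste
            return True
        return False

jugador = Jugador()
print(jugador.desplegar(4))      # True: había 5 de elixir disponibles
jugador.avanzar(10)              # se regenera elixir durante 10 segundos
print(round(jugador.elixir, 2))  # elixir restante tras la regeneración
```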
-
Los beneficios y riesgos de usar una versión modificada del juego
-
-
-
Nuevas cartas que no están disponibles en el juego original
-
Recursos ilimitados como oro, gemas, elixir y cofres
-
Posibilidad de jugar online con otros jugadores modded o unmodded
-
Posibilidad de personalizar su cubierta, arena y otros ajustes
-
-
Estas características pueden hacer el juego más divertido y emocionante para algunos jugadores que quieren probar nuevas estrategias, experimentar con diferentes combinaciones, o simplemente disfrutar de más opciones y libertad. Sin embargo, también hay algunos riesgos involucrados en el uso de una versión modificada del juego. Algunos de estos riesgos incluyen:
-
-
Malware o virus potenciales que pueden dañar tu dispositivo o robar tu información personal
-
Posibles prohibiciones o suspensiones desde el servidor oficial del juego o la cuenta de Supercell
-
Ventaja injusta o trampa que puede arruinar el equilibrio del juego y la diversión para otros jugadores
-
Falta de actualizaciones o soporte del desarrollador original o del modder
-
-
Por lo tanto, si decide usar una versión modificada del juego, debe hacerlo bajo su propio riesgo y responsabilidad. También debe respetar las reglas y los derechos del desarrollador original y otros jugadores, y no utilizar la versión modificada para cualquier propósito ilegal o poco ético.
-
¿Cuáles son las nuevas tarjetas en Clash Royale Mod APK?
-
Una de las principales atracciones de Clash Royale Mod APK es las nuevas tarjetas que no están disponibles en el juego original. Estas tarjetas son hechas por fans, inspiradas en otros juegos o medios, o basadas en conceptos no utilizados del juego oficial. Pueden añadir más variedad, creatividad y diversión a tu juego. Aquí hay una tabla con algunas de las nuevas cartas, sus estadísticas y sus habilidades:
-
-
-
| Nombre | Tipo | Rareza | Costo del elixir | Hitpoints | Daño | Capacidad |
| --- | --- | --- | --- | --- | --- | --- |
| Caballero dragón | Tropa | Épica | 5 | 1200 | 200 | |
| Mega Horda de Esbirros | Tropa | Raro | 7 | 300 (cada uno) | 150 (cada uno) | Un enjambre de seis mega esbirros que pueden infligir daño masivo a objetivos aéreos y terrestres. |
| Lanzador de barriles de duende | Construcción | Común | 4 | 800 | N/A | Un lanzador estacionario que dispara cañones goblin en las torres del enemigo cada 5 segundos. |
| Imagen de espejo | Hechizo | | | | | |
-
-conexión antes de descargarlo.
Los pasos para instalar el archivo modded en su dispositivo
El siguiente paso es instalar el archivo modded en su dispositivo. Para ello, deberá habilitar la instalación de aplicaciones de fuentes desconocidas en su dispositivo. Esta es una función de seguridad que impide la instalación de aplicaciones que no son de la tienda oficial de aplicaciones o verificadas por el fabricante del dispositivo. Sin embargo, ya que está instalando un archivo modded, necesitará omitir esta función temporalmente. Así es como puede hacerlo:
-
-
-
Vaya a la configuración del dispositivo y busque la opción de seguridad o privacidad.
-
Encontrar la opción que dice "Fuentes desconocidas" o "Permitir la instalación de aplicaciones de fuentes desconocidas" y cambiarlo por.
-
Puede ver un mensaje de advertencia que le informa sobre los riesgos de instalar aplicaciones de fuentes desconocidas. Toque en "OK" o "Permitir" para continuar.
-
-
Una vez que haya habilitado la instalación de aplicaciones desde fuentes desconocidas, puede proceder a instalar el archivo modded. Así es como puede hacerlo:
-
-
Localice el archivo modded que ha descargado en su dispositivo. Debería estar en su carpeta de descargas o en la barra de notificaciones.
-
Toque en el archivo y siga las instrucciones en la pantalla para instalarlo.
-
Puede ver un mensaje que le pide que conceda permisos a la aplicación. Toque en "Permitir" o "Aceptar" para otorgarlos.
-
Espere a que termine el proceso de instalación. Puede tardar unos minutos dependiendo de su dispositivo y la velocidad de Internet.
-
Una vez que se realiza la instalación, verá un mensaje que dice "App instalado" o "Clash Royale Mod APK instalado". Toca "Abrir" o "Iniciar" para iniciar la aplicación.
-
-
Las precauciones a tomar antes y después de instalar el archivo modded
-
-
-
Antes de instalar el archivo modded, asegúrese de haber hecho una copia de seguridad de sus datos y el progreso del juego original. Puedes hacer esto vinculando tu cuenta de juego a un ID de Supercell, Google Play Games o una cuenta de Facebook. De esta manera, puede restaurar sus datos y el progreso si algo sale mal o si desea volver al juego original.
-
Después de instalar el archivo modded, asegúrese de desactivar la instalación de aplicaciones de fuentes desconocidas en su dispositivo. Esto es para evitar que cualquier aplicación no deseada o maliciosa se instale en su dispositivo sin su conocimiento o consentimiento. Puede hacer esto siguiendo los mismos pasos de arriba, pero cambiando la opción en lugar de on.
-
Después de instalar el archivo modded, asegúrese de no actualizar la aplicación desde la tienda de aplicaciones oficial o cualquier otra fuente. Esto se debe a que la actualización de la aplicación puede sobrescribir o eliminar el archivo modded y sus características, y causar errores o fallos. Si ves una notificación que te pide que actualices la aplicación, ignórala o cancélala.
-
Después de instalar el archivo modded, asegúrese de no iniciar sesión con su ID de Supercell, Google Play Games o cuenta de Facebook. Esto se debe a que el inicio de sesión con estas cuentas puede vincular sus datos de juego y el progreso a la versión modificada, y causar problemas con su cuenta de juego original. También puede correr el riesgo de ser prohibido o suspendido por Supercell para el uso de una versión modificada de su juego. En su lugar, usa una cuenta de invitado o crea una nueva cuenta para jugar con la versión modificada.
-
Cómo jugar Clash Royale Mod APK en línea con otros jugadores?
-
-
Las opciones para jugar online con otros jugadores modded o unmodded
-
Hay dos opciones principales para jugar en línea con otros jugadores usando Clash Royale Mod APK: servidores privados y servidores públicos.
-
Los servidores privados son servidores alojados por el modder o un proveedor de terceros, y solo son accesibles por los jugadores que tienen la misma versión modificada del juego. Estos servidores suelen ser libres de unirse, pero pueden tener capacidad, estabilidad o características limitadas. También pueden requerir una contraseña o una invitación para unirse. Los servidores privados son ideales para jugar con tus amigos u otros jugadores que comparten tu interés en la versión modificada del juego. Puede encontrar servidores privados buscando en línea, pidiendo el modder o uniéndose a una comunidad de jugadores modded.
-
Los servidores públicos son servidores alojados por Supercell, el desarrollador original del juego, y son accesibles por todos los jugadores que tienen la versión oficial o cualquier versión modificada del juego. Estos servidores suelen ser más fiables, seguros y actualizados que los servidores privados, pero también tienen más restricciones y riesgos. Los servidores públicos son ideales para jugar con jugadores al azar o para probar tus habilidades contra jugadores sin odded. Sin embargo, debes tener cuidado de no usar ninguna característica que sea exclusiva de la versión modificada del juego, como tarjetas nuevas, recursos ilimitados o configuraciones personalizadas. Esto se debe a que estas características pueden no funcionar correctamente en los servidores públicos, y también pueden ser detectados por el sistema anti-cheat de Supercell, que puede resultar en una prohibición o suspensión del juego.
-
Los consejos y trucos para ganar más batallas y trofeos con las nuevas cartas
-
Ya sea que juegues en servidores privados o públicos, querrás ganar más batallas y trofeos con las nuevas cartas a las que tienes acceso en Clash Royale Mod APK. Aquí hay algunos consejos y trucos para ayudarte a hacerlo:
-
-
-
Construir una cubierta equilibrada y versátil que puede contrarrestar diferentes tipos de enemigos y situaciones. Puedes hacer esto incluyendo cartas que pueden atacar o defenderse contra objetivos aéreos y terrestres, hacer daño de un solo objetivo o salpicar, apoyar o distraer a otras tropas, etc.
-
Usa tu elixir sabiamente y eficientemente. Puedes hacer esto desplegando tus tarjetas en el momento y lugar correctos, evitando sobrecomisionar o subcomunicar tu elixir, ciclando tus tarjetas lo suficientemente rápido como para obtener las que necesitas y administrando tu ventaja o desventaja de elixir.
-
Usa tus nuevas tarjetas de forma creativa y estratégica. Puedes hacer esto sorprendiendo a tu oponente con movimientos inesperados, explotando sus debilidades o errores, adaptándose a sus estrategias o contadores, y creando combos o empujes que son difíciles de detener.
-
Diviértete y disfruta del juego. Puedes hacer esto probando diferentes mazos y modos, desafiándote con oponentes o objetivos más difíciles, uniéndote a un clan o una comunidad de jugadores modded, y compartiendo tus comentarios o sugerencias con el modder.
-
-
Conclusión
-
En conclusión, Clash Royale Mod APK es una versión modificada del popular juego de estrategia en tiempo real de Supercell que le permite acceder a nuevas tarjetas, recursos ilimitados y otras características que no están disponibles en la versión oficial del juego. Puede hacer el juego más divertido y emocionante para algunos jugadores que quieren probar nuevas estrategias, experimentar con diferentes combinaciones o simplemente disfrutar de más opciones y libertad. Sin embargo, también tiene algunos riesgos y limitaciones que debe tener en cuenta antes de descargarlo e instalarlo en su dispositivo. También debe respetar las reglas y los derechos del desarrollador original y otros jugadores, y no utilizar la versión modificada para cualquier propósito ilegal o poco ético.
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes y respuestas relacionadas con Clash Royale Mod APK:
Q: Es Clash Royale Mod APK seguro de usar?
-
A: Clash Royale Mod APK no es un producto oficial de Supercell, y por lo tanto no está respaldado o apoyado por ellos. Tampoco es verificado o probado por ninguna autoridad o plataforma de renombre. Por lo tanto, no hay garantía de que sea seguro de usar, y puede contener malware, virus o archivos falsos que pueden dañar su dispositivo o robar su información personal. Debe usarlo bajo su propio riesgo y responsabilidad, y solo descargarlo de una fuente confiable.
-
Q: ¿Puedo jugar Clash Royale Mod APK offline?
-
A: No, no se puede jugar Clash Royale Mod APK fuera de línea. El juego requiere una conexión a Internet para acceder a los servidores, ya sean privados o públicos. Si intenta jugar el juego sin conexión, verá un mensaje de error que dice "No hay conexión a Internet" o "Error de conexión". Tendrás que conectarte a una red Wi-Fi o de datos móvil estable y segura para jugar.
-
Q: ¿Puedo usar mi cuenta de juego original para jugar Clash Royale Mod APK?
-
A: No, no puede utilizar su cuenta de juego original para jugar Clash Royale Mod APK. Esto se debe a que la versión modificada del juego tiene diferentes características y ajustes que la versión original, y no son compatibles entre sí. Si intentas iniciar sesión con tu cuenta de juego original, puedes encontrar errores, fallos o prohibiciones. Deberías usar una cuenta de invitado o crear una nueva cuenta para jugar con la versión modificada del juego.
-
Q: ¿Cómo puedo actualizar Clash Royale Mod APK?
-
-
Q: ¿Dónde puedo encontrar más información o soporte para Clash Royale Mod APK?
-
A: Puede encontrar más información o soporte para Clash Royale Mod APK visitando el sitio web del modder o la fuente que proporciona el archivo modded. También puede unirse a una comunidad de jugadores modificados en plataformas de redes sociales, foros o grupos de chat. Allí, puedes hacer preguntas, compartir comentarios o reportar problemas relacionados con la versión modificada del juego. Sin embargo, debe tener cuidado de no confiar en ninguna información o soporte que no sea de una fuente confiable o confiable.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpchecksum.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpchecksum.py
deleted file mode 100644
index b0b84b400bc943dafe44c2f91035bf454f0b671c..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpchecksum.py
+++ /dev/null
@@ -1,483 +0,0 @@
-# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-""" The interfaces in this module are not intended for public use.
-
-This module defines interfaces for applying checksums to HTTP requests within
-the context of botocore. This involves both resolving the checksum to be used
-based on client configuration and environment, as well as application of the
-checksum to the request.
-"""
-import base64
-import io
-import logging
-from binascii import crc32
-from hashlib import sha1, sha256
-
-from botocore.compat import HAS_CRT
-from botocore.exceptions import (
- AwsChunkedWrapperError,
- FlexibleChecksumError,
- MissingDependencyException,
-)
-from botocore.response import StreamingBody
-from botocore.utils import (
- conditionally_calculate_md5,
- determine_content_length,
-)
-
-if HAS_CRT:
- from awscrt import checksums as crt_checksums
-else:
- crt_checksums = None
-
-logger = logging.getLogger(__name__)
-
-
-class BaseChecksum:
- _CHUNK_SIZE = 1024 * 1024
-
- def update(self, chunk):
- pass
-
- def digest(self):
- pass
-
- def b64digest(self):
- bs = self.digest()
- return base64.b64encode(bs).decode("ascii")
-
- def _handle_fileobj(self, fileobj):
- start_position = fileobj.tell()
- for chunk in iter(lambda: fileobj.read(self._CHUNK_SIZE), b""):
- self.update(chunk)
- fileobj.seek(start_position)
-
- def handle(self, body):
- if isinstance(body, (bytes, bytearray)):
- self.update(body)
- else:
- self._handle_fileobj(body)
- return self.b64digest()
-
-
-class Crc32Checksum(BaseChecksum):
- def __init__(self):
- self._int_crc32 = 0
-
- def update(self, chunk):
- self._int_crc32 = crc32(chunk, self._int_crc32) & 0xFFFFFFFF
-
- def digest(self):
- return self._int_crc32.to_bytes(4, byteorder="big")
-
-
-class CrtCrc32Checksum(BaseChecksum):
- # Note: This class is only used if the CRT is available
- def __init__(self):
- self._int_crc32 = 0
-
- def update(self, chunk):
- new_checksum = crt_checksums.crc32(chunk, self._int_crc32)
- self._int_crc32 = new_checksum & 0xFFFFFFFF
-
- def digest(self):
- return self._int_crc32.to_bytes(4, byteorder="big")
-
-
-class CrtCrc32cChecksum(BaseChecksum):
- # Note: This class is only used if the CRT is available
- def __init__(self):
- self._int_crc32c = 0
-
- def update(self, chunk):
- new_checksum = crt_checksums.crc32c(chunk, self._int_crc32c)
- self._int_crc32c = new_checksum & 0xFFFFFFFF
-
- def digest(self):
- return self._int_crc32c.to_bytes(4, byteorder="big")
-
-
-class Sha1Checksum(BaseChecksum):
- def __init__(self):
- self._checksum = sha1()
-
- def update(self, chunk):
- self._checksum.update(chunk)
-
- def digest(self):
- return self._checksum.digest()
-
-
-class Sha256Checksum(BaseChecksum):
- def __init__(self):
- self._checksum = sha256()
-
- def update(self, chunk):
- self._checksum.update(chunk)
-
- def digest(self):
- return self._checksum.digest()
-
-
-class AwsChunkedWrapper:
- _DEFAULT_CHUNK_SIZE = 1024 * 1024
-
- def __init__(
- self,
- raw,
- checksum_cls=None,
- checksum_name="x-amz-checksum",
- chunk_size=None,
- ):
- self._raw = raw
- self._checksum_name = checksum_name
- self._checksum_cls = checksum_cls
- self._reset()
-
- if chunk_size is None:
- chunk_size = self._DEFAULT_CHUNK_SIZE
- self._chunk_size = chunk_size
-
- def _reset(self):
- self._remaining = b""
- self._complete = False
- self._checksum = None
- if self._checksum_cls:
- self._checksum = self._checksum_cls()
-
- def seek(self, offset, whence=0):
- if offset != 0 or whence != 0:
- raise AwsChunkedWrapperError(
- error_msg="Can only seek to start of stream"
- )
- self._reset()
- self._raw.seek(0)
-
- def read(self, size=None):
- # Normalize "read all" size values to None
- if size is not None and size <= 0:
- size = None
-
- # If the underlying body is done and we have nothing left then
- # end the stream
- if self._complete and not self._remaining:
- return b""
-
- # While we're not done and want more bytes
- want_more_bytes = size is None or size > len(self._remaining)
- while not self._complete and want_more_bytes:
- self._remaining += self._make_chunk()
- want_more_bytes = size is None or size > len(self._remaining)
-
- # If size was None, we want to return everything
- if size is None:
- size = len(self._remaining)
-
- # Return a chunk up to the size asked for
- to_return = self._remaining[:size]
- self._remaining = self._remaining[size:]
- return to_return
-
- def _make_chunk(self):
- # NOTE: Chunk size is not deterministic as read could return less. This
- # means we cannot know the content length of the encoded aws-chunked
- # stream ahead of time without ensuring a consistent chunk size
- raw_chunk = self._raw.read(self._chunk_size)
- hex_len = hex(len(raw_chunk))[2:].encode("ascii")
- self._complete = not raw_chunk
-
- if self._checksum:
- self._checksum.update(raw_chunk)
-
- if self._checksum and self._complete:
- name = self._checksum_name.encode("ascii")
- checksum = self._checksum.b64digest().encode("ascii")
- return b"0\r\n%s:%s\r\n\r\n" % (name, checksum)
-
- return b"%s\r\n%s\r\n" % (hex_len, raw_chunk)
-
- def __iter__(self):
- while not self._complete:
- yield self._make_chunk()
-
-
-class StreamingChecksumBody(StreamingBody):
- def __init__(self, raw_stream, content_length, checksum, expected):
- super().__init__(raw_stream, content_length)
- self._checksum = checksum
- self._expected = expected
-
- def read(self, amt=None):
- chunk = super().read(amt=amt)
- self._checksum.update(chunk)
- if amt is None or (not chunk and amt > 0):
- self._validate_checksum()
- return chunk
-
- def _validate_checksum(self):
- if self._checksum.digest() != base64.b64decode(self._expected):
- error_msg = (
- f"Expected checksum {self._expected} did not match calculated "
- f"checksum: {self._checksum.b64digest()}"
- )
- raise FlexibleChecksumError(error_msg=error_msg)
-
-
-def resolve_checksum_context(request, operation_model, params):
- resolve_request_checksum_algorithm(request, operation_model, params)
- resolve_response_checksum_algorithms(request, operation_model, params)
-
-
-def resolve_request_checksum_algorithm(
- request,
- operation_model,
- params,
- supported_algorithms=None,
-):
- http_checksum = operation_model.http_checksum
- algorithm_member = http_checksum.get("requestAlgorithmMember")
- if algorithm_member and algorithm_member in params:
- # If the client has opted into using flexible checksums and the
- # request supports it, use that instead of checksum required
- if supported_algorithms is None:
- supported_algorithms = _SUPPORTED_CHECKSUM_ALGORITHMS
-
- algorithm_name = params[algorithm_member].lower()
- if algorithm_name not in supported_algorithms:
- if not HAS_CRT and algorithm_name in _CRT_CHECKSUM_ALGORITHMS:
- raise MissingDependencyException(
- msg=(
- f"Using {algorithm_name.upper()} requires an "
- "additional dependency. You will need to pip install "
- "botocore[crt] before proceeding."
- )
- )
- raise FlexibleChecksumError(
- error_msg="Unsupported checksum algorithm: %s" % algorithm_name
- )
-
- location_type = "header"
- if operation_model.has_streaming_input:
- # Operations with streaming input must support trailers.
- if request["url"].startswith("https:"):
- # We only support unsigned trailer checksums currently. As this
- # disables payload signing we'll only use trailers over TLS.
- location_type = "trailer"
-
- algorithm = {
- "algorithm": algorithm_name,
- "in": location_type,
- "name": "x-amz-checksum-%s" % algorithm_name,
- }
-
- if algorithm["name"] in request["headers"]:
- # If the header is already set by the customer, skip calculation
- return
-
- checksum_context = request["context"].get("checksum", {})
- checksum_context["request_algorithm"] = algorithm
- request["context"]["checksum"] = checksum_context
- elif operation_model.http_checksum_required or http_checksum.get(
- "requestChecksumRequired"
- ):
- # Otherwise apply the old http checksum behavior via Content-MD5
- checksum_context = request["context"].get("checksum", {})
- checksum_context["request_algorithm"] = "conditional-md5"
- request["context"]["checksum"] = checksum_context
-
-
-def apply_request_checksum(request):
- checksum_context = request.get("context", {}).get("checksum", {})
- algorithm = checksum_context.get("request_algorithm")
-
- if not algorithm:
- return
-
- if algorithm == "conditional-md5":
- # Special case to handle the http checksum required trait
- conditionally_calculate_md5(request)
- elif algorithm["in"] == "header":
- _apply_request_header_checksum(request)
- elif algorithm["in"] == "trailer":
- _apply_request_trailer_checksum(request)
- else:
- raise FlexibleChecksumError(
- error_msg="Unknown checksum variant: %s" % algorithm["in"]
- )
-
-
-def _apply_request_header_checksum(request):
- checksum_context = request.get("context", {}).get("checksum", {})
- algorithm = checksum_context.get("request_algorithm")
- location_name = algorithm["name"]
- if location_name in request["headers"]:
- # If the header is already set by the customer, skip calculation
- return
- checksum_cls = _CHECKSUM_CLS.get(algorithm["algorithm"])
- digest = checksum_cls().handle(request["body"])
- request["headers"][location_name] = digest
-
-
-def _apply_request_trailer_checksum(request):
- checksum_context = request.get("context", {}).get("checksum", {})
- algorithm = checksum_context.get("request_algorithm")
- location_name = algorithm["name"]
- checksum_cls = _CHECKSUM_CLS.get(algorithm["algorithm"])
-
- headers = request["headers"]
- body = request["body"]
-
- if location_name in headers:
- # If the header is already set by the customer, skip calculation
- return
-
- headers["Transfer-Encoding"] = "chunked"
- if "Content-Encoding" in headers:
- # We need to preserve the existing content encoding and add
- # aws-chunked as a new content encoding.
- headers["Content-Encoding"] += ",aws-chunked"
- else:
- headers["Content-Encoding"] = "aws-chunked"
- headers["X-Amz-Trailer"] = location_name
-
- content_length = determine_content_length(body)
- if content_length is not None:
- # Send the decoded content length if we can determine it. Some
- # services such as S3 may require the decoded content length
- headers["X-Amz-Decoded-Content-Length"] = str(content_length)
-
- if isinstance(body, (bytes, bytearray)):
- body = io.BytesIO(body)
-
- request["body"] = AwsChunkedWrapper(
- body,
- checksum_cls=checksum_cls,
- checksum_name=location_name,
- )
-
-
-def resolve_response_checksum_algorithms(
- request, operation_model, params, supported_algorithms=None
-):
- http_checksum = operation_model.http_checksum
- mode_member = http_checksum.get("requestValidationModeMember")
- if mode_member and mode_member in params:
- if supported_algorithms is None:
- supported_algorithms = _SUPPORTED_CHECKSUM_ALGORITHMS
- response_algorithms = {
- a.lower() for a in http_checksum.get("responseAlgorithms", [])
- }
-
- usable_algorithms = []
- for algorithm in _ALGORITHMS_PRIORITY_LIST:
- if algorithm not in response_algorithms:
- continue
- if algorithm in supported_algorithms:
- usable_algorithms.append(algorithm)
-
- checksum_context = request["context"].get("checksum", {})
- checksum_context["response_algorithms"] = usable_algorithms
- request["context"]["checksum"] = checksum_context
-
-
-def handle_checksum_body(http_response, response, context, operation_model):
- headers = response["headers"]
- checksum_context = context.get("checksum", {})
- algorithms = checksum_context.get("response_algorithms")
-
- if not algorithms:
- return
-
- for algorithm in algorithms:
- header_name = "x-amz-checksum-%s" % algorithm
- # If the header is not found, check the next algorithm
- if header_name not in headers:
- continue
-
- # If a - is in the checksum this is not valid Base64. S3 returns
- # checksums that include a -# suffix to indicate a checksum derived
- # from the hash of all part checksums. We cannot wrap this response
- if "-" in headers[header_name]:
- continue
-
- if operation_model.has_streaming_output:
- response["body"] = _handle_streaming_response(
- http_response, response, algorithm
- )
- else:
- response["body"] = _handle_bytes_response(
- http_response, response, algorithm
- )
-
- # Expose metadata that the checksum check actually occured
- checksum_context = response["context"].get("checksum", {})
- checksum_context["response_algorithm"] = algorithm
- response["context"]["checksum"] = checksum_context
- return
-
- logger.info(
- f'Skipping checksum validation. Response did not contain one of the '
- f'following algorithms: {algorithms}.'
- )
-
-
-def _handle_streaming_response(http_response, response, algorithm):
- checksum_cls = _CHECKSUM_CLS.get(algorithm)
- header_name = "x-amz-checksum-%s" % algorithm
- return StreamingChecksumBody(
- http_response.raw,
- response["headers"].get("content-length"),
- checksum_cls(),
- response["headers"][header_name],
- )
-
-
-def _handle_bytes_response(http_response, response, algorithm):
- body = http_response.content
- header_name = "x-amz-checksum-%s" % algorithm
- checksum_cls = _CHECKSUM_CLS.get(algorithm)
- checksum = checksum_cls()
- checksum.update(body)
- expected = response["headers"][header_name]
- if checksum.digest() != base64.b64decode(expected):
- error_msg = (
- "Expected checksum %s did not match calculated checksum: %s"
- % (
- expected,
- checksum.b64digest(),
- )
- )
- raise FlexibleChecksumError(error_msg=error_msg)
- return body
-
-
-_CHECKSUM_CLS = {
- "crc32": Crc32Checksum,
- "sha1": Sha1Checksum,
- "sha256": Sha256Checksum,
-}
-_CRT_CHECKSUM_ALGORITHMS = ["crc32", "crc32c"]
-if HAS_CRT:
- # Use CRT checksum implementations if available
- _CRT_CHECKSUM_CLS = {
- "crc32": CrtCrc32Checksum,
- "crc32c": CrtCrc32cChecksum,
- }
- _CHECKSUM_CLS.update(_CRT_CHECKSUM_CLS)
- # Validate this list isn't out of sync with _CRT_CHECKSUM_CLS keys
- assert all(
- name in _CRT_CHECKSUM_ALGORITHMS for name in _CRT_CHECKSUM_CLS.keys()
- )
-_SUPPORTED_CHECKSUM_ALGORITHMS = list(_CHECKSUM_CLS.keys())
-_ALGORITHMS_PRIORITY_LIST = ['crc32c', 'crc32', 'sha1', 'sha256']
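For quick reference, here is a minimal standalone sketch (not botocore code) that mirrors the aws-chunked framing produced by the deleted `AwsChunkedWrapper` above: each chunk is emitted as `<hex length>\r\n<data>\r\n`, and a final zero-length chunk carries the checksum as a trailer header. The CRC32 trailer name and the tiny chunk size are illustrative choices, not botocore defaults.

```python
# Standalone illustration of the aws-chunked body framing with a CRC32 trailer.
import base64
import io
from binascii import crc32

def aws_chunked(body: bytes, chunk_size: int = 8) -> bytes:
    stream = io.BytesIO(body)
    checksum = 0
    out = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            # Final zero-length chunk carries the base64 digest as a trailer header.
            digest = (checksum & 0xFFFFFFFF).to_bytes(4, "big")
            out += b"0\r\nx-amz-checksum-crc32:" + base64.b64encode(digest) + b"\r\n\r\n"
            return out
        checksum = crc32(chunk, checksum) & 0xFFFFFFFF
        # Each data chunk is prefixed with its length in hexadecimal.
        out += hex(len(chunk))[2:].encode("ascii") + b"\r\n" + chunk + b"\r\n"

print(aws_chunked(b"hello world"))
```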
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/monitoring.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/monitoring.py
deleted file mode 100644
index 71d7230246b034f1a66f69b7a050a433b0ab9d13..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/monitoring.py
+++ /dev/null
@@ -1,586 +0,0 @@
-# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import json
-import logging
-import re
-import time
-
-from botocore.compat import ensure_bytes, ensure_unicode, urlparse
-from botocore.retryhandler import EXCEPTION_MAP as RETRYABLE_EXCEPTIONS
-
-logger = logging.getLogger(__name__)
-
-
-class Monitor:
- _EVENTS_TO_REGISTER = [
- 'before-parameter-build',
- 'request-created',
- 'response-received',
- 'after-call',
- 'after-call-error',
- ]
-
- def __init__(self, adapter, publisher):
- """Abstraction for monitoring clients API calls
-
- :param adapter: An adapter that takes event emitter events
- and produces monitor events
-
- :param publisher: A publisher for generated monitor events
- """
- self._adapter = adapter
- self._publisher = publisher
-
- def register(self, event_emitter):
- """Register an event emitter to the monitor"""
- for event_to_register in self._EVENTS_TO_REGISTER:
- event_emitter.register_last(event_to_register, self.capture)
-
- def capture(self, event_name, **payload):
- """Captures an incoming event from the event emitter
-
- It will feed an event emitter event to the monitor's adaptor to create
- a monitor event and then publish that event to the monitor's publisher.
- """
- try:
- monitor_event = self._adapter.feed(event_name, payload)
- if monitor_event:
- self._publisher.publish(monitor_event)
- except Exception as e:
- logger.debug(
- 'Exception %s raised by client monitor in handling event %s',
- e,
- event_name,
- exc_info=True,
- )
-
-
-class MonitorEventAdapter:
- def __init__(self, time=time.time):
- """Adapts event emitter events to produce monitor events
-
- :type time: callable
- :param time: A callable that produces the current time
- """
- self._time = time
-
- def feed(self, emitter_event_name, emitter_payload):
- """Feed an event emitter event to generate a monitor event
-
- :type emitter_event_name: str
- :param emitter_event_name: The name of the event emitted
-
- :type emitter_payload: dict
-        :param emitter_payload: The payload associated with the emitted
-            event
-
- :rtype: BaseMonitorEvent
- :returns: A monitor event based on the event emitter events
- fired
- """
- return self._get_handler(emitter_event_name)(**emitter_payload)
-
- def _get_handler(self, event_name):
- return getattr(
- self, '_handle_' + event_name.split('.')[0].replace('-', '_')
- )
-
- def _handle_before_parameter_build(self, model, context, **kwargs):
- context['current_api_call_event'] = APICallEvent(
- service=model.service_model.service_id,
- operation=model.wire_name,
- timestamp=self._get_current_time(),
- )
-
- def _handle_request_created(self, request, **kwargs):
- context = request.context
- new_attempt_event = context[
- 'current_api_call_event'
- ].new_api_call_attempt(timestamp=self._get_current_time())
- new_attempt_event.request_headers = request.headers
- new_attempt_event.url = request.url
- context['current_api_call_attempt_event'] = new_attempt_event
-
- def _handle_response_received(
- self, parsed_response, context, exception, **kwargs
- ):
- attempt_event = context.pop('current_api_call_attempt_event')
- attempt_event.latency = self._get_latency(attempt_event)
- if parsed_response is not None:
- attempt_event.http_status_code = parsed_response[
- 'ResponseMetadata'
- ]['HTTPStatusCode']
- attempt_event.response_headers = parsed_response[
- 'ResponseMetadata'
- ]['HTTPHeaders']
- attempt_event.parsed_error = parsed_response.get('Error')
- else:
- attempt_event.wire_exception = exception
- return attempt_event
-
- def _handle_after_call(self, context, parsed, **kwargs):
- context['current_api_call_event'].retries_exceeded = parsed[
- 'ResponseMetadata'
- ].get('MaxAttemptsReached', False)
- return self._complete_api_call(context)
-
- def _handle_after_call_error(self, context, exception, **kwargs):
- # If the after-call-error was emitted and the error being raised
- # was a retryable connection error, then the retries must have exceeded
- # for that exception as this event gets emitted **after** retries
- # happen.
- context[
- 'current_api_call_event'
- ].retries_exceeded = self._is_retryable_exception(exception)
- return self._complete_api_call(context)
-
- def _is_retryable_exception(self, exception):
- return isinstance(
- exception, tuple(RETRYABLE_EXCEPTIONS['GENERAL_CONNECTION_ERROR'])
- )
-
- def _complete_api_call(self, context):
- call_event = context.pop('current_api_call_event')
- call_event.latency = self._get_latency(call_event)
- return call_event
-
- def _get_latency(self, event):
- return self._get_current_time() - event.timestamp
-
- def _get_current_time(self):
- return int(self._time() * 1000)
-
-
-class BaseMonitorEvent:
- def __init__(self, service, operation, timestamp):
- """Base monitor event
-
- :type service: str
- :param service: A string identifying the service associated to
- the event
-
- :type operation: str
- :param operation: A string identifying the operation of service
- associated to the event
-
- :type timestamp: int
- :param timestamp: Epoch time in milliseconds from when the event began
- """
- self.service = service
- self.operation = operation
- self.timestamp = timestamp
-
- def __repr__(self):
- return f'{self.__class__.__name__}({self.__dict__!r})'
-
- def __eq__(self, other):
- if isinstance(other, self.__class__):
- return self.__dict__ == other.__dict__
- return False
-
-
-class APICallEvent(BaseMonitorEvent):
- def __init__(
- self,
- service,
- operation,
- timestamp,
- latency=None,
- attempts=None,
- retries_exceeded=False,
- ):
- """Monitor event for a single API call
-
- This event corresponds to a single client method call, which includes
-        every HTTP request attempt made in order to complete the client call
-
- :type service: str
- :param service: A string identifying the service associated to
- the event
-
- :type operation: str
- :param operation: A string identifying the operation of service
- associated to the event
-
- :type timestamp: int
- :param timestamp: Epoch time in milliseconds from when the event began
-
- :type latency: int
- :param latency: The time in milliseconds to complete the client call
-
- :type attempts: list
- :param attempts: The list of APICallAttempts associated to the
- APICall
-
- :type retries_exceeded: bool
- :param retries_exceeded: True if API call exceeded retries. False
- otherwise
- """
- super().__init__(
- service=service, operation=operation, timestamp=timestamp
- )
- self.latency = latency
- self.attempts = attempts
- if attempts is None:
- self.attempts = []
- self.retries_exceeded = retries_exceeded
-
- def new_api_call_attempt(self, timestamp):
- """Instantiates APICallAttemptEvent associated to the APICallEvent
-
- :type timestamp: int
- :param timestamp: Epoch time in milliseconds to associate to the
- APICallAttemptEvent
- """
- attempt_event = APICallAttemptEvent(
- service=self.service, operation=self.operation, timestamp=timestamp
- )
- self.attempts.append(attempt_event)
- return attempt_event
-
-
-class APICallAttemptEvent(BaseMonitorEvent):
- def __init__(
- self,
- service,
- operation,
- timestamp,
- latency=None,
- url=None,
- http_status_code=None,
- request_headers=None,
- response_headers=None,
- parsed_error=None,
- wire_exception=None,
- ):
- """Monitor event for a single API call attempt
-
- This event corresponds to a single HTTP request attempt in completing
- the entire client method call.
-
- :type service: str
- :param service: A string identifying the service associated to
- the event
-
- :type operation: str
- :param operation: A string identifying the operation of service
- associated to the event
-
- :type timestamp: int
- :param timestamp: Epoch time in milliseconds from when the HTTP request
- started
-
- :type latency: int
- :param latency: The time in milliseconds to complete the HTTP request
- whether it succeeded or failed
-
- :type url: str
- :param url: The URL the attempt was sent to
-
- :type http_status_code: int
- :param http_status_code: The HTTP status code of the HTTP response
- if there was a response
-
- :type request_headers: dict
- :param request_headers: The HTTP headers sent in making the HTTP
- request
-
- :type response_headers: dict
- :param response_headers: The HTTP headers returned in the HTTP response
- if there was a response
-
- :type parsed_error: dict
- :param parsed_error: The error parsed if the service returned an
- error back
-
- :type wire_exception: Exception
- :param wire_exception: The exception raised in sending the HTTP
- request (i.e. ConnectionError)
- """
- super().__init__(
- service=service, operation=operation, timestamp=timestamp
- )
- self.latency = latency
- self.url = url
- self.http_status_code = http_status_code
- self.request_headers = request_headers
- self.response_headers = response_headers
- self.parsed_error = parsed_error
- self.wire_exception = wire_exception
-
-
-class CSMSerializer:
- _MAX_CLIENT_ID_LENGTH = 255
- _MAX_EXCEPTION_CLASS_LENGTH = 128
- _MAX_ERROR_CODE_LENGTH = 128
- _MAX_USER_AGENT_LENGTH = 256
- _MAX_MESSAGE_LENGTH = 512
- _RESPONSE_HEADERS_TO_EVENT_ENTRIES = {
- 'x-amzn-requestid': 'XAmznRequestId',
- 'x-amz-request-id': 'XAmzRequestId',
- 'x-amz-id-2': 'XAmzId2',
- }
- _AUTH_REGEXS = {
- 'v4': re.compile(
- r'AWS4-HMAC-SHA256 '
-            r'Credential=(?P<access_key>\w+)/\d+/'
-            r'(?P<signing_region>[a-z0-9-]+)/'
-        ),
-        's3': re.compile(r'AWS (?P<access_key>\w+):'),
- }
- _SERIALIZEABLE_EVENT_PROPERTIES = [
- 'service',
- 'operation',
- 'timestamp',
- 'attempts',
- 'latency',
- 'retries_exceeded',
- 'url',
- 'request_headers',
- 'http_status_code',
- 'response_headers',
- 'parsed_error',
- 'wire_exception',
- ]
-
- def __init__(self, csm_client_id):
- """Serializes monitor events to CSM (Client Side Monitoring) format
-
- :type csm_client_id: str
- :param csm_client_id: The application identifier to associate
- to the serialized events
- """
- self._validate_client_id(csm_client_id)
- self.csm_client_id = csm_client_id
-
- def _validate_client_id(self, csm_client_id):
- if len(csm_client_id) > self._MAX_CLIENT_ID_LENGTH:
- raise ValueError(
- f'The value provided for csm_client_id: {csm_client_id} exceeds '
- f'the maximum length of {self._MAX_CLIENT_ID_LENGTH} characters'
- )
-
- def serialize(self, event):
- """Serializes a monitor event to the CSM format
-
- :type event: BaseMonitorEvent
- :param event: The event to serialize to bytes
-
- :rtype: bytes
- :returns: The CSM serialized form of the event
- """
- event_dict = self._get_base_event_dict(event)
- event_type = self._get_event_type(event)
- event_dict['Type'] = event_type
- for attr in self._SERIALIZEABLE_EVENT_PROPERTIES:
- value = getattr(event, attr, None)
- if value is not None:
- getattr(self, '_serialize_' + attr)(
- value, event_dict, event_type=event_type
- )
- return ensure_bytes(json.dumps(event_dict, separators=(',', ':')))
-
- def _get_base_event_dict(self, event):
- return {
- 'Version': 1,
- 'ClientId': self.csm_client_id,
- }
-
- def _serialize_service(self, service, event_dict, **kwargs):
- event_dict['Service'] = service
-
- def _serialize_operation(self, operation, event_dict, **kwargs):
- event_dict['Api'] = operation
-
- def _serialize_timestamp(self, timestamp, event_dict, **kwargs):
- event_dict['Timestamp'] = timestamp
-
- def _serialize_attempts(self, attempts, event_dict, **kwargs):
- event_dict['AttemptCount'] = len(attempts)
- if attempts:
- self._add_fields_from_last_attempt(event_dict, attempts[-1])
-
- def _add_fields_from_last_attempt(self, event_dict, last_attempt):
- if last_attempt.request_headers:
- # It does not matter which attempt to use to grab the region
- # for the ApiCall event, but SDKs typically do the last one.
- region = self._get_region(last_attempt.request_headers)
- if region is not None:
- event_dict['Region'] = region
- event_dict['UserAgent'] = self._get_user_agent(
- last_attempt.request_headers
- )
- if last_attempt.http_status_code is not None:
- event_dict['FinalHttpStatusCode'] = last_attempt.http_status_code
- if last_attempt.parsed_error is not None:
- self._serialize_parsed_error(
- last_attempt.parsed_error, event_dict, 'ApiCall'
- )
- if last_attempt.wire_exception is not None:
- self._serialize_wire_exception(
- last_attempt.wire_exception, event_dict, 'ApiCall'
- )
-
- def _serialize_latency(self, latency, event_dict, event_type):
- if event_type == 'ApiCall':
- event_dict['Latency'] = latency
- elif event_type == 'ApiCallAttempt':
- event_dict['AttemptLatency'] = latency
-
- def _serialize_retries_exceeded(
- self, retries_exceeded, event_dict, **kwargs
- ):
- event_dict['MaxRetriesExceeded'] = 1 if retries_exceeded else 0
-
- def _serialize_url(self, url, event_dict, **kwargs):
- event_dict['Fqdn'] = urlparse(url).netloc
-
- def _serialize_request_headers(
- self, request_headers, event_dict, **kwargs
- ):
- event_dict['UserAgent'] = self._get_user_agent(request_headers)
- if self._is_signed(request_headers):
- event_dict['AccessKey'] = self._get_access_key(request_headers)
- region = self._get_region(request_headers)
- if region is not None:
- event_dict['Region'] = region
- if 'X-Amz-Security-Token' in request_headers:
- event_dict['SessionToken'] = request_headers[
- 'X-Amz-Security-Token'
- ]
-
- def _serialize_http_status_code(
- self, http_status_code, event_dict, **kwargs
- ):
- event_dict['HttpStatusCode'] = http_status_code
-
- def _serialize_response_headers(
- self, response_headers, event_dict, **kwargs
- ):
- for header, entry in self._RESPONSE_HEADERS_TO_EVENT_ENTRIES.items():
- if header in response_headers:
- event_dict[entry] = response_headers[header]
-
- def _serialize_parsed_error(
- self, parsed_error, event_dict, event_type, **kwargs
- ):
- field_prefix = 'Final' if event_type == 'ApiCall' else ''
- event_dict[field_prefix + 'AwsException'] = self._truncate(
- parsed_error['Code'], self._MAX_ERROR_CODE_LENGTH
- )
- event_dict[field_prefix + 'AwsExceptionMessage'] = self._truncate(
- parsed_error['Message'], self._MAX_MESSAGE_LENGTH
- )
-
- def _serialize_wire_exception(
- self, wire_exception, event_dict, event_type, **kwargs
- ):
- field_prefix = 'Final' if event_type == 'ApiCall' else ''
- event_dict[field_prefix + 'SdkException'] = self._truncate(
- wire_exception.__class__.__name__, self._MAX_EXCEPTION_CLASS_LENGTH
- )
- event_dict[field_prefix + 'SdkExceptionMessage'] = self._truncate(
- str(wire_exception), self._MAX_MESSAGE_LENGTH
- )
-
- def _get_event_type(self, event):
- if isinstance(event, APICallEvent):
- return 'ApiCall'
- elif isinstance(event, APICallAttemptEvent):
- return 'ApiCallAttempt'
-
- def _get_access_key(self, request_headers):
- auth_val = self._get_auth_value(request_headers)
- _, auth_match = self._get_auth_match(auth_val)
- return auth_match.group('access_key')
-
- def _get_region(self, request_headers):
- if not self._is_signed(request_headers):
- return None
- auth_val = self._get_auth_value(request_headers)
- signature_version, auth_match = self._get_auth_match(auth_val)
- if signature_version != 'v4':
- return None
- return auth_match.group('signing_region')
-
- def _get_user_agent(self, request_headers):
- return self._truncate(
- ensure_unicode(request_headers.get('User-Agent', '')),
- self._MAX_USER_AGENT_LENGTH,
- )
-
- def _is_signed(self, request_headers):
- return 'Authorization' in request_headers
-
- def _get_auth_value(self, request_headers):
- return ensure_unicode(request_headers['Authorization'])
-
- def _get_auth_match(self, auth_val):
- for signature_version, regex in self._AUTH_REGEXS.items():
- match = regex.match(auth_val)
- if match:
- return signature_version, match
- return None, None
-
- def _truncate(self, text, max_length):
- if len(text) > max_length:
- logger.debug(
-                'Truncating following value to maximum length of %s: %s',
-                max_length,
-                text,
- )
- return text[:max_length]
- return text
-
-
-class SocketPublisher:
- _MAX_MONITOR_EVENT_LENGTH = 8 * 1024
-
- def __init__(self, socket, host, port, serializer):
- """Publishes monitor events to a socket
-
- :type socket: socket.socket
- :param socket: The socket object to use to publish events
-
- :type host: string
- :param host: The host to send events to
-
- :type port: integer
- :param port: The port on the host to send events to
-
- :param serializer: The serializer to use to serialize the event
- to a form that can be published to the socket. This must
- have a `serialize()` method that accepts a monitor event
-            and returns bytes
- """
- self._socket = socket
- self._address = (host, port)
- self._serializer = serializer
-
- def publish(self, event):
- """Publishes a specified monitor event
-
- :type event: BaseMonitorEvent
- :param event: The monitor event to be sent
- over the publisher's socket to the desired address.
- """
- serialized_event = self._serializer.serialize(event)
- if len(serialized_event) > self._MAX_MONITOR_EVENT_LENGTH:
- logger.debug(
- 'Serialized event of size %s exceeds the maximum length '
- 'allowed: %s. Not sending event to socket.',
- len(serialized_event),
- self._MAX_MONITOR_EVENT_LENGTH,
- )
- return
- self._socket.sendto(serialized_event, self._address)
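
The deleted monitoring module splits client-side monitoring into three roles: an adapter that turns botocore emitter events into `APICallEvent`/`APICallAttemptEvent` objects, a serializer that renders them as CSM JSON, and a publisher that ships each event as a UDP datagram. A rough sketch of how those pieces fit together is shown below; the class names match the file above, but the manual wiring and the host/port values are illustrative, not the way botocore's client configuration actually assembles them.

```python
import socket

from botocore.monitoring import (
    CSMSerializer,
    Monitor,
    MonitorEventAdapter,
    SocketPublisher,
)

# The serializer stamps every event with the configured client id.
serializer = CSMSerializer(csm_client_id="my-application")

# The publisher sends each serialized event as a UDP datagram.
udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
publisher = SocketPublisher(
    udp_socket, host="127.0.0.1", port=31000, serializer=serializer
)

# The adapter converts emitter payloads into monitor events, and the
# Monitor glues adapter and publisher together.
monitor = Monitor(adapter=MonitorEventAdapter(), publisher=publisher)

# In botocore this registration happens during client creation; the client's
# event emitter would then drive Monitor.capture() for every API call:
# monitor.register(client.meta.events)
```
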
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/__init__.py
deleted file mode 100644
index 3c50c5dcfeeda2efed282200a5c5cc8c5f7542f7..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from .__about__ import (
- __author__,
- __copyright__,
- __email__,
- __license__,
- __summary__,
- __title__,
- __uri__,
- __version__,
-)
-
-__all__ = [
- "__title__",
- "__summary__",
- "__uri__",
- "__version__",
- "__author__",
- "__email__",
- "__license__",
- "__copyright__",
-]
diff --git a/spaces/Boilin/URetinex-Net/test.py b/spaces/Boilin/URetinex-Net/test.py
deleted file mode 100644
index f4c4aa52fed3a2435a9ba63adbc1131c49bd5def..0000000000000000000000000000000000000000
--- a/spaces/Boilin/URetinex-Net/test.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import argparse
-import torch
-import torch.nn as nn
-from network.Math_Module import P, Q
-from network.decom import Decom
-import os
-#import torchvision
-import torchvision.transforms as transforms
-from PIL import Image
-import time
-from utils import *
-import cv2
-
-def one2three(x):
- return torch.cat([x, x, x], dim=1).to(x)
-
-
-class Inference(nn.Module):
- def __init__(self, opts):
- super().__init__()
- self.opts = opts
- # loading decomposition model
- self.model_Decom_low = Decom()
- self.model_Decom_low = load_initialize(self.model_Decom_low,
- self.opts.Decom_model_low_path)
- # loading R; old_model_opts; and L model
- self.unfolding_opts, self.model_R, self.model_L = load_unfolding(
- self.opts.unfolding_model_path)
- # loading adjustment model
- self.adjust_model = load_adjustment(self.opts.adjust_model_path)
- self.P = P()
- self.Q = Q()
- transform = [
- transforms.ToTensor(),
- ]
- self.transform = transforms.Compose(transform)
- print(self.model_Decom_low)
- print(self.model_R)
- print(self.model_L)
- print(self.adjust_model)
- #time.sleep(8)
-
- def unfolding(self, input_low_img):
- for t in range(self.unfolding_opts.round):
- if t == 0: # initialize R0, L0
- P, Q = self.model_Decom_low(input_low_img)
- else: # update P and Q
- w_p = (self.unfolding_opts.gamma +
- self.unfolding_opts.Roffset * t)
- w_q = (self.unfolding_opts.lamda +
- self.unfolding_opts.Loffset * t)
- P = self.P(I=input_low_img, Q=Q, R=R, gamma=w_p)
- Q = self.Q(I=input_low_img, P=P, L=L, lamda=w_q)
- R = self.model_R(r=P, l=Q)
- L = self.model_L(l=Q)
- return R, L
-
- def lllumination_adjust(self, L, ratio):
- ratio = torch.ones(L.shape) * self.opts.ratio
- return self.adjust_model(l=L, alpha=ratio)
-
- def forward(self, input_low_img):
- # if not torch.cuda.is_available():
- # input_low_img = input_low_img.cuda()
- with torch.no_grad():
- start = time.time()
- R, L = self.unfolding(input_low_img)
- High_L = self.lllumination_adjust(L, self.opts.ratio)
- I_enhance = High_L * R
- p_time = (time.time() - start)
- return I_enhance, p_time
-
- def run(self, low_img_path):
- file_name = os.path.basename(self.opts.img_path)
- name = file_name.split('.')[0]
- low_img = self.transform(Image.open(low_img_path)).unsqueeze(0)
-
-# print('**************************************************************************')
-# print(low_img)
-# print(type(low_img))
-# print(type(Image.open(low_img_path)))
-# print(Image.open(low_img_path))
-
- enhance, p_time = self.forward(input_low_img=low_img)
- if not os.path.exists(self.opts.output):
- os.makedirs(self.opts.output)
- save_path = os.path.join(
- self.opts.output,
- file_name.replace(name,
- "%s_%d_URetinexNet" % (name, self.opts.ratio)))
- np_save_TensorImg(enhance, save_path)
- print(
- "================================= time for %s: %f============================"
- % (file_name, p_time))
-
-
-
-    # This is my own modified version of the run function
-    # It avoids writing the image out to disk
-    # It could later be changed to also save the result to disk
- def runForWeb(self, image):
- # 首先对输入的图片进行下采样直到符合最低运行像素限制
- max_pixel_limit=600*600
- pyr_down_times=0
- while True:
- a=len(image)
- b=len(image[0])
- c=a*b
- if(c<=max_pixel_limit):
- break
- pyr_down_times+=1
- image=cv2.pyrDown(image)
-
- print(image.shape)
-        # Input
- low_img = self.transform(Image.fromarray(np.uint8(image))).unsqueeze(0)
-
-
- # low_img=Image.fromarray(image.astype('uint8')).convert('RGB')
- # print('#############################################')
- # print(type(low_img))
- # print(low_img)
-
-
-        # Run inference
- enhance, p_time = self.forward(input_low_img=low_img)
-
- # print('UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU')
-
-        # Output
-        # The result-returning helper in utils.py needs a small change here; see where
-        # np_save_TensorImg is used in the run function above for the spot to change
-        # Upsample the result to restore the original image size
- result_image=result_for_gradio(enhance)
- for i in range(pyr_down_times):
- result_image=cv2.pyrUp(result_image)
- # return result_for_gradio(enhance)
- print(result_image.shape)
- return result_image
-
-
-# This is the entry point called by the gradio framework
-# The gradio framework handles the backend control flow and the frontend page display
-def functionForGradio(image):
- parser = argparse.ArgumentParser(description='Configure')
- # specify your data path here!
- parser.add_argument('--img_path', type=str, default="./demo/input/3.png")
- parser.add_argument('--output', type=str, default="./demo/output")
-    # ratio is recommended to be 3-5; a bigger ratio will lead to over-exposure
- parser.add_argument('--ratio', type=int, default=5)
- # model path
- parser.add_argument('--Decom_model_low_path',
- type=str,
- default="./ckpt/init_low.pth")
- parser.add_argument('--unfolding_model_path',
- type=str,
- default="./ckpt/unfolding.pth")
- parser.add_argument('--adjust_model_path',
- type=str,
- default="./ckpt/L_adjust.pth")
- parser.add_argument('--gpu_id', type=int, default=0)
-
- opts = parser.parse_args()
- for k, v in vars(opts).items():
- print(k, v)
-
- os.environ['CUDA_VISIBLE_DEVICES'] = str(opts.gpu_id)
- model = Inference(opts)
-
-    # Pass the numpy array in and start inference
- return model.runForWeb(image)
-
-
-# This is the algorithm's original main function; the gradio entry point above was adapted from it
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description='Configure')
- # specify your data path here!
- parser.add_argument('--img_path', type=str, default="./demo/input/test3.jpg")
- parser.add_argument('--output', type=str, default="./demo/output")
-    # ratio is recommended to be 3-5; a bigger ratio will lead to over-exposure
- parser.add_argument('--ratio', type=int, default=5)
- # model path
- parser.add_argument('--Decom_model_low_path',
- type=str,
- default="./ckpt/init_low.pth")
- parser.add_argument('--unfolding_model_path',
- type=str,
- default="./ckpt/unfolding.pth")
- parser.add_argument('--adjust_model_path',
- type=str,
- default="./ckpt/L_adjust.pth")
- parser.add_argument('--gpu_id', type=int, default=0)
-
- opts = parser.parse_args()
- for k, v in vars(opts).items():
- print(k, v)
-
- os.environ['CUDA_VISIBLE_DEVICES'] = str(opts.gpu_id)
- model = Inference(opts)
- model.run(opts.img_path)
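
`runForWeb` above keeps large inputs tractable by repeatedly halving the image with `cv2.pyrDown` until it fits a pixel budget, then applying the same number of `cv2.pyrUp` calls to the enhanced output to restore the original size. The following is a self-contained sketch of that downsample-then-restore pattern; the 600*600 budget mirrors the code above, while the function names are invented for the example.

```python
import cv2
import numpy as np


def shrink_to_budget(image: np.ndarray, max_pixels: int = 600 * 600):
    """Halve the image with pyrDown until it fits within the pixel budget."""
    times = 0
    while image.shape[0] * image.shape[1] > max_pixels:
        image = cv2.pyrDown(image)
        times += 1
    return image, times


def restore_size(image: np.ndarray, times: int) -> np.ndarray:
    """Apply pyrUp as many times as pyrDown was applied."""
    for _ in range(times):
        image = cv2.pyrUp(image)
    return image


small, n = shrink_to_budget(np.zeros((2000, 1500, 3), dtype=np.uint8))
print(small.shape, n)    # (500, 375, 3) after two halvings
restored = restore_size(small, n)
print(restored.shape)    # (2000, 1500, 3)
```

Note that the round trip is exact only when the original dimensions divide cleanly by 2**times; odd sizes come back slightly larger, a caveat the deleted code shares.
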
diff --git a/spaces/CVPR/BigDL-Nano_inference/app.py b/spaces/CVPR/BigDL-Nano_inference/app.py
deleted file mode 100644
index 468066d5d078715f3cf3dfdda952a39ac233ec50..0000000000000000000000000000000000000000
--- a/spaces/CVPR/BigDL-Nano_inference/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-# Part of the code in this file is adapted from
-# https://github.com/rnwzd/FSPBT-Image-Translation/blob/master/eval.py and
-# https://github.com/rnwzd/FSPBT-Image-Translation/blob/master/train.py
-
-# MIT License
-
-# Copyright (c) 2022 Lorenzo Breschi
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-import gradio as gr
-import numpy as np
-import time
-from data import write_image_tensor, PatchDataModule, prepare_data, image2tensor, tensor2image
-import torch
-from tqdm import tqdm
-from bigdl.nano.pytorch import InferenceOptimizer
-from torch.utils.data import DataLoader
-from pathlib import Path
-from torch.utils.data import Dataset
-import datetime
-import huggingface_hub
-
-
-device = 'cpu'
-dtype = torch.float32
-MODEL_REPO = 'CVPR/FSPBT'
-ckpt_path = huggingface_hub.hf_hub_download(
- MODEL_REPO, 'generator.pt')
-generator = torch.load(ckpt_path)
-generator.eval()
-generator.to(device, dtype)
-params = {'batch_size': 1,
- 'num_workers': 0}
-
-
-class ImageDataset(Dataset):
- def __init__(self, img):
- self.imgs = [image2tensor(img)]
- def __getitem__(self, idx: int) -> dict:
- return self.imgs[idx]
-
- def __len__(self) -> int:
- return len(self.imgs)
-
-
-data_path = Path('data')
-train_image_dd = prepare_data(data_path)
-dm = PatchDataModule(train_image_dd, patch_size=2**6,
- batch_size=2**3, patch_num=2**6)
-
-# quantize model
-train_loader = dm.train_dataloader()
-train_loader_iter = iter(train_loader)
-quantized_model = InferenceOptimizer.quantize(generator,
- accelerator=None,
- calib_dataloader=train_loader)
-
-
-def original_transfer(input_img):
- w, h, _ = input_img.shape
- print(datetime.datetime.now())
- print("input size: ", w, h)
- # resize too large image
- if w > 3000 or h > 3000:
- ratio = min(3000 / w, 3000 / h)
- w = int(w * ratio)
- h = int(h * ratio)
- if w % 4 != 0 or h % 4 != 0:
- NW = int((w // 4) * 4)
- NH = int((h // 4) * 4)
- input_img = np.resize(input_img,(NW,NH,3))
- st = time.perf_counter()
- dataset = ImageDataset(input_img)
- loader = DataLoader(dataset, **params)
- with torch.no_grad():
- for inputs in tqdm(loader):
- inputs = inputs.to(device, dtype)
- st = time.perf_counter()
- outputs = generator(inputs)
- ori_time = time.perf_counter() - st
- ori_time = "{:.3f}s".format(ori_time)
- ori_image = np.array(tensor2image(outputs[0]))
- del inputs
- del outputs
- return ori_image, ori_time
-
-def nano_transfer(input_img):
- w, h, _ = input_img.shape
- print(datetime.datetime.now())
- print("input size: ", w, h)
- # resize too large image
- if w > 3000 or h > 3000:
- ratio = min(3000 / w, 3000 / h)
- w = int(w * ratio)
- h = int(h * ratio)
- if w % 4 != 0 or h % 4 != 0:
- NW = int((w // 4) * 4)
- NH = int((h // 4) * 4)
- input_img = np.resize(input_img,(NW,NH,3))
- st = time.perf_counter()
- dataset = ImageDataset(input_img)
- loader = DataLoader(dataset, **params)
- with torch.no_grad():
- for inputs in tqdm(loader):
- inputs = inputs.to(device, dtype)
- st = time.perf_counter()
- outputs = quantized_model(inputs)
- nano_time = time.perf_counter() - st
- nano_time = "{:.3f}s".format(nano_time)
- nano_image = np.array(tensor2image(outputs[0]))
- del inputs
- del outputs
- return nano_image, nano_time
-
-
-def clear():
- return None, None, None, None
-
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("
BigDL-Nano inference demo
")
- with gr.Row().style(equal_height=False):
- with gr.Column():
- gr.Markdown('''
-    Overview
-
- BigDL-Nano is a library in [BigDL 2.0](https://github.com/intel-analytics/BigDL) that allows the users to transparently accelerate their deep learning pipelines (including data processing, training and inference) by automatically integrating optimized libraries, best-known configurations, and software optimizations.
-
- The video on the right shows how the user can easily enable quantization using BigDL-Nano (with just a couple of lines of code); you may refer to our [CVPR 2022 demo paper](https://arxiv.org/abs/2204.01715) for more details.
- ''')
- with gr.Column():
- gr.Video(value="data/nano_quantize_api.mp4")
- gr.Markdown('''
-    Demo
-
-    This section uses an image stylization example to demonstrate the speedup of the above code when using quantization in BigDL-Nano (about 2~3x inference time speedup).
- The demo is adapted from the original [FSPBT-Image-Translation code](https://github.com/rnwzd/FSPBT-Image-Translation),
- and the default image is from [the COCO dataset](https://cocodataset.org/#home).
- ''')
- with gr.Row().style(equal_height=False):
- input_img = gr.Image(label="input image", value="data/COCO_image.jpg", source="upload")
- with gr.Column():
- ori_but = gr.Button("Standard PyTorch")
- nano_but = gr.Button("BigDL-Nano")
- clear_but = gr.Button("Clear Output")
- with gr.Row().style(equal_height=False):
- with gr.Column():
- ori_time = gr.Text(label="Standard PyTorch latency")
- ori_image = gr.Image(label="Standard PyTorch output image")
- with gr.Column():
- nano_time = gr.Text(label="BigDL-Nano latency")
- nano_image = gr.Image(label="BigDL-Nano output image")
-
- ori_but.click(original_transfer, inputs=input_img, outputs=[ori_image, ori_time])
- nano_but.click(nano_transfer, inputs=input_img, outputs=[nano_image, nano_time])
- clear_but.click(clear, inputs=None, outputs=[ori_image, ori_time, nano_image, nano_time])
-
-
-demo.launch(share=True, enable_queue=True)
\ No newline at end of file
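
The two Gradio handlers above differ only in which model runs the forward pass, so the advertised speedup comes down to timing `generator` against `quantized_model` on the same batch. Below is a hedged timing sketch: `time_inference` is a made-up helper, and the commented lines assume the `generator`, `quantized_model`, and a prepared input batch from the file above.

```python
import time

import torch


def time_inference(model, inputs, warmup: int = 2, runs: int = 5) -> float:
    """Return the average latency in seconds of model(inputs) over several runs."""
    with torch.no_grad():
        for _ in range(warmup):      # warm-up iterations are not timed
            model(inputs)
        start = time.perf_counter()
        for _ in range(runs):
            model(inputs)
    return (time.perf_counter() - start) / runs


# Illustrative comparison (requires the objects defined in app.py):
# fp32_latency = time_inference(generator, inputs)
# int8_latency = time_inference(quantized_model, inputs)
# print(f"speedup: {fp32_latency / int8_latency:.2f}x")
```
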
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/fpn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/fpn.py
deleted file mode 100644
index 7b967318f63421e71613154565bd5f8f7d9b8312..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/fpn.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import math
-import fvcore.nn.weight_init as weight_init
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-
-from .backbone import Backbone
-from .build import BACKBONE_REGISTRY
-from .resnet import build_resnet_backbone
-
-__all__ = ["build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"]
-
-
-class FPN(Backbone):
- """
- This module implements Feature Pyramid Network.
- It creates pyramid features built on top of some input feature maps.
- """
-
- def __init__(
- self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum"
- ):
- """
- Args:
- bottom_up (Backbone): module representing the bottom up subnetwork.
- Must be a subclass of :class:`Backbone`. The multi-scale feature
- maps generated by the bottom up network, and listed in `in_features`,
- are used to generate FPN levels.
- in_features (list[str]): names of the input feature maps coming
- from the backbone to which FPN is attached. For example, if the
- backbone produces ["res2", "res3", "res4"], any *contiguous* sublist
- of these may be used; order must be from high to low resolution.
- out_channels (int): number of channels in the output feature maps.
- norm (str): the normalization to use.
- top_block (nn.Module or None): if provided, an extra operation will
- be performed on the output of the last (smallest resolution)
- FPN output, and the result will extend the result list. The top_block
- further downsamples the feature map. It must have an attribute
- "num_levels", meaning the number of extra FPN levels added by
- this block, and "in_feature", which is a string representing
- its input feature (e.g., p5).
- fuse_type (str): types for fusing the top down features and the lateral
- ones. It can be "sum" (default), which sums up element-wise; or "avg",
- which takes the element-wise mean of the two.
- """
- super(FPN, self).__init__()
- assert isinstance(bottom_up, Backbone)
-
- # Feature map strides and channels from the bottom up network (e.g. ResNet)
- input_shapes = bottom_up.output_shape()
- in_strides = [input_shapes[f].stride for f in in_features]
- in_channels = [input_shapes[f].channels for f in in_features]
-
- _assert_strides_are_log2_contiguous(in_strides)
- lateral_convs = []
- output_convs = []
-
- use_bias = norm == ""
- for idx, in_channels in enumerate(in_channels):
- lateral_norm = get_norm(norm, out_channels)
- output_norm = get_norm(norm, out_channels)
-
- lateral_conv = Conv2d(
- in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm
- )
- output_conv = Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=use_bias,
- norm=output_norm,
- )
- weight_init.c2_xavier_fill(lateral_conv)
- weight_init.c2_xavier_fill(output_conv)
- stage = int(math.log2(in_strides[idx]))
- self.add_module("fpn_lateral{}".format(stage), lateral_conv)
- self.add_module("fpn_output{}".format(stage), output_conv)
-
- lateral_convs.append(lateral_conv)
- output_convs.append(output_conv)
- # Place convs into top-down order (from low to high resolution)
- # to make the top-down computation in forward clearer.
- self.lateral_convs = lateral_convs[::-1]
- self.output_convs = output_convs[::-1]
- self.top_block = top_block
- self.in_features = in_features
- self.bottom_up = bottom_up
-        # Return feature names are "p<stage>", like ["p2", "p3", ..., "p6"]
- self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in in_strides}
- # top block output feature maps.
- if self.top_block is not None:
- for s in range(stage, stage + self.top_block.num_levels):
- self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
-
- self._out_features = list(self._out_feature_strides.keys())
- self._out_feature_channels = {k: out_channels for k in self._out_features}
- self._size_divisibility = in_strides[-1]
- assert fuse_type in {"avg", "sum"}
- self._fuse_type = fuse_type
-
- @property
- def size_divisibility(self):
- return self._size_divisibility
-
- def forward(self, x):
- """
- Args:
- input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to
- feature map tensor for each feature level in high to low resolution order.
-
- Returns:
- dict[str->Tensor]:
- mapping from feature map name to FPN feature map tensor
- in high to low resolution order. Returned feature names follow the FPN
- paper convention: "p", where stage has stride = 2 ** stage e.g.,
- ["p2", "p3", ..., "p6"].
- """
- # Reverse feature maps into top-down order (from low to high resolution)
- bottom_up_features = self.bottom_up(x)
- x = [bottom_up_features[f] for f in self.in_features[::-1]]
- results = []
- prev_features = self.lateral_convs[0](x[0])
- results.append(self.output_convs[0](prev_features))
- for features, lateral_conv, output_conv in zip(
- x[1:], self.lateral_convs[1:], self.output_convs[1:]
- ):
- top_down_features = F.interpolate(prev_features, scale_factor=2, mode="nearest")
- lateral_features = lateral_conv(features)
- prev_features = lateral_features + top_down_features
- if self._fuse_type == "avg":
- prev_features /= 2
- results.insert(0, output_conv(prev_features))
-
- if self.top_block is not None:
- top_block_in_feature = bottom_up_features.get(self.top_block.in_feature, None)
- if top_block_in_feature is None:
- top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)]
- results.extend(self.top_block(top_block_in_feature))
- assert len(self._out_features) == len(results)
- return dict(zip(self._out_features, results))
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
-
-def _assert_strides_are_log2_contiguous(strides):
- """
- Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2".
- """
- for i, stride in enumerate(strides[1:], 1):
- assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format(
- stride, strides[i - 1]
- )
-
-
-class LastLevelMaxPool(nn.Module):
- """
- This module is used in the original FPN to generate a downsampled
- P6 feature from P5.
- """
-
- def __init__(self):
- super().__init__()
- self.num_levels = 1
- self.in_feature = "p5"
-
- def forward(self, x):
- return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)]
-
-
-class LastLevelP6P7(nn.Module):
- """
- This module is used in RetinaNet to generate extra layers, P6 and P7 from
- C5 feature.
- """
-
- def __init__(self, in_channels, out_channels, in_feature="res5"):
- super().__init__()
- self.num_levels = 2
- self.in_feature = in_feature
- self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
- self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
- for module in [self.p6, self.p7]:
- weight_init.c2_xavier_fill(module)
-
- def forward(self, c5):
- p6 = self.p6(c5)
- p7 = self.p7(F.relu(p6))
- return [p6, p7]
-
-
-@BACKBONE_REGISTRY.register()
-def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelMaxPool(),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-
-@BACKBONE_REGISTRY.register()
-def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- in_channels_p6p7 = bottom_up.output_shape()["res5"].channels
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7(in_channels_p6p7, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
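
The heart of `FPN.forward` above is the top-down pass: project the coarsest feature with a 1x1 lateral conv, then at each step upsample by 2, add the next lateral projection, and run a 3x3 output conv per level. Below is a stripped-down PyTorch sketch of just that pathway; it uses fixed channel counts and skips the norm layers, weight init, and top block, so it mirrors the structure of the deleted module rather than reproducing detectron2's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyFPN(nn.Module):
    """Top-down FPN pathway over features listed from high to low resolution."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in in_channels]
        )
        self.output = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels]
        )

    def forward(self, feats):
        # feats: [c3, c4, c5], ordered from high to low resolution
        prev = self.lateral[-1](feats[-1])        # start at the coarsest level
        results = [self.output[-1](prev)]
        for i in range(len(feats) - 2, -1, -1):   # walk toward higher resolution
            top_down = F.interpolate(prev, scale_factor=2, mode="nearest")
            prev = self.lateral[i](feats[i]) + top_down
            results.insert(0, self.output[i](prev))
        return results


feats = [torch.randn(1, 256, 8, 8), torch.randn(1, 512, 4, 4), torch.randn(1, 1024, 2, 2)]
print([tuple(p.shape) for p in TinyFPN()(feats)])
# [(1, 256, 8, 8), (1, 256, 4, 4), (1, 256, 2, 2)]
```
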
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/adjacent_difference.h
deleted file mode 100644
index 648ddba3e9bea6bf2f7b4c7a8b1b8fc330ac1818..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/adjacent_difference.h
+++ /dev/null
@@ -1,540 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-
-template
-__host__ __device__ OutputIterator
-adjacent_difference(
- const thrust::detail::execution_policy_base &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- BinaryFunction binary_op);
-
-namespace cuda_cub {
-
-namespace __adjacent_difference {
-
- namespace mpl = thrust::detail::mpl::math;
-
- template
- struct PtxPolicy
- {
- enum
- {
- BLOCK_THREADS = _BLOCK_THREADS,
- ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
- ITEMS_PER_TILE = BLOCK_THREADS * ITEMS_PER_THREAD
- };
-
- static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM;
- static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER;
- static const cub::BlockStoreAlgorithm STORE_ALGORITHM = _STORE_ALGORITHM;
- };
-
- template
- struct items_per_thread
- {
- enum
- {
- value = (INPUT_SIZE <= 8)
- ? NOMINAL_4B_ITEMS_PER_THREAD
- : mpl::min<
- int,
- NOMINAL_4B_ITEMS_PER_THREAD,
- mpl::max::value>::value
- };
- };
-
- template
- struct Tuning;
-
- template
- struct Tuning
- {
- enum
- {
- INPUT_SIZE = sizeof(T),
- NOMINAL_4B_ITEMS_PER_THREAD = 7,
- ITEMS_PER_THREAD = items_per_thread::value
- };
- typedef PtxPolicy<128,
- ITEMS_PER_THREAD,
- cub::BLOCK_LOAD_WARP_TRANSPOSE,
- cub::LOAD_DEFAULT,
- cub::BLOCK_STORE_WARP_TRANSPOSE>
- type;
- };
- template
- struct Tuning : Tuning
- {
- enum
- {
- NOMINAL_4B_ITEMS_PER_THREAD = 7,
- ITEMS_PER_THREAD = items_per_thread::value
- };
- typedef PtxPolicy<128,
- ITEMS_PER_THREAD,
- cub::BLOCK_LOAD_WARP_TRANSPOSE,
- cub::LOAD_LDG,
- cub::BLOCK_STORE_WARP_TRANSPOSE>
- type;
- };
-
- template
- struct AdjacentDifferenceAgent
- {
- typedef typename iterator_traits::value_type input_type;
-
- // XXX output type must be result of BinaryOp(input_type,input_type);
- typedef input_type output_type;
-
- template
- struct PtxPlan : Tuning::type
- {
- typedef Tuning tuning;
-
- typedef typename core::LoadIterator::type LoadIt;
- typedef typename core::BlockLoad::type BlockLoad;
-
- typedef typename core::BlockStore::type
- BlockStore;
-
- typedef cub::BlockAdjacentDifference
- BlockAdjacentDifference;
-
- union TempStorage
- {
- typename BlockAdjacentDifference::TempStorage discontinuity;
- typename BlockLoad::TempStorage load;
- typename BlockStore::TempStorage store;
- }; // union TempStorage
- }; // struct PtxPlan
-
- typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan;
-
- typedef typename ptx_plan::LoadIt LoadIt;
- typedef typename ptx_plan::BlockLoad BlockLoad;
- typedef typename ptx_plan::BlockStore BlockStore;
- typedef typename ptx_plan::BlockAdjacentDifference BlockAdjacentDifference;
- typedef typename ptx_plan::TempStorage TempStorage;
-
-
- enum
- {
- ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
- BLOCK_THREADS = ptx_plan::BLOCK_THREADS,
- ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE,
- };
-
- struct impl
- {
-
- //---------------------------------------------------------------------
- // Per-thread fields
- //---------------------------------------------------------------------
-
- TempStorage &temp_storage;
- LoadIt load_it; // iterator to the first element
- input_type * first_tile_previous; // iterator to the first element of previous tile value
- OutputIt output_it;
- BinaryOp binary_op;
-
- template
- void THRUST_DEVICE_FUNCTION
- consume_tile_impl(int num_remaining,
- int tile_idx,
- Size tile_base)
- {
- input_type input[ITEMS_PER_THREAD];
- input_type input_prev[ITEMS_PER_THREAD];
- output_type output[ITEMS_PER_THREAD];
-
- if (IS_LAST_TILE)
- {
- // Fill last elements with the first element
- // because collectives are not suffix guarded
- BlockLoad(temp_storage.load)
- .Load(load_it + tile_base,
- input,
- num_remaining,
- *(load_it + tile_base));
- }
- else
- {
- BlockLoad(temp_storage.load).Load(load_it + tile_base, input);
- }
-
-
- core::sync_threadblock();
-
- if (IS_FIRST_TILE)
- {
- BlockAdjacentDifference(temp_storage.discontinuity)
- .FlagHeads(output, input, input_prev, binary_op);
- if (threadIdx.x == 0)
- output[0] = input[0];
- }
- else
- {
- input_type tile_prev_input = first_tile_previous[tile_idx];
- BlockAdjacentDifference(temp_storage.discontinuity)
- .FlagHeads(output, input, input_prev, binary_op, tile_prev_input);
- }
-
- core::sync_threadblock();
-
- if (IS_LAST_TILE)
- {
- BlockStore(temp_storage.store)
- .Store(output_it + tile_base, output, num_remaining);
- }
- else
- {
- BlockStore(temp_storage.store).Store(output_it + tile_base, output);
- }
- }
-
-
- template
- void THRUST_DEVICE_FUNCTION
- consume_tile(int num_remaining,
- int tile_idx,
- Size tile_base)
- {
- if (tile_idx == 0)
- {
- consume_tile_impl(num_remaining,
- tile_idx,
- tile_base);
- }
- else
- {
- consume_tile_impl(num_remaining,
- tile_idx,
- tile_base);
- }
- }
-
- void THRUST_DEVICE_FUNCTION
- consume_range(Size num_items)
- {
- int tile_idx = blockIdx.x;
- Size tile_base = static_cast(tile_idx) * ITEMS_PER_TILE;
- Size num_remaining = num_items - tile_base;
-
- if (num_remaining > ITEMS_PER_TILE) // not a last tile
- {
- consume_tile(num_remaining, tile_idx, tile_base);
- }
- else if (num_remaining > 0)
- {
- consume_tile(num_remaining, tile_idx, tile_base);
- }
- }
-
- //---------------------------------------------------------------------
- // Constructor
- //---------------------------------------------------------------------
-
- THRUST_DEVICE_FUNCTION
- impl(TempStorage &temp_storage_,
- InputIt input_it_,
- input_type * first_tile_previous_,
- OutputIt result_,
- BinaryOp binary_op_,
- Size num_items)
- : temp_storage(temp_storage_),
- load_it(core::make_load_iterator(ptx_plan(), input_it_)),
- first_tile_previous(first_tile_previous_),
- output_it(result_),
- binary_op(binary_op_)
- {
- consume_range(num_items);
- }
- }; // struct impl
-
- //---------------------------------------------------------------------
- // Agent entry point
- //---------------------------------------------------------------------
-
- THRUST_AGENT_ENTRY(InputIt first,
- input_type *first_element,
- OutputIt result,
- BinaryOp binary_op,
- Size num_items,
- char * shmem)
- {
- TempStorage &storage = *reinterpret_cast(shmem);
- impl(storage, first, first_element, result, binary_op, num_items);
- }
- }; // struct AdjacentDifferenceAgent
-
- template
- struct InitAgent
- {
- template
- struct PtxPlan : PtxPolicy<128> {};
- typedef core::specialize_plan ptx_plan;
-
- //---------------------------------------------------------------------
- // Agent entry point
- //---------------------------------------------------------------------
-
- THRUST_AGENT_ENTRY(InputIt first,
- OutputIt result,
- Size num_tiles,
- int items_per_tile,
- char * /*shmem*/)
- {
- int tile_idx = blockIdx.x * blockDim.x + threadIdx.x;
- Size tile_base = static_cast(tile_idx) * items_per_tile;
- if (tile_base > 0 && tile_idx < num_tiles)
- result[tile_idx] = first[tile_base - 1];
- }
- }; // struct InitAgent
-
- template
- cudaError_t THRUST_RUNTIME_FUNCTION
- doit_step(void * d_temp_storage,
- size_t & temp_storage_bytes,
- InputIt first,
- OutputIt result,
- BinaryOp binary_op,
- Size num_items,
- cudaStream_t stream,
- bool debug_sync)
- {
- if (num_items == 0)
- return cudaSuccess;
-
- using core::AgentPlan;
- using core::AgentLauncher;
-
- cudaError_t status = cudaSuccess;
-
- typedef AgentLauncher<
- AdjacentDifferenceAgent >
- difference_agent;
-
- typedef typename iterator_traits::value_type input_type;
- typedef AgentLauncher > init_agent;
-
- AgentPlan difference_plan = difference_agent::get_plan(stream);
- AgentPlan init_plan = init_agent::get_plan();
-
-
- Size tile_size = difference_plan.items_per_tile;
- Size num_tiles = (num_items + tile_size - 1) / tile_size;
-
- size_t tmp1 = num_tiles * sizeof(input_type);
- size_t vshmem_size = core::vshmem_size(difference_plan.shared_memory_size,
- num_tiles);
-
- size_t allocation_sizes[2] = {tmp1, vshmem_size};
- void * allocations[2] = {NULL, NULL};
-
- status = core::alias_storage(d_temp_storage,
- temp_storage_bytes,
- allocations,
- allocation_sizes);
- CUDA_CUB_RET_IF_FAIL(status);
-
- if (d_temp_storage == NULL)
- {
- return status;
- }
-
- input_type *first_tile_previous = (input_type *)allocations[0];
- char *vshmem_ptr = vshmem_size > 0 ? (char *)allocations[1] : NULL;
-
- init_agent ia(init_plan, num_tiles, stream, "adjacent_difference::init_agent", debug_sync);
- ia.launch(first, first_tile_previous, num_tiles, tile_size);
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
-
- difference_agent da(difference_plan, num_items, stream, vshmem_ptr, "adjacent_difference::difference_agent", debug_sync);
- da.launch(first,
- first_tile_previous,
- result,
- binary_op,
- num_items);
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
- return status;
- }
-
- template
- OutputIt THRUST_RUNTIME_FUNCTION
- adjacent_difference(execution_policy& policy,
- InputIt first,
- InputIt last,
- OutputIt result,
- BinaryOp binary_op)
- {
- typedef typename iterator_traits::difference_type size_type;
-
- size_type num_items = thrust::distance(first, last);
- size_t storage_size = 0;
- cudaStream_t stream = cuda_cub::stream(policy);
- bool debug_sync = THRUST_DEBUG_SYNC_FLAG;
-
- cudaError_t status;
- THRUST_INDEX_TYPE_DISPATCH(status, doit_step, num_items,
- (NULL, storage_size, first, result, binary_op,
- num_items_fixed, stream, debug_sync));
- cuda_cub::throw_on_error(status, "adjacent_difference failed on 1st step");
-
- // Allocate temporary storage.
- thrust::detail::temporary_array
- tmp(policy, storage_size);
- void *ptr = static_cast(tmp.data().get());
-
- THRUST_INDEX_TYPE_DISPATCH(status, doit_step, num_items,
- (ptr, storage_size, first, result, binary_op,
- num_items_fixed, stream, debug_sync));
- cuda_cub::throw_on_error(status, "adjacent_difference failed on 2nd step");
-
- status = cuda_cub::synchronize(policy);
- cuda_cub::throw_on_error(status, "adjacent_difference failed to synchronize");
-
- return result + num_items;
- }
-
-} // namespace __adjacent_difference
-
-//-------------------------
-// Thrust API entry points
-//-------------------------
-
-__thrust_exec_check_disable__
-template
-OutputIt __host__ __device__
-adjacent_difference(execution_policy &policy,
- InputIt first,
- InputIt last,
- OutputIt result,
- BinaryOp binary_op)
-{
- OutputIt ret = result;
- if (__THRUST_HAS_CUDART__)
- {
- ret = __adjacent_difference::adjacent_difference(policy,
- first,
- last,
- result,
- binary_op);
- }
- else
- {
-#if !__THRUST_HAS_CUDART__
- ret = thrust::adjacent_difference(cvt_to_seq(derived_cast(policy)),
- first,
- last,
- result,
- binary_op);
-#endif
- }
-
- return ret;
-}
-
-template
-OutputIt __host__ __device__
-adjacent_difference(execution_policy &policy,
- InputIt first,
- InputIt last,
- OutputIt result)
-{
- typedef typename iterator_traits::value_type input_type;
- return cuda_cub::adjacent_difference(policy,
- first,
- last,
- result,
- minus());
-}
-
-
-} // namespace cuda_cub
-} // end namespace thrust
-
-//
-#include
-#include
-#endif
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/set_operations.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/set_operations.h
deleted file mode 100644
index 421fa8a4bd955706497d0c9b30614035ccbbc46f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/set_operations.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits set_operations
-#include <thrust/system/cpp/detail/set_operations.h>
-
diff --git a/spaces/CVPR/WALT/mmdet/models/necks/bfp.py b/spaces/CVPR/WALT/mmdet/models/necks/bfp.py
deleted file mode 100644
index 123f5515ab6b51867d5781aa1572a0810670235f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/necks/bfp.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, xavier_init
-from mmcv.cnn.bricks import NonLocal2d
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class BFP(nn.Module):
- """BFP (Balanced Feature Pyramids)
-
-    BFP takes multi-level features as inputs and gathers them into a single one,
- then refine the gathered feature and scatter the refined results to
- multi-level features. This module is used in Libra R-CNN (CVPR 2019), see
- the paper `Libra R-CNN: Towards Balanced Learning for Object Detection
- `_ for details.
-
- Args:
- in_channels (int): Number of input channels (feature maps of all levels
- should have the same channels).
- num_levels (int): Number of input feature levels.
- conv_cfg (dict): The config dict for convolution layers.
- norm_cfg (dict): The config dict for normalization layers.
- refine_level (int): Index of integration and refine level of BSF in
- multi-level features from bottom to top.
- refine_type (str): Type of the refine op, currently support
- [None, 'conv', 'non_local'].
- """
-
- def __init__(self,
- in_channels,
- num_levels,
- refine_level=2,
- refine_type=None,
- conv_cfg=None,
- norm_cfg=None):
- super(BFP, self).__init__()
- assert refine_type in [None, 'conv', 'non_local']
-
- self.in_channels = in_channels
- self.num_levels = num_levels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
-
- self.refine_level = refine_level
- self.refine_type = refine_type
- assert 0 <= self.refine_level < self.num_levels
-
- if self.refine_type == 'conv':
- self.refine = ConvModule(
- self.in_channels,
- self.in_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- elif self.refine_type == 'non_local':
- self.refine = NonLocal2d(
- self.in_channels,
- reduction=1,
- use_scale=False,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
-
- def init_weights(self):
- """Initialize the weights of FPN module."""
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- """Forward function."""
- assert len(inputs) == self.num_levels
-
- # step 1: gather multi-level features by resize and average
- feats = []
- gather_size = inputs[self.refine_level].size()[2:]
- for i in range(self.num_levels):
- if i < self.refine_level:
- gathered = F.adaptive_max_pool2d(
- inputs[i], output_size=gather_size)
- else:
- gathered = F.interpolate(
- inputs[i], size=gather_size, mode='nearest')
- feats.append(gathered)
-
- bsf = sum(feats) / len(feats)
-
- # step 2: refine gathered features
- if self.refine_type is not None:
- bsf = self.refine(bsf)
-
- # step 3: scatter refined features to multi-levels by a residual path
- outs = []
- for i in range(self.num_levels):
- out_size = inputs[i].size()[2:]
- if i < self.refine_level:
- residual = F.interpolate(bsf, size=out_size, mode='nearest')
- else:
- residual = F.adaptive_max_pool2d(bsf, output_size=out_size)
- outs.append(residual + inputs[i])
-
- return tuple(outs)
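
BFP's forward pass above follows a gather-refine-scatter pattern: every level is resized to the `refine_level` resolution, the resized maps are averaged into one balanced feature, that feature is optionally refined, and it is then resized back and added to each input as a residual. A minimal functional sketch of that flow, with the refine op omitted and illustrative shapes:

```python
import torch
import torch.nn.functional as F


def balanced_feature_residuals(inputs, refine_level=2):
    """Gather multi-level features, average them, and scatter back as residuals."""
    gather_size = inputs[refine_level].shape[2:]
    gathered = [
        F.adaptive_max_pool2d(x, gather_size) if i < refine_level
        else F.interpolate(x, size=gather_size, mode="nearest")
        for i, x in enumerate(inputs)
    ]
    bsf = sum(gathered) / len(gathered)   # the balanced semantic feature
    outs = []
    for i, x in enumerate(inputs):
        out_size = x.shape[2:]
        residual = (
            F.interpolate(bsf, size=out_size, mode="nearest")
            if i < refine_level
            else F.adaptive_max_pool2d(bsf, out_size)
        )
        outs.append(x + residual)
    return outs


levels = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8, 4)]   # P2..P6
print([tuple(o.shape) for o in balanced_feature_residuals(levels)])
# input shapes are preserved; each level just gains the balanced residual
```
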
diff --git a/spaces/CVPR/winoground-explorer/app.py b/spaces/CVPR/winoground-explorer/app.py
deleted file mode 100644
index 5bf7865ed324378302d611a36457ef81ddb32ace..0000000000000000000000000000000000000000
--- a/spaces/CVPR/winoground-explorer/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from datasets import load_dataset
-import gradio as gr
-import os
-import random
-
-auth_token = os.environ.get("token")
-winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"]
-
-def func(index):
- example = winoground[index]
- return example["image_0"], example["caption_0"], example["image_1"], example["caption_1"]
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("# Slide across the slider to see various examples from WinoGround")
-
- with gr.Column():
- slider = gr.Slider(minimum=0, maximum=400)
- with gr.Row():
- index = random.choice(range(0, 400))
- with gr.Column():
- image_input_1 = gr.Image(value=winoground[index]["image_0"])
- text_input_1 = gr.Textbox(value=winoground[index]["caption_0"])
- with gr.Column():
- image_input_2 = gr.Image(value=winoground[index]["image_1"])
- text_input_2 = gr.Textbox(value=winoground[index]["caption_1"])
-
- slider.change(func, inputs=[slider], outputs=[image_input_1, text_input_1, image_input_2, text_input_2])
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_selenium.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_selenium.py
deleted file mode 100644
index 11bdfeb1f1630fc6ff6f55d68e8d7233281c5098..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_selenium.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""Selenium web scraping module."""
-from __future__ import annotations
-
-import logging
-from pathlib import Path
-from sys import platform
-
-from bs4 import BeautifulSoup
-from selenium import webdriver
-from selenium.webdriver.chrome.options import Options as ChromeOptions
-from selenium.webdriver.common.by import By
-from selenium.webdriver.firefox.options import Options as FirefoxOptions
-from selenium.webdriver.remote.webdriver import WebDriver
-from selenium.webdriver.safari.options import Options as SafariOptions
-from selenium.webdriver.support import expected_conditions as EC
-from selenium.webdriver.support.wait import WebDriverWait
-from webdriver_manager.chrome import ChromeDriverManager
-from webdriver_manager.firefox import GeckoDriverManager
-
-import autogpt.processing.text as summary
-from autogpt.config import Config
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-FILE_DIR = Path(__file__).parent.parent
-CFG = Config()
-
-
-def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
- """Browse a website and return the answer and links to the user
-
- Args:
- url (str): The url of the website to browse
- question (str): The question asked by the user
-
- Returns:
- Tuple[str, WebDriver]: The answer and links to the user and the webdriver
- """
- driver, text = scrape_text_with_selenium(url)
- add_header(driver)
- summary_text = summary.summarize_text(url, text, question, driver)
- links = scrape_links_with_selenium(driver, url)
-
- # Limit links to 5
- if len(links) > 5:
- links = links[:5]
- close_browser(driver)
- return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver
-
-
-def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
- """Scrape text from a website using selenium
-
- Args:
- url (str): The url of the website to scrape
-
- Returns:
- Tuple[WebDriver, str]: The webdriver and the text scraped from the website
- """
- logging.getLogger("selenium").setLevel(logging.CRITICAL)
-
- options_available = {
- "chrome": ChromeOptions,
- "safari": SafariOptions,
- "firefox": FirefoxOptions,
- }
-
- options = options_available[CFG.selenium_web_browser]()
- options.add_argument(
- "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
- )
-
- if CFG.selenium_web_browser == "firefox":
- driver = webdriver.Firefox(
- executable_path=GeckoDriverManager().install(), options=options
- )
- elif CFG.selenium_web_browser == "safari":
- # Requires a bit more setup on the users end
- # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
- driver = webdriver.Safari(options=options)
- else:
- if platform == "linux" or platform == "linux2":
- options.add_argument("--disable-dev-shm-usage")
- options.add_argument("--remote-debugging-port=9222")
-
- options.add_argument("--no-sandbox")
- if CFG.selenium_headless:
- options.add_argument("--headless")
- options.add_argument("--disable-gpu")
-
- driver = webdriver.Chrome(
- executable_path=ChromeDriverManager().install(), options=options
- )
- driver.get(url)
-
- WebDriverWait(driver, 10).until(
- EC.presence_of_element_located((By.TAG_NAME, "body"))
- )
-
- # Get the HTML content directly from the browser's DOM
- page_source = driver.execute_script("return document.body.outerHTML;")
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return driver, text
-
-
-def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]:
- """Scrape links from a website using selenium
-
- Args:
- driver (WebDriver): The webdriver to use to scrape the links
-
- Returns:
- List[str]: The links scraped from the website
- """
- page_source = driver.page_source
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
-
- return format_hyperlinks(hyperlinks)
-
-
-def close_browser(driver: WebDriver) -> None:
- """Close the browser
-
- Args:
- driver (WebDriver): The webdriver to close
-
- Returns:
- None
- """
- driver.quit()
-
-
-def add_header(driver: WebDriver) -> None:
- """Add a header to the website
-
- Args:
- driver (WebDriver): The webdriver to use to add the header
-
- Returns:
- None
- """
- driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())
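As a hedged usage sketch, the lower-level helpers above can be driven without the full Auto-GPT loop (browse_website additionally needs the text-summarization step). This assumes this Auto-GPT fork is on the import path, that Chrome plus webdriver-manager are available, and that CFG picks up its defaults; the URL is purely illustrative.

```python
from autogpt.commands.web_selenium import (
    scrape_text_with_selenium,
    scrape_links_with_selenium,
    close_browser,
)

url = "https://example.com"  # illustrative target
driver, text = scrape_text_with_selenium(url)    # launches the browser selected by CFG
links = scrape_links_with_selenium(driver, url)  # ["link text (absolute url)", ...]
close_browser(driver)

print(text[:200])
print(links[:5])
```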
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_openpose.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_openpose.py
deleted file mode 100644
index a4dd2aa4a97d9526e239633e95fdd0d6162ffe9d..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/app_openpose.py
+++ /dev/null
@@ -1,104 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from utils import randomize_seed_fn
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- preprocessor_name = gr.Radio(label='Preprocessor',
- choices=['Openpose', 'None'],
- type='value',
- value='Openpose')
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- preprocess_resolution = gr.Slider(
- label='Preprocess resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- preprocess_resolution,
- num_steps,
- guidance_scale,
- seed,
- preprocessor_name,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name='openpose',
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='Openpose')
- demo = create_demo(model.process_openpose)
- demo.queue().launch()
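For context, create_demo above only builds the UI; it expects a process callable taking the eleven inputs wired into `inputs` and returning a list of images for the gallery. A rough sketch, if appended to the file above, with a stub standing in for the real ControlNet model (the stub and its blank outputs are assumptions, not part of the app):

```python
import numpy as np

def fake_process(image, prompt, a_prompt, n_prompt, num_samples,
                 image_resolution, preprocess_resolution, num_steps,
                 guidance_scale, seed, preprocessor_name):
    # Stand-in for Model.process_openpose: return blank frames of the requested size.
    return [np.zeros((int(image_resolution), int(image_resolution), 3), dtype=np.uint8)
            for _ in range(int(num_samples))]

demo = create_demo(fake_process, max_images=4, default_num_images=1)
demo.queue().launch()
```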
diff --git a/spaces/Clebersla/RVC_V2_Huggingface_Version/vc_infer_pipeline.py b/spaces/Clebersla/RVC_V2_Huggingface_Version/vc_infer_pipeline.py
deleted file mode 100644
index a0b50d4c703b7638d7c951c9d820a1e59c275fc3..0000000000000000000000000000000000000000
--- a/spaces/Clebersla/RVC_V2_Huggingface_Version/vc_infer_pipeline.py
+++ /dev/null
@@ -1,646 +0,0 @@
-import numpy as np, parselmouth, torch, pdb, sys, os
-from time import time as ttime
-import torch.nn.functional as F
-import torchcrepe # Fork feature. Use the crepe f0 algorithm. New dependency (pip install torchcrepe)
-from torch import Tensor
-import scipy.signal as signal
-import pyworld, traceback, faiss, librosa # os, torchcrepe and scipy.signal are already imported above
-from functools import lru_cache
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate): # data1/sr1: input audio, data2/sr2: converted output audio, rate: weight given to data2's own RMS
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(
- y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
- ) # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
- self.sr = 16000 # hubert input sampling rate
- self.window = 160 # samples per frame
- self.t_pad = self.sr * self.x_pad # padding added before and after each segment
- self.t_pad_tgt = tgt_sr * self.x_pad
- self.t_pad2 = self.t_pad * 2
- self.t_query = self.sr * self.x_query # search window around each candidate cut point
- self.t_center = self.sr * self.x_center # spacing of candidate cut points
- self.t_max = self.sr * self.x_max # duration threshold below which no cut-point search is done
- self.device = config.device
-
- # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device)
- def get_optimal_torch_device(self, index: int = 0) -> torch.device:
- # Get cuda device
- if torch.cuda.is_available():
- return torch.device(
- f"cuda:{index % torch.cuda.device_count()}"
- ) # Very fast
- elif torch.backends.mps.is_available():
- return torch.device("mps")
- # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library
- # Else wise return the "cpu" as a torch device,
- return torch.device("cpu")
-
- # Fork Feature: Compute f0 with the crepe method
- def get_f0_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- p_len,
- hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time.
- model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full
- ):
- x = x.astype(
- np.float32
- ) # fixes the F.conv2D exception. We needed to convert double to float.
- x /= np.quantile(np.abs(x), 0.999)
- torch_device = self.get_optimal_torch_device()
- audio = torch.from_numpy(x).to(torch_device, copy=True)
- audio = torch.unsqueeze(audio, dim=0)
- if audio.ndim == 2 and audio.shape[0] > 1:
- audio = torch.mean(audio, dim=0, keepdim=True).detach()
- audio = audio.detach()
- print("Initiating prediction with a crepe_hop_length of: " + str(hop_length))
- pitch: Tensor = torchcrepe.predict(
- audio,
- self.sr,
- hop_length,
- f0_min,
- f0_max,
- model,
- batch_size=hop_length * 2,
- device=torch_device,
- pad=True,
- )
- p_len = p_len or x.shape[0] // hop_length
- # Resize the pitch for final f0
- source = np.array(pitch.squeeze(0).cpu().float().numpy())
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * p_len, len(source)) / p_len,
- np.arange(0, len(source)),
- source,
- )
- f0 = np.nan_to_num(target)
- return f0 # Resized f0
-
- def get_f0_official_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- model="full",
- ):
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- return f0
-
- # Fork Feature: Compute pYIN f0 method
- def get_f0_pyin_computation(self, x, f0_min, f0_max):
- y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True)
- f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max)
- f0 = f0[1:] # Get rid of extra first frame
- return f0
-
- # Fork Feature: Acquire median hybrid f0 estimation calculation
- def get_f0_hybrid_computation(
- self,
- methods_str,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step,
- ):
- # Get various f0 methods from input to use in the computation stack
- s = methods_str
- s = s.split("hybrid")[1]
- s = s.replace("[", "").replace("]", "")
- methods = s.split("+")
- f0_computation_stack = []
-
- print("Calculating f0 pitch estimations for methods: %s" % str(methods))
- x = x.astype(np.float32)
- x /= np.quantile(np.abs(x), 0.999)
- # Get f0 calculations for all methods specified
- for method in methods:
- f0 = None
- if method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif method == "crepe":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max)
- f0 = f0[1:] # Get rid of extra first frame
- elif method == "crepe-tiny":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny")
- f0 = f0[1:] # Get rid of extra first frame
- elif method == "mangio-crepe":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length
- )
- elif method == "mangio-crepe-tiny":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length, "tiny"
- )
- elif method == "harvest":
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- f0 = f0[1:] # Get rid of first frame.
- elif method == "dio": # Potentially buggy?
- f0, t = pyworld.dio(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 = f0[1:]
- # elif method == "pyin": Not Working just yet
- # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max)
- # Push method to the stack
- f0_computation_stack.append(f0)
-
- for fc in f0_computation_stack:
- print(len(fc))
-
- print("Calculating hybrid median f0 from the stack of: %s" % str(methods))
- f0_median_hybrid = None
- if len(f0_computation_stack) == 1:
- f0_median_hybrid = f0_computation_stack[0]
- else:
- f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0)
- return f0_median_hybrid
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- inp_f0=None,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "dio": # Potentially Buggy?
- f0, t = pyworld.dio(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max)
- elif f0_method == "crepe-tiny":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny")
- elif f0_method == "mangio-crepe":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length
- )
- elif f0_method == "mangio-crepe-tiny":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length, "tiny"
- )
- elif f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
-
- elif "hybrid" in f0_method:
- # Perform hybrid median pitch estimation
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = self.get_f0_hybrid_computation(
- f0_method,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step,
- )
-
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- tf0 = self.sr // self.window # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(int) # np.int was removed in NumPy 1.24; plain int keeps the same behavior
-
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = feats.clone()
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch != None and pitchf != None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- crepe_hop_length,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- inp_f0,
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
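One piece of the pipeline above that can be tried in isolation is change_rms, which blends the RMS envelope of the source audio into the converted audio (rate=1 leaves the output untouched, rate=0 imposes the input's loudness contour). A small sketch with synthetic signals, assuming change_rms is imported from the module above; the sample rates and amplitudes are made up:

```python
import numpy as np

sr_in, sr_out = 16000, 40000
data1 = 0.1 * np.random.randn(sr_in * 2).astype(np.float32)   # quiet "input" audio
data2 = 0.8 * np.random.randn(sr_out * 2).astype(np.float32)  # loud "converted" audio

before = float(np.sqrt((data2 ** 2).mean()))
blended = change_rms(data1, sr_in, data2, sr_out, rate=0.5)   # pull loudness halfway toward data1
after = float(np.sqrt((blended ** 2).mean()))
print(f"RMS {before:.3f} -> {after:.3f}")
```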
diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/optimization _tf.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/optimization _tf.py
deleted file mode 100644
index 451e3eb9179d4bddf275e7f217fa648a37d2ac61..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/optimization _tf.py
+++ /dev/null
@@ -1,371 +0,0 @@
-# Copyright 2019 The TensorFlow Authors, The Hugging Face Team. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Functions and classes related to optimization (weight updates)."""
-
-
-import re
-from typing import Callable, List, Optional, Union
-
-import tensorflow as tf
-
-
-try:
- from tensorflow.keras.optimizers.legacy import Adam
-except ImportError:
- from tensorflow.keras.optimizers import Adam
-
-
-class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
- """
- Applies a warmup schedule on a given learning rate decay schedule.
-
- Args:
- initial_learning_rate (`float`):
- The initial learning rate for the schedule after the warmup (so this will be the learning rate at the end
- of the warmup).
- decay_schedule_fn (`Callable`):
- The schedule function to apply after the warmup for the rest of training.
- warmup_steps (`int`):
- The number of steps for the warmup part of training.
- power (`float`, *optional*, defaults to 1):
- The power to use for the polynomial warmup (the default of 1 gives a linear warmup).
- name (`str`, *optional*):
- Optional name prefix for the returned tensors during the schedule.
- """
-
- def __init__(
- self,
- initial_learning_rate: float,
- decay_schedule_fn: Callable,
- warmup_steps: int,
- power: float = 1.0,
- name: str = None,
- ):
- super().__init__()
- self.initial_learning_rate = initial_learning_rate
- self.warmup_steps = warmup_steps
- self.power = power
- self.decay_schedule_fn = decay_schedule_fn
- self.name = name
-
- def __call__(self, step):
- with tf.name_scope(self.name or "WarmUp") as name:
- # Implements polynomial warmup. i.e., if global_step < warmup_steps, the
- # learning rate will be `global_step/num_warmup_steps * init_lr`.
- global_step_float = tf.cast(step, tf.float32)
- warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
- warmup_percent_done = global_step_float / warmup_steps_float
- warmup_learning_rate = self.initial_learning_rate * tf.math.pow(warmup_percent_done, self.power)
- return tf.cond(
- global_step_float < warmup_steps_float,
- lambda: warmup_learning_rate,
- lambda: self.decay_schedule_fn(step - self.warmup_steps),
- name=name,
- )
-
- def get_config(self):
- return {
- "initial_learning_rate": self.initial_learning_rate,
- "decay_schedule_fn": self.decay_schedule_fn,
- "warmup_steps": self.warmup_steps,
- "power": self.power,
- "name": self.name,
- }
-
-
-def create_optimizer(
- init_lr: float,
- num_train_steps: int,
- num_warmup_steps: int,
- min_lr_ratio: float = 0.0,
- adam_beta1: float = 0.9,
- adam_beta2: float = 0.999,
- adam_epsilon: float = 1e-8,
- adam_clipnorm: Optional[float] = None,
- adam_global_clipnorm: Optional[float] = None,
- weight_decay_rate: float = 0.0,
- power: float = 1.0,
- include_in_weight_decay: Optional[List[str]] = None,
-):
- """
- Creates an optimizer with a learning rate schedule using a warmup phase followed by a linear decay.
-
- Args:
- init_lr (`float`):
- The desired learning rate at the end of the warmup phase.
- num_train_steps (`int`):
- The total number of training steps.
- num_warmup_steps (`int`):
- The number of warmup steps.
- min_lr_ratio (`float`, *optional*, defaults to 0):
- The final learning rate at the end of the linear decay will be `init_lr * min_lr_ratio`.
- adam_beta1 (`float`, *optional*, defaults to 0.9):
- The beta1 to use in Adam.
- adam_beta2 (`float`, *optional*, defaults to 0.999):
- The beta2 to use in Adam.
- adam_epsilon (`float`, *optional*, defaults to 1e-8):
- The epsilon to use in Adam.
- adam_clipnorm: (`float`, *optional*, defaults to `None`):
- If not `None`, clip the gradient norm for each weight tensor to this value.
- adam_global_clipnorm: (`float`, *optional*, defaults to `None`)
- If not `None`, clip gradient norm to this value. When using this argument, the norm is computed over all
- weight tensors, as if they were concatenated into a single vector.
- weight_decay_rate (`float`, *optional*, defaults to 0):
- The weight decay to use.
- power (`float`, *optional*, defaults to 1.0):
- The power to use for PolynomialDecay.
- include_in_weight_decay (`List[str]`, *optional*):
- List of the parameter names (or re patterns) to apply weight decay to. If none is passed, weight decay is
- applied to all parameters except bias and layer norm parameters.
- """
- # Implements linear decay of the learning rate.
- lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
- initial_learning_rate=init_lr,
- decay_steps=num_train_steps - num_warmup_steps,
- end_learning_rate=init_lr * min_lr_ratio,
- power=power,
- )
- if num_warmup_steps:
- lr_schedule = WarmUp(
- initial_learning_rate=init_lr,
- decay_schedule_fn=lr_schedule,
- warmup_steps=num_warmup_steps,
- )
- if weight_decay_rate > 0.0:
- optimizer = AdamWeightDecay(
- learning_rate=lr_schedule,
- weight_decay_rate=weight_decay_rate,
- beta_1=adam_beta1,
- beta_2=adam_beta2,
- epsilon=adam_epsilon,
- clipnorm=adam_clipnorm,
- global_clipnorm=adam_global_clipnorm,
- exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
- include_in_weight_decay=include_in_weight_decay,
- )
- else:
- optimizer = tf.keras.optimizers.Adam(
- learning_rate=lr_schedule,
- beta_1=adam_beta1,
- beta_2=adam_beta2,
- epsilon=adam_epsilon,
- clipnorm=adam_clipnorm,
- global_clipnorm=adam_global_clipnorm,
- )
- # We return the optimizer and the LR scheduler in order to better track the
- # evolution of the LR independently of the optimizer.
- return optimizer, lr_schedule
-
-
-class AdamWeightDecay(Adam):
- """
- Adam enables L2 weight decay and clip_by_global_norm on gradients. Just adding the square of the weights to the
- loss function is *not* the correct way of using L2 regularization/weight decay with Adam, since that will interact
- with the m and v parameters in strange ways as shown in [Decoupled Weight Decay
- Regularization](https://arxiv.org/abs/1711.05101).
-
- Instead we want to decay the weights in a manner that doesn't interact with the m/v parameters. This is equivalent
- to adding the square of the weights to the loss with plain (non-momentum) SGD.
-
- Args:
- learning_rate (`Union[float, tf.keras.optimizers.schedules.LearningRateSchedule]`, *optional*, defaults to 1e-3):
- The learning rate to use or a schedule.
- beta_1 (`float`, *optional*, defaults to 0.9):
- The beta1 parameter in Adam, which is the exponential decay rate for the 1st momentum estimates.
- beta_2 (`float`, *optional*, defaults to 0.999):
- The beta2 parameter in Adam, which is the exponential decay rate for the 2nd momentum estimates.
- epsilon (`float`, *optional*, defaults to 1e-7):
- The epsilon parameter in Adam, which is a small constant for numerical stability.
- amsgrad (`bool`, *optional*, default to `False`):
- Whether to apply AMSGrad variant of this algorithm or not, see [On the Convergence of Adam and
- Beyond](https://arxiv.org/abs/1904.09237).
- weight_decay_rate (`float`, *optional*, defaults to 0):
- The weight decay to apply.
- include_in_weight_decay (`List[str]`, *optional*):
- List of the parameter names (or re patterns) to apply weight decay to. If none is passed, weight decay is
- applied to all parameters by default (unless they are in `exclude_from_weight_decay`).
- exclude_from_weight_decay (`List[str]`, *optional*):
- List of the parameter names (or re patterns) to exclude from applying weight decay to. If a
- `include_in_weight_decay` is passed, the names in it will supersede this list.
- name (`str`, *optional*, defaults to 'AdamWeightDecay'):
- Optional name for the operations created when applying gradients.
- kwargs:
- Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, `decay`}. `clipnorm` is clip gradients by
- norm; `clipvalue` is clip gradients by value, `decay` is included for backward compatibility to allow time
- inverse decay of learning rate. `lr` is included for backward compatibility, recommended to use
- `learning_rate` instead.
- """
-
- def __init__(
- self,
- learning_rate: Union[float, tf.keras.optimizers.schedules.LearningRateSchedule] = 0.001,
- beta_1: float = 0.9,
- beta_2: float = 0.999,
- epsilon: float = 1e-7,
- amsgrad: bool = False,
- weight_decay_rate: float = 0.0,
- include_in_weight_decay: Optional[List[str]] = None,
- exclude_from_weight_decay: Optional[List[str]] = None,
- name: str = "AdamWeightDecay",
- **kwargs,
- ):
- super().__init__(learning_rate, beta_1, beta_2, epsilon, amsgrad, name, **kwargs)
- self.weight_decay_rate = weight_decay_rate
- self._include_in_weight_decay = include_in_weight_decay
- self._exclude_from_weight_decay = exclude_from_weight_decay
-
- @classmethod
- def from_config(cls, config):
- """Creates an optimizer from its config with WarmUp custom object."""
- custom_objects = {"WarmUp": WarmUp}
- return super(AdamWeightDecay, cls).from_config(config, custom_objects=custom_objects)
-
- def _prepare_local(self, var_device, var_dtype, apply_state):
- super(AdamWeightDecay, self)._prepare_local(var_device, var_dtype, apply_state)
- apply_state[(var_device, var_dtype)]["weight_decay_rate"] = tf.constant(
- self.weight_decay_rate, name="adam_weight_decay_rate"
- )
-
- def _decay_weights_op(self, var, learning_rate, apply_state):
- do_decay = self._do_use_weight_decay(var.name)
- if do_decay:
- return var.assign_sub(
- learning_rate * var * apply_state[(var.device, var.dtype.base_dtype)]["weight_decay_rate"],
- use_locking=self._use_locking,
- )
- return tf.no_op()
-
- def apply_gradients(self, grads_and_vars, name=None, **kwargs):
- grads, tvars = list(zip(*grads_and_vars))
- return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs)
-
- def _get_lr(self, var_device, var_dtype, apply_state):
- """Retrieves the learning rate with the given state."""
- if apply_state is None:
- return self._decayed_lr_t[var_dtype], {}
-
- apply_state = apply_state or {}
- coefficients = apply_state.get((var_device, var_dtype))
- if coefficients is None:
- coefficients = self._fallback_apply_state(var_device, var_dtype)
- apply_state[(var_device, var_dtype)] = coefficients
-
- return coefficients["lr_t"], {"apply_state": apply_state}
-
- def _resource_apply_dense(self, grad, var, apply_state=None):
- lr_t, kwargs = self._get_lr(var.device, var.dtype.base_dtype, apply_state)
- decay = self._decay_weights_op(var, lr_t, apply_state)
- with tf.control_dependencies([decay]):
- return super(AdamWeightDecay, self)._resource_apply_dense(grad, var, **kwargs)
-
- def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
- lr_t, kwargs = self._get_lr(var.device, var.dtype.base_dtype, apply_state)
- decay = self._decay_weights_op(var, lr_t, apply_state)
- with tf.control_dependencies([decay]):
- return super(AdamWeightDecay, self)._resource_apply_sparse(grad, var, indices, **kwargs)
-
- def get_config(self):
- config = super().get_config()
- config.update({"weight_decay_rate": self.weight_decay_rate})
- return config
-
- def _do_use_weight_decay(self, param_name):
- """Whether to use L2 weight decay for `param_name`."""
- if self.weight_decay_rate == 0:
- return False
-
- if self._include_in_weight_decay:
- for r in self._include_in_weight_decay:
- if re.search(r, param_name) is not None:
- return True
-
- if self._exclude_from_weight_decay:
- for r in self._exclude_from_weight_decay:
- if re.search(r, param_name) is not None:
- return False
- return True
-
-
-# Extracted from https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/optimizers/utils.py
-class GradientAccumulator(object):
- """
- Gradient accumulation utility. When used with a distribution strategy, the accumulator should be called in a
- replica context. Gradients will be accumulated locally on each replica and without synchronization. Users should
- then call `.gradients`, scale the gradients if required, and pass the result to `apply_gradients`.
- """
-
- # We use the ON_READ synchronization policy so that no synchronization is
- # performed on assignment. To get the value, we call .value() which returns the
- # value on the current replica without synchronization.
-
- def __init__(self):
- """Initializes the accumulator."""
- self._gradients = []
- self._accum_steps = None
-
- @property
- def step(self):
- """Number of accumulated steps."""
- if self._accum_steps is None:
- self._accum_steps = tf.Variable(
- tf.constant(0, dtype=tf.int64),
- trainable=False,
- synchronization=tf.VariableSynchronization.ON_READ,
- aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA,
- )
-
- return self._accum_steps.value()
-
- @property
- def gradients(self):
- """The accumulated gradients on the current replica."""
- if not self._gradients:
- raise ValueError("The accumulator should be called first to initialize the gradients")
- return [gradient.value() if gradient is not None else gradient for gradient in self._gradients]
-
- def __call__(self, gradients):
- """Accumulates `gradients` on the current replica."""
- if not self._gradients:
- _ = self.step # Create the step variable.
- self._gradients.extend(
- [
- tf.Variable(
- tf.zeros_like(gradient),
- trainable=False,
- synchronization=tf.VariableSynchronization.ON_READ,
- aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA,
- )
- if gradient is not None
- else gradient
- for gradient in gradients
- ]
- )
- if len(gradients) != len(self._gradients):
- raise ValueError(f"Expected {len(self._gradients)} gradients, but got {len(gradients)}")
-
- for accum_gradient, gradient in zip(self._gradients, gradients):
- if accum_gradient is not None and gradient is not None:
- accum_gradient.assign_add(gradient)
-
- self._accum_steps.assign_add(1)
-
- def reset(self):
- """Resets the accumulated gradients on the current replica."""
- if not self._gradients:
- return
- self._accum_steps.assign(0)
- for gradient in self._gradients:
- if gradient is not None:
- gradient.assign(tf.zeros_like(gradient))
\ No newline at end of file
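A minimal sketch of how the factory above is typically used with a Keras model, assuming the definitions live in an importable module; the step counts, learning rate and toy model are placeholders:

```python
import tensorflow as tf

# Linear decay with warmup; a non-zero weight_decay_rate selects AdamWeightDecay.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=10_000,
    num_warmup_steps=1_000,
    weight_decay_rate=0.01,
)

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")

# The schedule can also be probed directly, e.g. during warmup vs. after it:
print(float(lr_schedule(tf.constant(100))), float(lr_schedule(tf.constant(5_000))))
```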
diff --git a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/README.md b/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/README.md
deleted file mode 100644
index 6db6a1856c9f9cee3d13601b0516189e43520888..0000000000000000000000000000000000000000
--- a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: CompVis Stable Diffusion V1 4
-emoji: 🏃
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CognitiveLabs/Research-Assistant/actions/duck_search.py b/spaces/CognitiveLabs/Research-Assistant/actions/duck_search.py
deleted file mode 100644
index d324475f5c82105dd76a603a01fae72e1a352f2b..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/Research-Assistant/actions/duck_search.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from duckduckgo_search import DDGS
-
-
-def duckduckgo_search(query, max_search_result=3):
- with DDGS() as ddgs:
- responses = list()
- for i, r in enumerate(ddgs.text(query, region='wt-wt', safesearch='off', timelimit='y')):
- if i == max_search_result:
- break
- responses.append(r)
- return responses
\ No newline at end of file
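A quick, hedged usage sketch with the helper above in scope; duckduckgo_search result dictionaries usually expose 'title', 'href' and 'body' keys, though the exact schema depends on the installed library version, and the query string is illustrative:

```python
results = duckduckgo_search("retrieval augmented generation", max_search_result=2)
for r in results:
    print(r.get("title"), "-", r.get("href"))
```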
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/dropdown.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/dropdown.py
deleted file mode 100644
index 473e105268e80bbec2b76eb6eda463410c0114d3..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/dropdown.py
+++ /dev/null
@@ -1,243 +0,0 @@
-"""gr.Dropdown() component."""
-
-from __future__ import annotations
-
-import warnings
-from typing import Any, Callable, Literal
-
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import SimpleSerializable
-
-from gradio.components.base import FormComponent, IOComponent, _Keywords
-from gradio.deprecation import warn_style_method_deprecation
-from gradio.events import (
- Blurrable,
- Changeable,
- EventListenerMethod,
- Inputable,
- Selectable,
-)
-
-set_documentation_group("component")
-
-
-@document()
-class Dropdown(
- FormComponent,
- Changeable,
- Inputable,
- Selectable,
- Blurrable,
- IOComponent,
- SimpleSerializable,
-):
- """
- Creates a dropdown of choices from which entries can be selected.
- Preprocessing: passes the value of the selected dropdown entry as a {str} or its index as an {int} into the function, depending on `type`.
- Postprocessing: expects a {str} corresponding to the value of the dropdown entry to be selected.
- Examples-format: a {str} representing the drop down value to select.
- Demos: sentence_builder, titanic_survival
- """
-
- def __init__(
- self,
- choices: list[str] | None = None,
- *,
- value: str | list[str] | Callable | None = None,
- type: Literal["value", "index"] = "value",
- multiselect: bool | None = None,
- max_choices: int | None = None,
- label: str | None = None,
- info: str | None = None,
- every: float | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- allow_custom_value: bool = False,
- **kwargs,
- ):
- """
- Parameters:
- choices: list of options to select from.
- value: default value(s) selected in dropdown. If None, no value is selected by default. If callable, the function will be called whenever the app loads to set the initial value of the component.
- type: Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
- multiselect: if True, multiple choices can be selected.
- max_choices: maximum number of choices that can be selected. If None, no limit is enforced.
- label: component name in interface.
- info: additional component description.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: if True, choices in this dropdown will be selectable; if False, selection will be disabled. If not provided, this is inferred based on whether the component is used as an input or output.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- allow_custom_value: If True, allows user to enter a custom value that is not in the list of choices.
- """
- self.choices = [str(choice) for choice in choices] if choices else []
- valid_types = ["value", "index"]
- if type not in valid_types:
- raise ValueError(
- f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}"
- )
- self.type = type
- self.multiselect = multiselect
- if multiselect and isinstance(value, str):
- value = [value]
- if not multiselect and max_choices is not None:
- warnings.warn(
- "The `max_choices` parameter is ignored when `multiselect` is False."
- )
- self.max_choices = max_choices
- self.allow_custom_value = allow_custom_value
- if multiselect and allow_custom_value:
- raise ValueError(
- "Custom values are not supported when `multiselect` is True."
- )
- self.interpret_by_tokens = False
- self.select: EventListenerMethod
- """
- Event listener for when the user selects Dropdown option.
- Uses event data gradio.SelectData to carry `value` referring to label of selected option, and `index` to refer to index.
- See EventData documentation on how to use this event data.
- """
- IOComponent.__init__(
- self,
- label=label,
- info=info,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def api_info(self) -> dict[str, dict | bool]:
- if self.multiselect:
- type = {
- "type": "array",
- "items": {"type": "string"},
- "description": f"List of options from: {self.choices}",
- }
- else:
- type = {"type": "string", "description": f"Option from: {self.choices}"}
- return {"info": type, "serialized_info": False}
-
- def example_inputs(self) -> dict[str, Any]:
- if self.multiselect:
- return {
- "raw": [self.choices[0]] if self.choices else [],
- "serialized": [self.choices[0]] if self.choices else [],
- }
- else:
- return {
- "raw": self.choices[0] if self.choices else None,
- "serialized": self.choices[0] if self.choices else None,
- }
-
- def get_config(self):
- return {
- "choices": self.choices,
- "value": self.value,
- "multiselect": self.multiselect,
- "max_choices": self.max_choices,
- "allow_custom_value": self.allow_custom_value,
- "container": self.container,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- choices: str | list[str] | None = None,
- label: str | None = None,
- info: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool | None = None,
- placeholder: str | None = None,
- visible: bool | None = None,
- ):
- return {
- "choices": choices,
- "label": label,
- "info": info,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "visible": visible,
- "value": value,
- "interactive": interactive,
- "placeholder": placeholder,
- "__type__": "update",
- }
-
- def preprocess(
- self, x: str | list[str]
- ) -> str | int | list[str] | list[int] | None:
- """
- Parameters:
- x: selected choice(s)
- Returns:
- selected choice(s) as string or index within choice list or list of string or indices
- """
- if self.type == "value":
- return x
- elif self.type == "index":
- if x is None:
- return None
- elif self.multiselect:
- return [self.choices.index(c) for c in x]
- else:
- if isinstance(x, str):
- return self.choices.index(x) if x in self.choices else None
- else:
- raise ValueError(
- f"Unknown type: {self.type}. Please choose from: 'value', 'index'."
- )
-
- def set_interpret_parameters(self):
- """
- Calculates interpretation score of each choice by comparing the output against each of the outputs when alternative choices are selected.
- """
- return self
-
- def get_interpretation_neighbors(self, x):
- choices = list(self.choices)
- choices.remove(x)
- return choices, {}
-
- def get_interpretation_scores(
- self, x, neighbors, scores: list[float | None], **kwargs
- ) -> list:
- """
- Returns:
- Each value represents the interpretation score corresponding to each choice.
- """
- scores.insert(self.choices.index(x), None)
- return scores
-
- def style(self, *, container: bool | None = None, **kwargs):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if container is not None:
- self.container = container
- return self
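A small sketch of the component in use, matching the constructor documented above; the choices, labels and callback are illustrative:

```python
import gradio as gr

def describe(city):
    return f"You picked {city}."

with gr.Blocks() as demo:
    dropdown = gr.Dropdown(
        choices=["Paris", "Tokyo", "Lima"],
        value="Paris",
        label="City",
        type="value",  # pass the selected string (not its index) to the callback
    )
    out = gr.Textbox(label="Result")
    dropdown.change(describe, inputs=dropdown, outputs=out)

demo.launch()
```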
diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/exploration_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/exploration_tab.py
deleted file mode 100644
index 109819ecc3a5e437e672ba6472d6c540daf191bd..0000000000000000000000000000000000000000
--- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/exploration_tab.py
+++ /dev/null
@@ -1,426 +0,0 @@
-import streamlit as st
-import os
-import numpy as np
-import pandas as pd
-import collections
-from nltk.tokenize import word_tokenize
-from nltk import download
-from ast import literal_eval
-# import contextlib
-# import re
-# import nltk
-# from nltk.corpus import stopwords
-
-title = "Exploration et Preprocessing"
-sidebar_name = "Exploration et Preprocessing"
-
-# Whether to remove stop words. This is a slow process.
-stopwords_to_do = True
-# Whether to lemmatize the sentences once stop words have been removed. This is a slow process (roughly 8 minutes).
-lemmatize_to_do = True
-# Whether to compute the BLEU score for the whole corpus. This is a very slow process (roughly 10 minutes for the 10 dictionaries).
-bleu_score_to_do = True
-# First line to load
-first_line = 0
-# Maximum number of lines to load
-max_lines = 140000
-if ((first_line+max_lines)>137860):
- max_lines = max(137860-first_line, 0)
-# Maximum number of lines to display for DataFrames
-max_lines_to_display = 50
-
-
-download('punkt')
-# nltk.download('averaged_perceptron_tagger')
-# nltk.download('stopwords')
-
-@st.cache_data
-def load_data(path):
-
- input_file = os.path.join(path)
- with open(input_file, "r", encoding="utf-8") as f:
- data = f.read()
-
- # Convert uppercase characters to lowercase
- data = data.lower()
- data = data.split('\n')
- return data[first_line:min(len(data),first_line+max_lines)]
-
-@st.cache_data
-def load_preprocessed_data(path,data_type):
-
- input_file = os.path.join(path)
- if data_type == 1:
- return pd.read_csv(input_file, encoding="utf-8", index_col=0)
- else:
- with open(input_file, "r", encoding="utf-8") as f:
- data = f.read()
- data = data.split('\n')
- if data_type==0:
- data=data[:-1]
- elif data_type == 2:
- data=[eval(i) for i in data[:-1]]
- elif data_type ==3:
- data2 = []
- for d in data[:-1]:
- data2.append(literal_eval(d))
- data=data2
- return data
-
-# @st.cache_data(ttl='1h00s')
-def load_all_preprocessed_data(lang):
- txt =load_preprocessed_data('data/preprocess_txt_'+lang,0)
- txt_split = load_preprocessed_data('data/preprocess_txt_split_'+lang,3)
- txt_lem = load_preprocessed_data('data/preprocess_txt_lem_'+lang,0)
- txt_wo_stopword = load_preprocessed_data('data/preprocess_txt_wo_stopword_'+lang,0)
- df_count_word = pd.concat([load_preprocessed_data('data/preprocess_df_count_word1_'+lang,1), load_preprocessed_data('data/preprocess_df_count_word2_'+lang,1)])
- return txt, txt_split, txt_lem, txt_wo_stopword, df_count_word
-
-# Load the full texts in both languages
-full_txt_en = load_data('data/small_vocab_en')
-full_txt_fr = load_data('data/small_vocab_fr')
-
-# Load the preprocessing results
-_ , full_txt_split_en, full_txt_lem_en, full_txt_wo_stopword_en, full_df_count_word_en = load_all_preprocessed_data('en')
-_ , full_txt_split_fr, full_txt_lem_fr, full_txt_wo_stopword_fr, full_df_count_word_fr = load_all_preprocessed_data('fr')
-"""
-def remove_stopwords(text, lang):
- stop_words = set(stopwords.words(lang))
- # stop_words will contain set all english stopwords
- filtered_sentence = []
- for word in text.split():
- if word not in stop_words:
- filtered_sentence.append(word)
- return " ".join(filtered_sentence)
-
-def clean_undesirable_from_text(sentence, lang):
-
- # Removing URLs
- sentence = re.sub(r"https?://\S+|www\.\S+", "", sentence )
-
- # Removing Punctuations (we keep the . character)
- REPLACEMENTS = [("..", "."),
- (",", ""),
- (";", ""),
- (":", ""),
- ("?", ""),
- ('"', ""),
- ("-", " "),
- ("it's", "it is"),
- ("isn't","is not"),
- ("'", " ")
- ]
- for old, new in REPLACEMENTS:
- sentence = sentence.replace(old, new)
-
- # Removing Digits
- sentence= re.sub(r'[0-9]','',sentence)
-
- # Removing Additional Spaces
- sentence = re.sub(' +', ' ', sentence)
-
- return sentence
-
-def clean_untranslated_sentence(data1, data2):
- i=0
- while i137860):
- max_lines = max(137860-first_line,0)
- # if ((max_lines-first_line)>1000):
- # lemmatize_to_do = True
- # else:
- # lemmatize_to_do = False
-
- last_line = first_line+max_lines
- if (Langue=='Anglais'):
- st.dataframe(pd.DataFrame(data=full_txt_en,columns=['Texte']).loc[first_line:last_line-1].head(max_lines_to_display), width=800)
- else:
- st.dataframe(pd.DataFrame(data=full_txt_fr,columns=['Texte']).loc[first_line:last_line-1].head(max_lines_to_display), width=800)
- st.write("")
-
- # Load the preprocessing results (at most max_lines lines)
- txt_en = full_txt_en[first_line:last_line]
- txt_split_en = full_txt_split_en[first_line:last_line]
- txt_lem_en = full_txt_lem_en[first_line:last_line]
- txt_wo_stopword_en = full_txt_wo_stopword_en[first_line:last_line]
- df_count_word_en = full_df_count_word_en.loc[first_line:last_line-1]
- txt_fr = full_txt_fr[first_line:last_line]
- txt_split_fr = full_txt_split_fr[first_line:last_line]
- txt_lem_fr = full_txt_lem_fr[first_line:last_line]
- txt_wo_stopword_fr = full_txt_wo_stopword_fr[first_line:last_line]
- df_count_word_fr = full_df_count_word_fr.loc[first_line:last_line-1]
-
-    # Run the text preprocessing, which cleans the sentences, splits them into words,
-    # and counts the number of occurrences of each word in every sentence
- if (Langue == 'Anglais'):
- st.write("## **Préprocessing de small_vocab_en :**\n")
- if max_lines>10000:
- with st.status(":sunglasses:", expanded=True):
- # txt_en, corpus_en, txt_split_en, txt_lem_en, txt_wo_stopword_en, df_count_word_en,sent_len_en, sent_wo_sw_len_en, sent_lem_len_en = preprocess_txt (txt_en,'en')
- display_preprocess_results('en',txt_en, txt_split_en, txt_lem_en, txt_wo_stopword_en, df_count_word_en)
- else:
- # txt_en, corpus_en, txt_split_en, txt_lem_en, txt_wo_stopword_en, df_count_word_en,sent_len_en, sent_wo_sw_len_en, sent_lem_len_en = preprocess_txt (txt_en,'en')
- display_preprocess_results('en',txt_en, txt_split_en, txt_lem_en, txt_wo_stopword_en, df_count_word_en)
- else:
- st.write("## **Préprocessing de small_vocab_fr :**\n")
- if max_lines>10000:
- with st.status(":sunglasses:", expanded=True):
- # txt_fr, corpus_fr, txt_split_fr, txt_lem_fr, txt_wo_stopword_fr, df_count_word_fr,sent_len_fr, sent_wo_sw_len_fr, sent_lem_len_fr = preprocess_txt (txt_fr,'fr')
- display_preprocess_results('fr', txt_fr, txt_split_fr, txt_lem_fr, txt_wo_stopword_fr, df_count_word_fr)
- else:
- # txt_fr, corpus_fr, txt_split_fr, txt_lem_fr, txt_wo_stopword_fr, df_count_word_fr,sent_len_fr, sent_wo_sw_len_fr, sent_lem_len_fr = preprocess_txt (txt_fr,'fr')
- display_preprocess_results('fr', txt_fr, txt_split_fr, txt_lem_fr, txt_wo_stopword_fr, df_count_word_fr)
-
-
-
- # Might be used later....
- # DEFAULT_TEXT = """Google was founded in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a California privately held company on September 4, 1998, in California. Google was then reincorporated in Delaware on October 22, 2002."""
- """
- spacy_model = "en_core_web_sm"
-
- text = st.text_area("Text to analyze", DEFAULT_TEXT, height=200)
- doc = spacy_streamlit.process_text(spacy_model, text)
-
- spacy_streamlit.visualize_ner(
- doc,
- labels=["PERSON", "DATE", "GPE"],
- show_table=False,
- title="Persons, dates and locations",
- )
- st.text(f"Analyzed using spaCy model {spacy_model}")
- """
-
- # models = ["en_core_web_sm"]
- # default_text = "Google was founded in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a California privately held company on September 4, 1998, in California. Google was then reincorporated in Delaware on October 22, 2002."
- # spacy_streamlit.visualize(models, default_text)
-
-
-
-
-
-
-
-
-
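For context, the helpers deleted above (`clean_undesirable_from_text` and `remove_stopwords`) are applied per sentence before the word counts are built. Below is a minimal, self-contained sketch of the same idea; the sample sentence is illustrative and the inline regex cleaning is only a rough stand-in for the full cleaning pass, not the app's exact code.

```python
import re
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords', quiet=True)

def remove_stopwords(text, lang):
    # Drop every token that appears in NLTK's stop-word list for the given language
    stop_words = set(stopwords.words(lang))
    return " ".join(word for word in text.split() if word not in stop_words)

sentence = "new jersey is sometimes quiet during autumn , and it is snowy in april ."
# Rough stand-in for the cleaning step: strip punctuation and squeeze extra spaces
cleaned = re.sub(' +', ' ', sentence.replace(',', '').replace('.', '')).strip()
print(remove_stopwords(cleaned, 'english'))
# Roughly: "new jersey sometimes quiet autumn snowy april"
```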
diff --git a/spaces/Dinoking/Guccio-AI-Designer/visualize.py b/spaces/Dinoking/Guccio-AI-Designer/visualize.py
deleted file mode 100644
index 433ae2ea8963c56a37e5e91932ad6d359495ed47..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/visualize.py
+++ /dev/null
@@ -1,314 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-# Patch for broken CTRL+C handler
-# https://github.com/ContinuumIO/anaconda-issues/issues/905
-import os
-os.environ['FOR_DISABLE_CONSOLE_CTRL_HANDLER'] = '1'
-
-import torch, json, numpy as np
-from types import SimpleNamespace
-import matplotlib.pyplot as plt
-from pathlib import Path
-from os import makedirs
-from PIL import Image
-from netdissect import proggan, nethook, easydict, zdataset
-from netdissect.modelconfig import create_instrumented_model
-from estimators import get_estimator
-from models import get_instrumented_model
-from scipy.cluster.vq import kmeans
-import re
-import sys
-import datetime
-import argparse
-from tqdm import trange
-from config import Config
-from decomposition import get_random_dirs, get_or_compute, get_max_batch_size, SEED_VISUALIZATION
-from utils import pad_frames
-
-def x_closest(p):
- distances = np.sqrt(np.sum((X - p)**2, axis=-1))
- idx = np.argmin(distances)
- return distances[idx], X[idx]
-
-def make_gif(imgs, duration_secs, outname):
- head, *tail = [Image.fromarray((x * 255).astype(np.uint8)) for x in imgs]
- ms_per_frame = 1000 * duration_secs / instances
- head.save(outname, format='GIF', append_images=tail, save_all=True, duration=ms_per_frame, loop=0)
-
-def make_mp4(imgs, duration_secs, outname):
- import shutil
- import subprocess as sp
-
- FFMPEG_BIN = shutil.which("ffmpeg")
- assert FFMPEG_BIN is not None, 'ffmpeg not found, install with "conda install -c conda-forge ffmpeg"'
- assert len(imgs[0].shape) == 3, 'Invalid shape of frame data'
-
- resolution = imgs[0].shape[0:2]
- fps = int(len(imgs) / duration_secs)
-
- command = [ FFMPEG_BIN,
- '-y', # overwrite output file
- '-f', 'rawvideo',
- '-vcodec','rawvideo',
- '-s', f'{resolution[0]}x{resolution[1]}', # size of one frame
- '-pix_fmt', 'rgb24',
- '-r', f'{fps}',
-            '-i', '-', # input from pipe
- '-an', # no audio
- '-c:v', 'libx264',
- '-preset', 'slow',
- '-crf', '17',
- str(Path(outname).with_suffix('.mp4')) ]
-
- frame_data = np.concatenate([(x * 255).astype(np.uint8).reshape(-1) for x in imgs])
- with sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) as p:
- ret = p.communicate(frame_data.tobytes())
- if p.returncode != 0:
- print(ret[1].decode("utf-8"))
- raise sp.CalledProcessError(p.returncode, command)
-
-
-def make_grid(latent, lat_mean, lat_comp, lat_stdev, act_mean, act_comp, act_stdev, scale=1, n_rows=10, n_cols=5, make_plots=True, edit_type='latent'):
- from notebooks.notebook_utils import create_strip_centered
-
- inst.remove_edits()
- x_range = np.linspace(-scale, scale, n_cols, dtype=np.float32) # scale in sigmas
-
- rows = []
- for r in range(n_rows):
- curr_row = []
- out_batch = create_strip_centered(inst, edit_type, layer_key, [latent],
- act_comp[r], lat_comp[r], act_stdev[r], lat_stdev[r], act_mean, lat_mean, scale, 0, -1, n_cols)[0]
- for i, img in enumerate(out_batch):
- curr_row.append(('c{}_{:.2f}'.format(r, x_range[i]), img))
-
- rows.append(curr_row[:n_cols])
-
- inst.remove_edits()
-
- if make_plots:
- # If more rows than columns, make several blocks side by side
- n_blocks = 2 if n_rows > n_cols else 1
-
- for r, data in enumerate(rows):
- # Add white borders
- imgs = pad_frames([img for _, img in data])
-
- coord = ((r * n_blocks) % n_rows) + ((r * n_blocks) // n_rows)
- plt.subplot(n_rows//n_blocks, n_blocks, 1 + coord)
- plt.imshow(np.hstack(imgs))
-
- # Custom x-axis labels
- W = imgs[0].shape[1] # image width
- P = imgs[1].shape[1] # padding width
- locs = [(0.5*W + i*(W+P)) for i in range(n_cols)]
- plt.xticks(locs, ["{:.2f}".format(v) for v in x_range])
- plt.yticks([])
- plt.ylabel(f'C{r}')
-
- plt.tight_layout()
- plt.subplots_adjust(top=0.96) # make room for suptitle
-
- return [img for row in rows for img in row]
-
-
-######################
-### Visualize results
-######################
-
-if __name__ == '__main__':
- global max_batch, sample_shape, feature_shape, inst, args, layer_key, model
-
- args = Config().from_args()
- t_start = datetime.datetime.now()
- timestamp = lambda : datetime.datetime.now().strftime("%d.%m %H:%M")
- print(f'[{timestamp()}] {args.model}, {args.layer}, {args.estimator}')
-
- # Ensure reproducibility
- torch.manual_seed(0) # also sets cuda seeds
- np.random.seed(0)
-
- # Speed up backend
- torch.backends.cudnn.benchmark = True
- torch.autograd.set_grad_enabled(False)
-
- has_gpu = torch.cuda.is_available()
- device = torch.device('cuda' if has_gpu else 'cpu')
- layer_key = args.layer
- layer_name = layer_key #layer_key.lower().split('.')[-1]
-
- basedir = Path(__file__).parent.resolve()
- outdir = basedir / 'out'
-
- # Load model
- inst = get_instrumented_model(args.model, args.output_class, layer_key, device, use_w=args.use_w)
- model = inst.model
- feature_shape = inst.feature_shape[layer_key]
- latent_shape = model.get_latent_shape()
- print('Feature shape:', feature_shape)
-
- # Layout of activations
- if len(feature_shape) != 4: # non-spatial
- axis_mask = np.ones(len(feature_shape), dtype=np.int32)
- else:
- axis_mask = np.array([0, 1, 1, 1]) # only batch fixed => whole activation volume used
-
- # Shape of sample passed to PCA
- sample_shape = feature_shape*axis_mask
- sample_shape[sample_shape == 0] = 1
-
- # Load or compute components
- dump_name = get_or_compute(args, inst)
- data = np.load(dump_name, allow_pickle=False) # does not contain object arrays
- X_comp = data['act_comp']
- X_global_mean = data['act_mean']
- X_stdev = data['act_stdev']
- X_var_ratio = data['var_ratio']
- X_stdev_random = data['random_stdevs']
- Z_global_mean = data['lat_mean']
- Z_comp = data['lat_comp']
- Z_stdev = data['lat_stdev']
- n_comp = X_comp.shape[0]
- data.close()
-
- # Transfer components to device
- tensors = SimpleNamespace(
- X_comp = torch.from_numpy(X_comp).to(device).float(), #-1, 1, C, H, W
- X_global_mean = torch.from_numpy(X_global_mean).to(device).float(), # 1, C, H, W
- X_stdev = torch.from_numpy(X_stdev).to(device).float(),
- Z_comp = torch.from_numpy(Z_comp).to(device).float(),
- Z_stdev = torch.from_numpy(Z_stdev).to(device).float(),
- Z_global_mean = torch.from_numpy(Z_global_mean).to(device).float(),
- )
-
- transformer = get_estimator(args.estimator, n_comp, args.sparsity)
- tr_param_str = transformer.get_param_str()
-
- # Compute max batch size given VRAM usage
- max_batch = args.batch_size or (get_max_batch_size(inst, device) if has_gpu else 1)
- print('Batch size:', max_batch)
-
- def show():
- if args.batch_mode:
- plt.close('all')
- else:
- plt.show()
-
- print(f'[{timestamp()}] Creating visualizations')
-
- # Ensure visualization gets new samples
- torch.manual_seed(SEED_VISUALIZATION)
- np.random.seed(SEED_VISUALIZATION)
-
- # Make output directories
- est_id = f'spca_{args.sparsity}' if args.estimator == 'spca' else args.estimator
- outdir_comp = outdir/model.name/layer_key.lower()/est_id/'comp'
- outdir_inst = outdir/model.name/layer_key.lower()/est_id/'inst'
- outdir_summ = outdir/model.name/layer_key.lower()/est_id/'summ'
- makedirs(outdir_comp, exist_ok=True)
- makedirs(outdir_inst, exist_ok=True)
- makedirs(outdir_summ, exist_ok=True)
-
- # Measure component sparsity (!= activation sparsity)
- sparsity = np.mean(X_comp == 0) # percentage of zero values in components
- print(f'Sparsity: {sparsity:.2f}')
-
- def get_edit_name(mode):
- if mode == 'activation':
- is_stylegan = 'StyleGAN' in args.model
- is_w = layer_key in ['style', 'g_mapping']
- return 'W' if (is_stylegan and is_w) else 'ACT'
- elif mode == 'latent':
- return model.latent_space_name()
- elif mode == 'both':
- return 'BOTH'
- else:
- raise RuntimeError(f'Unknown edit mode {mode}')
-
- # Only visualize applicable edit modes
- if args.use_w and layer_key in ['style', 'g_mapping']:
- edit_modes = ['latent'] # activation edit is the same
- else:
- edit_modes = ['activation', 'latent']
-
- # Summary grid, real components
- for edit_mode in edit_modes:
- plt.figure(figsize = (14,12))
- plt.suptitle(f"{args.estimator.upper()}: {model.name} - {layer_name}, {get_edit_name(edit_mode)} edit", size=16)
- make_grid(tensors.Z_global_mean, tensors.Z_global_mean, tensors.Z_comp, tensors.Z_stdev, tensors.X_global_mean,
- tensors.X_comp, tensors.X_stdev, scale=args.sigma, edit_type=edit_mode, n_rows=14)
- plt.savefig(outdir_summ / f'components_{get_edit_name(edit_mode)}.jpg', dpi=300)
- show()
-
- if args.make_video:
- components = 15
- instances = 150
-
- # One reasonable, one over the top
- for sigma in [args.sigma, 3*args.sigma]:
- for c in range(components):
- for edit_mode in edit_modes:
- frames = make_grid(tensors.Z_global_mean, tensors.Z_global_mean, tensors.Z_comp[c:c+1, :, :], tensors.Z_stdev[c:c+1], tensors.X_global_mean,
- tensors.X_comp[c:c+1, :, :], tensors.X_stdev[c:c+1], n_rows=1, n_cols=instances, scale=sigma, make_plots=False, edit_type=edit_mode)
- plt.close('all')
-
- frames = [x for _, x in frames]
- frames = frames + frames[::-1]
- make_mp4(frames, 5, outdir_comp / f'{get_edit_name(edit_mode)}_sigma{sigma}_comp{c}.mp4')
-
-
- # Summary grid, random directions
- # Using the stdevs of the principal components for same norm
- random_dirs_act = torch.from_numpy(get_random_dirs(n_comp, np.prod(sample_shape)).reshape(-1, *sample_shape)).to(device)
- random_dirs_z = torch.from_numpy(get_random_dirs(n_comp, np.prod(inst.input_shape)).reshape(-1, *latent_shape)).to(device)
-
- for edit_mode in edit_modes:
- plt.figure(figsize = (14,12))
- plt.suptitle(f"{model.name} - {layer_name}, random directions w/ PC stdevs, {get_edit_name(edit_mode)} edit", size=16)
- make_grid(tensors.Z_global_mean, tensors.Z_global_mean, random_dirs_z, tensors.Z_stdev,
- tensors.X_global_mean, random_dirs_act, tensors.X_stdev, scale=args.sigma, edit_type=edit_mode, n_rows=14)
- plt.savefig(outdir_summ / f'random_dirs_{get_edit_name(edit_mode)}.jpg', dpi=300)
- show()
-
- # Random instances w/ components added
- n_random_imgs = 10
- latents = model.sample_latent(n_samples=n_random_imgs)
-
- for img_idx in trange(n_random_imgs, desc='Random images', ascii=True):
- #print(f'Creating visualizations for random image {img_idx+1}/{n_random_imgs}')
- z = latents[img_idx][None, ...]
-
- # Summary grid, real components
- for edit_mode in edit_modes:
- plt.figure(figsize = (14,12))
- plt.suptitle(f"{args.estimator.upper()}: {model.name} - {layer_name}, {get_edit_name(edit_mode)} edit", size=16)
- make_grid(z, tensors.Z_global_mean, tensors.Z_comp, tensors.Z_stdev,
- tensors.X_global_mean, tensors.X_comp, tensors.X_stdev, scale=args.sigma, edit_type=edit_mode, n_rows=14)
- plt.savefig(outdir_summ / f'samp{img_idx}_real_{get_edit_name(edit_mode)}.jpg', dpi=300)
- show()
-
- if args.make_video:
- components = 5
- instances = 150
-
- # One reasonable, one over the top
- for sigma in [args.sigma, 3*args.sigma]: #[2, 5]:
- for edit_mode in edit_modes:
- imgs = make_grid(z, tensors.Z_global_mean, tensors.Z_comp, tensors.Z_stdev, tensors.X_global_mean, tensors.X_comp, tensors.X_stdev,
- n_rows=components, n_cols=instances, scale=sigma, make_plots=False, edit_type=edit_mode)
- plt.close('all')
-
- for c in range(components):
- frames = [x for _, x in imgs[c*instances:(c+1)*instances]]
- frames = frames + frames[::-1]
- make_mp4(frames, 5, outdir_inst / f'{get_edit_name(edit_mode)}_sigma{sigma}_img{img_idx}_comp{c}.mp4')
-
- print('Done in', datetime.datetime.now() - t_start)
\ No newline at end of file
diff --git a/spaces/DrGabrielLopez/BERTopic/app.py b/spaces/DrGabrielLopez/BERTopic/app.py
deleted file mode 100644
index 5d506b8003334b35a74e423e0439b1b05ab919dd..0000000000000000000000000000000000000000
--- a/spaces/DrGabrielLopez/BERTopic/app.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import pandas as pd
-import numpy as np
-import spacy
-import os
-import gradio as gr
-import umap
-from sklearn.cluster import OPTICS
-from transformers import BertTokenizer, TFBertModel
-import plotly.io as pio
-
-# configuration params
-pio.templates.default = "plotly_dark"
-
-# setting up the text in the page
-TITLE = "
BERTopic - For topics detection on text
"
-DESCRIPTION = r"""
Apply BERTopic to a given dataset end extract the most relevant topics.
- """
-EXAMPLES = [
- ["data/ecomm500.csv"],
-]
-ARTICLE = r"""
- Done by dr. Gabriel Lopez
- This program follows the BERTopic philosophy, but actually has its own implementation.
- For more please visit: My Page
- For info about the BERTopic model can be found here
-
"""
-
-
-def load_data(fileobj):
- """Load dataset (keep only 500 rows for efficiency)"""
- data = pd.read_csv(fileobj.name, on_bad_lines='skip', nrows=500)
- assert "text" in data.columns, "The data must have a column named 'text'"
- return data[['text']]
-
-
-def run_nlp_processing(data):
- """As reference for standard NLP processing"""
- # NLP processing
- docs = []
- nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner"])
- for doc in nlp.pipe(data["text"].values, n_process=os.cpu_count() - 1):
- lemmas = []
- for token in doc:
- if token.is_punct or token.is_stop:
- continue
- lemmas.append(token.lemma_.lower())
- docs.append(" ".join(lemmas))
- # Make new column
- data = data.assign(text=docs)
- return data
-
-
-def run_bert_tokenization(data):
- """Show the action of the WordPiece alogorithm"""
- # load BERT model (for embeddings)
- checkpoint = "bert-base-uncased"
- tokenizer = BertTokenizer.from_pretrained(checkpoint)
- model = TFBertModel.from_pretrained(checkpoint)
- # Run BERT tokenizing + encoding
- descr_processed_tokenized = tokenizer(
- list(data["text"]),
- return_tensors="tf",
- truncation=True,
- padding=True,
- max_length=128,
- )
- data = data.assign(text_tokenized=descr_processed_tokenized)
- return data
-
-
-def run_bertopic(data):
- """ " End-to-end BERTopic model"""
- # load BERT model (for embeddings)
- checkpoint = "bert-base-uncased"
- tokenizer = BertTokenizer.from_pretrained(checkpoint)
- model = TFBertModel.from_pretrained(checkpoint)
- # Run BERT tokenizing + encoding
- descr_processed_tokenized = tokenizer(
- list(data["text"]),
- return_tensors="tf",
- truncation=True,
- padding=True,
- max_length=128,
- )
- output_bert = model(descr_processed_tokenized)
- # Get sentence embeddings from BERTs word embeddings
- mean_vect = []
- for vect in output_bert.last_hidden_state:
- mean_vect.append(np.mean(vect, axis=0))
- data = data.assign(descr_vect=mean_vect)
-    # Use UMAP to reduce the embedding to 3D (np.stack turns the array of arrays into a 2-D array)
- descr_vect_3d = umap.UMAP(n_components=3).fit_transform(
- np.stack(data["descr_vect"].values)
- )
- data["descr_vect_2d"] = list(descr_vect_3d)
- # Use BERT's + UMAP vector embeddings for clustering using OPTICS
- clustering = OPTICS(min_samples=50).fit(np.stack(data["descr_vect_2d"].values))
- data["cluster_label"] = clustering.labels_
- # Plot the 3D embedding
- fig_bertopic = plot_bertopic(descr_vect_3d, data)
- # Extract topic wordclouds
- return fig_bertopic
-
-
-def plot_bertopic(descr_vect_3d, data):
- """ " Show the topic clusters over an 3d embedding space"""
- import plotly.express as px
-
- fig = px.scatter_3d(
- x=descr_vect_3d[:, 0],
- y=descr_vect_3d[:, 1],
- z=descr_vect_3d[:, 2],
- color=data["cluster_label"],
- )
- return fig
-
-
-# gradio interface
-blocks = gr.Blocks()
-with blocks:
- # physical elements
- session_state = gr.State([])
- gr.Markdown(TITLE)
- gr.Markdown(DESCRIPTION)
- with gr.Row():
- with gr.Column():
- gr.Markdown(
- "## Load the data (must be a csv file with a column named 'text')"
- )
- in_file = gr.File()
- gr.Markdown("## Inspect the data")
- in_data = gr.Dataframe(max_rows=5)
- submit_button = gr.Button("Run BERTopic!")
- gr.Examples(inputs=in_file, examples=EXAMPLES)
- with gr.Column():
- gr.Markdown("## BERTopic Flow")
- gr.Markdown(
- "Text -> Word-Piece Tokenization -> BERT-embedding -> UMAP -> HDBSCAN -> Topic"
- )
- gr.Markdown("## Processed Text")
- out_dataset = gr.Dataframe(max_rows=5)
- gr.Markdown("## Embedding + Projection + Clustering")
- embedding_plot = gr.Plot(label="BERTopic projections")
- gr.Markdown("## Extracted Topics")
- topics_text = gr.Textbox(label="Topics", lines=50)
- gr.Markdown(ARTICLE)
- # event listeners
- in_file = in_file.upload(inputs=in_file, outputs=in_data, fn=load_data)
- submit_button.click(inputs=in_data, outputs=out_dataset, fn=run_bert_tokenization)
- # out_dataset.change(inputs=out_dataset, outputs=embedding_plot, fn=run_bertopic)
-
-blocks.launch()
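For comparison with the hand-rolled pipeline above (BERT embeddings, then UMAP, then OPTICS clustering), the off-the-shelf `bertopic` package wraps the equivalent embedding → UMAP → HDBSCAN flow behind a single object. The sketch below is a minimal usage example; the sentence-transformer name and the 20-newsgroups corpus are illustrative choices, not what this Space actually uses.

```python
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

# A small public corpus, just to exercise the pipeline
docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))["data"][:1000]

topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", min_topic_size=15)
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # one row per discovered topic (-1 is the outlier bucket)
```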
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py
deleted file mode 100644
index a828023e115243e48918538d31b91d662cd12d0f..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import torch
-from torch import nn
-
-from models.facial_recognition.model_irse import Backbone
-
-
-class IDLoss(nn.Module):
- def __init__(self, opts):
- super(IDLoss, self).__init__()
- print('Loading ResNet ArcFace')
- self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se')
- self.facenet.load_state_dict(torch.load(opts.ir_se50_weights))
- self.pool = torch.nn.AdaptiveAvgPool2d((256, 256))
- self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112))
- self.facenet.eval()
- self.opts = opts
-
- def extract_feats(self, x):
- if x.shape[2] != 256:
- x = self.pool(x)
- x = x[:, :, 35:223, 32:220] # Crop interesting region
- x = self.face_pool(x)
- x_feats = self.facenet(x)
- return x_feats
-
- def forward(self, y_hat, y):
- n_samples = y.shape[0]
- y_feats = self.extract_feats(y) # Otherwise use the feature from there
- y_hat_feats = self.extract_feats(y_hat)
- y_feats = y_feats.detach()
- loss = 0
- sim_improvement = 0
- count = 0
- for i in range(n_samples):
- diff_target = y_hat_feats[i].dot(y_feats[i])
- loss += 1 - diff_target
- count += 1
-
- return loss / count, sim_improvement / count
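The per-sample loop above is just the batch average of 1 − ⟨f(ŷ), f(y)⟩ over ArcFace features. Assuming the backbone returns L2-normalized embeddings, the same quantity can be written with cosine similarity; the tensors below are random stand-ins for the extracted features, so this is only a numerical sanity check of the equivalence.

```python
import torch
import torch.nn.functional as F

batch, feat_dim = 4, 512
y_feats = F.normalize(torch.randn(batch, feat_dim), dim=1)      # features of the real images
y_hat_feats = F.normalize(torch.randn(batch, feat_dim), dim=1)  # features of the generated images

# Loop form, as in IDLoss.forward above
loss_loop = sum(1 - y_hat_feats[i].dot(y_feats[i]) for i in range(batch)) / batch

# Equivalent vectorized form
loss_vec = (1 - F.cosine_similarity(y_hat_feats, y_feats, dim=1)).mean()

print(torch.allclose(loss_loop, loss_vec))  # True, up to floating-point error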
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/__init__.py b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/__init__.py
deleted file mode 100644
index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_configs/hyperparameters.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_configs/hyperparameters.py
deleted file mode 100644
index c1014875cc3d46871056cf17fdc8c778ac6139de..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/pti/pti_configs/hyperparameters.py
+++ /dev/null
@@ -1,28 +0,0 @@
-## Architecture
-lpips_type = 'alex'
-first_inv_type = 'w+'#'w+'
-optim_type = 'adam'
-
-## Locality regularization
-latent_ball_num_of_samples = 1
-locality_regularization_interval = 1
-use_locality_regularization = False
-regulizer_l2_lambda = 0.1
-regulizer_lpips_lambda = 0.1
-regulizer_alpha = 30
-
-## Loss
-pt_l2_lambda = 1
-pt_lpips_lambda = 1
-
-## Steps
-LPIPS_value_threshold = 0.04
-max_pti_steps = 350
-first_inv_steps = 450
-max_images_to_invert = 30
-
-## Optimization
-pti_learning_rate = 5e-4
-first_inv_lr = 8e-3
-train_batch_size = 1
-use_last_w_pivots = False
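The step-related settings above (`LPIPS_value_threshold`, `max_pti_steps`) suggest an early-stopping tuning loop. The snippet below is a hypothetical consumer of those two values, written only to make their roles concrete; it is not the actual PTI coach loop, and the callbacks are dummies.

```python
# Hypothetical early-stopping loop driven by the two settings above (illustration only)
LPIPS_value_threshold = 0.04
max_pti_steps = 350

def tune(compute_lpips, training_step):
    for step in range(max_pti_steps):
        if compute_lpips() < LPIPS_value_threshold:
            break              # reconstruction is good enough, stop early
        training_step()
    return step

# Dummy callbacks: stops as soon as the (fake) LPIPS value drops below the threshold
values = iter([0.3, 0.1, 0.05, 0.03, 0.02])
print(tune(lambda: next(values), lambda: None))  # -> 3
```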
diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/localitly_regulizer.py b/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/localitly_regulizer.py
deleted file mode 100644
index 4a4edc3694dd4134d9caa6af0184909451032cc6..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/localitly_regulizer.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import torch
-import numpy as np
-import wandb
-from pti.pti_configs import hyperparameters, global_config
-l2_criterion = torch.nn.MSELoss(reduction='mean')
-
-
-def l2_loss(real_images, generated_images):
- loss = l2_criterion(real_images, generated_images)
- return loss
-
-
-class Space_Regulizer:
- def __init__(self, original_G, lpips_net):
- self.original_G = original_G
- self.morphing_regulizer_alpha = hyperparameters.regulizer_alpha
- self.lpips_loss = lpips_net
-
- def get_morphed_w_code(self, new_w_code, fixed_w):
- interpolation_direction = new_w_code - fixed_w
- interpolation_direction_norm = torch.norm(interpolation_direction, p=2)
- direction_to_move = hyperparameters.regulizer_alpha * interpolation_direction / interpolation_direction_norm
- result_w = fixed_w + direction_to_move
-        # self.morphing_regulizer_alpha * fixed_w + (1 - self.morphing_regulizer_alpha) * new_w_code  # unused expression (never assigned)
-
- return result_w
-
- def get_image_from_ws(self, w_codes, G):
- return torch.cat([G.synthesis(w_code, noise_mode='none', force_fp32=True) for w_code in w_codes])
-
- def ball_holder_loss_lazy(self, new_G, num_of_sampled_latents, w_batch, use_wandb=False):
- loss = 0.0
-
- z_samples = np.random.randn(num_of_sampled_latents, self.original_G.z_dim)
- w_samples = self.original_G.mapping(torch.from_numpy(z_samples).to(global_config.device), None,
- truncation_psi=0.5)
- territory_indicator_ws = [self.get_morphed_w_code(w_code.unsqueeze(0), w_batch) for w_code in w_samples]
-
- for w_code in territory_indicator_ws:
- new_img = new_G.synthesis(w_code, noise_mode='none', force_fp32=True)
- with torch.no_grad():
- old_img = self.original_G.synthesis(w_code, noise_mode='none', force_fp32=True)
-
- if hyperparameters.regulizer_l2_lambda > 0:
-            l2_loss_val = l2_loss(old_img, new_img)
- if use_wandb:
- wandb.log({f'space_regulizer_l2_loss_val': l2_loss_val.detach().cpu()},
- step=global_config.training_step)
- loss += l2_loss_val * hyperparameters.regulizer_l2_lambda
-
- if hyperparameters.regulizer_lpips_lambda > 0:
- loss_lpips = self.lpips_loss(old_img, new_img)
- loss_lpips = torch.mean(torch.squeeze(loss_lpips))
- if use_wandb:
- wandb.log({f'space_regulizer_lpips_loss_val': loss_lpips.detach().cpu()},
- step=global_config.training_step)
- loss += loss_lpips * hyperparameters.regulizer_lpips_lambda
-
- return loss / len(territory_indicator_ws)
-
- def space_regulizer_loss(self, new_G, w_batch, use_wandb):
- ret_val = self.ball_holder_loss_lazy(new_G, hyperparameters.latent_ball_num_of_samples, w_batch, use_wandb)
- return ret_val
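`get_morphed_w_code` above takes a step of fixed length `alpha` from the pivot latent toward each randomly sampled latent, i.e. w' = w_fixed + alpha · (w_new − w_fixed) / ‖w_new − w_fixed‖. The sketch below just reproduces that arithmetic with random tensors standing in for the latent codes; the shape used is illustrative.

```python
import torch

alpha = 30.0                       # regulizer_alpha in the hyperparameters above
fixed_w = torch.randn(1, 18, 512)  # pivot latent (shape is illustrative)
new_w = torch.randn(1, 18, 512)    # randomly sampled latent

direction = new_w - fixed_w
step = alpha * direction / torch.norm(direction, p=2)  # unit direction scaled by alpha
morphed_w = fixed_w + step

print(torch.norm(morphed_w - fixed_w))  # ~alpha: every morphed code sits at the same distance from the pivot
```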
diff --git a/spaces/EDGAhab/VITS-Aatrox-AI/text/__init__.py b/spaces/EDGAhab/VITS-Aatrox-AI/text/__init__.py
deleted file mode 100644
index 227cdc4a81d7bcdd3dbae299947278998d12276b..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/VITS-Aatrox-AI/text/__init__.py
+++ /dev/null
@@ -1,66 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols,symbols_zh
-
-
-# Mappings from symbol to numeric ID and vice versa:
-# _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-# _id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-chinese_mode = True
-if chinese_mode:
- _symbol_to_id = {s: i for i, s in enumerate(symbols_zh)}
- _id_to_symbol = {i: s for i, s in enumerate(symbols_zh)}
-else:
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- _id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-def text_to_sequence(text, cleaner_names, ):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
-      continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def cleaned_text_to_sequence(cleaned_text, chinese_mode=True):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- # if chinese_mode:
- # sequence = [_symbol_to_id_zh[symbol] for symbol in cleaned_text]
- # else:
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
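The module above maps cleaned text to symbol IDs and back, silently skipping unknown symbols. Below is a self-contained miniature of the same round trip; the symbol list and the lowercase cleaner are stand-ins, not the project's real `symbols_zh` or cleaner functions.

```python
# Miniature of text_to_sequence / sequence_to_text with stand-in symbols and cleaner
symbols = list("_abcdefghijklmnopqrstuvwxyz ,.!?")
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
_id_to_symbol = {i: s for i, s in enumerate(symbols)}

def lowercase_cleaner(text):
    return text.lower()

def text_to_sequence(text, cleaners):
    for cleaner in cleaners:
        text = cleaner(text)
    # Unknown symbols are skipped, exactly like the `continue` branch above
    return [_symbol_to_id[s] for s in text if s in _symbol_to_id]

def sequence_to_text(sequence):
    return "".join(_id_to_symbol[i] for i in sequence)

ids = text_to_sequence("Hello, world!", [lowercase_cleaner])
print(ids)
print(sequence_to_text(ids))  # "hello, world!"
```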
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/msdeformattn.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/msdeformattn.py
deleted file mode 100644
index 0ff1a81a3ed0c05464dad2143830bacac5951dfe..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/msdeformattn.py
+++ /dev/null
@@ -1,358 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-from typing import Callable, Dict, List, Optional, Tuple, Union
-
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_
-from torch.cuda.amp import autocast
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.modeling import SEM_SEG_HEADS_REGISTRY
-
-from ..transformer_decoder.position_encoding import PositionEmbeddingSine
-from ..transformer_decoder.transformer import _get_clones, _get_activation_fn
-from .ops.modules import MSDeformAttn
-
-
-# MSDeformAttn Transformer encoder in deformable detr
-class MSDeformAttnTransformerEncoderOnly(nn.Module):
- def __init__(self, d_model=256, nhead=8,
- num_encoder_layers=6, dim_feedforward=1024, dropout=0.1,
- activation="relu",
- num_feature_levels=4, enc_n_points=4,
- ):
- super().__init__()
-
- self.d_model = d_model
- self.nhead = nhead
-
- encoder_layer = MSDeformAttnTransformerEncoderLayer(d_model, dim_feedforward,
- dropout, activation,
- num_feature_levels, nhead, enc_n_points)
- self.encoder = MSDeformAttnTransformerEncoder(encoder_layer, num_encoder_layers)
-
- self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- for m in self.modules():
- if isinstance(m, MSDeformAttn):
- m._reset_parameters()
- normal_(self.level_embed)
-
- def get_valid_ratio(self, mask):
- _, H, W = mask.shape
- valid_H = torch.sum(~mask[:, :, 0], 1)
- valid_W = torch.sum(~mask[:, 0, :], 1)
- valid_ratio_h = valid_H.float() / H
- valid_ratio_w = valid_W.float() / W
- valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
- return valid_ratio
-
- def forward(self, srcs, pos_embeds):
- masks = [torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) for x in srcs]
- # prepare input for encoder
- src_flatten = []
- mask_flatten = []
- lvl_pos_embed_flatten = []
- spatial_shapes = []
- for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
- bs, c, h, w = src.shape
- spatial_shape = (h, w)
- spatial_shapes.append(spatial_shape)
- src = src.flatten(2).transpose(1, 2)
- mask = mask.flatten(1)
- pos_embed = pos_embed.flatten(2).transpose(1, 2)
- lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
- lvl_pos_embed_flatten.append(lvl_pos_embed)
- src_flatten.append(src)
- mask_flatten.append(mask)
- src_flatten = torch.cat(src_flatten, 1)
- mask_flatten = torch.cat(mask_flatten, 1)
- lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1)
- spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
- level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))
- valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
-
- # encoder
- memory = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, lvl_pos_embed_flatten, mask_flatten)
-
- return memory, spatial_shapes, level_start_index
-
-
-class MSDeformAttnTransformerEncoderLayer(nn.Module):
- def __init__(self,
- d_model=256, d_ffn=1024,
- dropout=0.1, activation="relu",
- n_levels=4, n_heads=8, n_points=4):
- super().__init__()
-
- # self attention
- self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)
- self.dropout1 = nn.Dropout(dropout)
- self.norm1 = nn.LayerNorm(d_model)
-
- # ffn
- self.linear1 = nn.Linear(d_model, d_ffn)
- self.activation = _get_activation_fn(activation)
- self.dropout2 = nn.Dropout(dropout)
- self.linear2 = nn.Linear(d_ffn, d_model)
- self.dropout3 = nn.Dropout(dropout)
- self.norm2 = nn.LayerNorm(d_model)
-
- @staticmethod
- def with_pos_embed(tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_ffn(self, src):
- src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
- src = src + self.dropout3(src2)
- src = self.norm2(src)
- return src
-
- def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, padding_mask=None):
- # self attention
- src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask)
- src = src + self.dropout1(src2)
- src = self.norm1(src)
-
- # ffn
- src = self.forward_ffn(src)
-
- return src
-
-
-class MSDeformAttnTransformerEncoder(nn.Module):
- def __init__(self, encoder_layer, num_layers):
- super().__init__()
- self.layers = _get_clones(encoder_layer, num_layers)
- self.num_layers = num_layers
-
- @staticmethod
- def get_reference_points(spatial_shapes, valid_ratios, device):
- reference_points_list = []
- for lvl, (H_, W_) in enumerate(spatial_shapes):
-
- ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
- torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device))
- ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
- ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
- ref = torch.stack((ref_x, ref_y), -1)
- reference_points_list.append(ref)
- reference_points = torch.cat(reference_points_list, 1)
- reference_points = reference_points[:, :, None] * valid_ratios[:, None]
- return reference_points
-
- def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None):
- output = src
- reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device)
- for _, layer in enumerate(self.layers):
- output = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask)
-
- return output
-
-
-@SEM_SEG_HEADS_REGISTRY.register()
-class MSDeformAttnPixelDecoder(nn.Module):
- @configurable
- def __init__(
- self,
- input_shape: Dict[str, ShapeSpec],
- *,
- transformer_dropout: float,
- transformer_nheads: int,
- transformer_dim_feedforward: int,
- transformer_enc_layers: int,
- conv_dim: int,
- mask_dim: int,
- norm: Optional[Union[str, Callable]] = None,
- # deformable transformer encoder args
- transformer_in_features: List[str],
- common_stride: int,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- input_shape: shapes (channels and stride) of the input features
- transformer_dropout: dropout probability in transformer
- transformer_nheads: number of heads in transformer
- transformer_dim_feedforward: dimension of feedforward network
- transformer_enc_layers: number of transformer encoder layers
-            conv_dim: number of output channels for the intermediate conv layers.
- mask_dim: number of output channels for the final conv layer.
- norm (str or callable): normalization for all conv layers
- """
- super().__init__()
- transformer_input_shape = {
- k: v for k, v in input_shape.items() if k in transformer_in_features
- }
-
- # this is the input shape of pixel decoder
- input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride)
- self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5"
- self.feature_strides = [v.stride for k, v in input_shape]
- self.feature_channels = [v.channels for k, v in input_shape]
-
-        # this is the input shape of the transformer encoder (it may use fewer features than the pixel decoder)
- transformer_input_shape = sorted(transformer_input_shape.items(), key=lambda x: x[1].stride)
- self.transformer_in_features = [k for k, v in transformer_input_shape] # starting from "res2" to "res5"
- transformer_in_channels = [v.channels for k, v in transformer_input_shape]
- self.transformer_feature_strides = [v.stride for k, v in transformer_input_shape] # to decide extra FPN layers
-
- self.transformer_num_feature_levels = len(self.transformer_in_features)
- if self.transformer_num_feature_levels > 1:
- input_proj_list = []
- # from low resolution to high resolution (res5 -> res2)
- for in_channels in transformer_in_channels[::-1]:
- input_proj_list.append(nn.Sequential(
- nn.Conv2d(in_channels, conv_dim, kernel_size=1),
- nn.GroupNorm(32, conv_dim),
- ))
- self.input_proj = nn.ModuleList(input_proj_list)
- else:
- self.input_proj = nn.ModuleList([
- nn.Sequential(
- nn.Conv2d(transformer_in_channels[-1], conv_dim, kernel_size=1),
- nn.GroupNorm(32, conv_dim),
- )])
-
- for proj in self.input_proj:
- nn.init.xavier_uniform_(proj[0].weight, gain=1)
- nn.init.constant_(proj[0].bias, 0)
-
- self.transformer = MSDeformAttnTransformerEncoderOnly(
- d_model=conv_dim,
- dropout=transformer_dropout,
- nhead=transformer_nheads,
- dim_feedforward=transformer_dim_feedforward,
- num_encoder_layers=transformer_enc_layers,
- num_feature_levels=self.transformer_num_feature_levels,
- )
- N_steps = conv_dim // 2
- self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
-
- self.mask_dim = mask_dim
- # use 1x1 conv instead
- self.mask_features = Conv2d(
- conv_dim,
- mask_dim,
- kernel_size=1,
- stride=1,
- padding=0,
- )
- weight_init.c2_xavier_fill(self.mask_features)
-
- self.maskformer_num_feature_levels = 3 # always use 3 scales
- self.common_stride = common_stride
-
- # extra fpn levels
- stride = min(self.transformer_feature_strides)
- self.num_fpn_levels = int(np.log2(stride) - np.log2(self.common_stride))
-
- lateral_convs = []
- output_convs = []
-
- use_bias = norm == ""
- for idx, in_channels in enumerate(self.feature_channels[:self.num_fpn_levels]):
- lateral_norm = get_norm(norm, conv_dim)
- output_norm = get_norm(norm, conv_dim)
-
- lateral_conv = Conv2d(
- in_channels, conv_dim, kernel_size=1, bias=use_bias, norm=lateral_norm
- )
- output_conv = Conv2d(
- conv_dim,
- conv_dim,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=use_bias,
- norm=output_norm,
- activation=F.relu,
- )
- weight_init.c2_xavier_fill(lateral_conv)
- weight_init.c2_xavier_fill(output_conv)
- self.add_module("adapter_{}".format(idx + 1), lateral_conv)
- self.add_module("layer_{}".format(idx + 1), output_conv)
-
- lateral_convs.append(lateral_conv)
- output_convs.append(output_conv)
- # Place convs into top-down order (from low to high resolution)
- # to make the top-down computation in forward clearer.
- self.lateral_convs = lateral_convs[::-1]
- self.output_convs = output_convs[::-1]
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- ret = {}
- ret["input_shape"] = {
- k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES
- }
- ret["conv_dim"] = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
- ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
- ret["norm"] = cfg.MODEL.SEM_SEG_HEAD.NORM
- ret["transformer_dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT
- ret["transformer_nheads"] = cfg.MODEL.MASK_FORMER.NHEADS
- # ret["transformer_dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD
- ret["transformer_dim_feedforward"] = 1024 # use 1024 for deformable transformer encoder
- ret[
- "transformer_enc_layers"
- ] = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS # a separate config
- ret["transformer_in_features"] = cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES
- ret["common_stride"] = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE
- return ret
-
- @autocast(enabled=False)
- def forward_features(self, features):
- srcs = []
- pos = []
- # Reverse feature maps into top-down order (from low to high resolution)
- for idx, f in enumerate(self.transformer_in_features[::-1]):
- x = features[f].float() # deformable detr does not support half precision
- srcs.append(self.input_proj[idx](x))
- pos.append(self.pe_layer(x))
-
- y, spatial_shapes, level_start_index = self.transformer(srcs, pos)
- bs = y.shape[0]
-
- split_size_or_sections = [None] * self.transformer_num_feature_levels
- for i in range(self.transformer_num_feature_levels):
- if i < self.transformer_num_feature_levels - 1:
- split_size_or_sections[i] = level_start_index[i + 1] - level_start_index[i]
- else:
- split_size_or_sections[i] = y.shape[1] - level_start_index[i]
- y = torch.split(y, split_size_or_sections, dim=1)
-
- out = []
- multi_scale_features = []
- num_cur_levels = 0
- for i, z in enumerate(y):
- out.append(z.transpose(1, 2).view(bs, -1, spatial_shapes[i][0], spatial_shapes[i][1]))
-
- # append `out` with extra FPN levels
- # Reverse feature maps into top-down order (from low to high resolution)
- for idx, f in enumerate(self.in_features[:self.num_fpn_levels][::-1]):
- x = features[f].float()
- lateral_conv = self.lateral_convs[idx]
- output_conv = self.output_convs[idx]
- cur_fpn = lateral_conv(x)
-            # Following the FPN design, upsample the coarser map to this level's resolution (bilinear interpolation here)
- y = cur_fpn + F.interpolate(out[-1], size=cur_fpn.shape[-2:], mode="bilinear", align_corners=False)
- y = output_conv(y)
- out.append(y)
-
- for o in out:
- if num_cur_levels < self.maskformer_num_feature_levels:
- multi_scale_features.append(o)
- num_cur_levels += 1
-
- return self.mask_features(out[-1]), out[0], multi_scale_features
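The extra FPN levels at the end of `forward_features` follow the standard top-down merge: a 1×1 lateral conv projects the backbone feature, the coarser decoder output is upsampled to the same resolution and added, and a 3×3 conv with ReLU smooths the result. A minimal PyTorch sketch of that merge for a single level is shown below; the channel counts and tensor sizes are illustrative, not the model's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv_dim = 256
lateral_conv = nn.Conv2d(512, conv_dim, kernel_size=1)             # project the backbone feature to conv_dim
output_conv = nn.Conv2d(conv_dim, conv_dim, kernel_size=3, padding=1)

coarse = torch.randn(1, conv_dim, 32, 32)    # previous (lower-resolution) decoder output
backbone_feat = torch.randn(1, 512, 64, 64)  # higher-resolution backbone feature

cur_fpn = lateral_conv(backbone_feat)
merged = cur_fpn + F.interpolate(coarse, size=cur_fpn.shape[-2:], mode="bilinear", align_corners=False)
out = F.relu(output_conv(merged))            # ReLU mirrors the activation attached to output_conv above

print(out.shape)  # torch.Size([1, 256, 64, 64])
```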
diff --git a/spaces/ESG-TFM-UV/ESG_API_BATCH/app.py b/spaces/ESG-TFM-UV/ESG_API_BATCH/app.py
deleted file mode 100644
index 241e2aada96ea6d49dfba56e47d642731f8834af..0000000000000000000000000000000000000000
--- a/spaces/ESG-TFM-UV/ESG_API_BATCH/app.py
+++ /dev/null
@@ -1,428 +0,0 @@
-
-import os
-import re
-import math
-import requests
-import json
-import itertools
-
-import numpy as np
-import pandas as pd
-
-import onnxruntime
-import onnx
-import gradio as gr
-
-from huggingface_hub import hf_hub_url, cached_download
-from transformers import AutoTokenizer
-from transformers import pipeline
-
-try:
- from extractnet import Extractor
- EXTRACTOR_NET = 'extractnet'
-except ImportError:
- try:
- from dragnet import extract_content
- EXTRACTOR_NET = 'dragnet'
- except ImportError:
- try:
- import trafilatura
- from trafilatura.settings import use_config
- EXTRACTOR_NET = 'trafilatura'
- trafilatura_config = use_config()
- trafilatura_config.set("DEFAULT", "EXTRACTION_TIMEOUT", "0") #To avoid it runnig signals to avoid clashing with gradio threads
- except ImportError:
- raise ImportError
-
-print('[i] Using',EXTRACTOR_NET)
-
-import spacy
-
-from bertopic import BERTopic
-
-import nltk
-nltk.download('stopwords')
-nltk.download('wordnet')
-nltk.download('omw-1.4')
-from nltk.corpus import stopwords
-from nltk.stem import WordNetLemmatizer
-from nltk.stem import PorterStemmer
-
-from unicodedata import normalize
-
-
-
-OUT_HEADERS = ['E','S','G']
-DF_SP500 = pd.read_csv('SP500_constituents.zip',compression=dict(method='zip'))
-
-MODEL_TRANSFORMER_BASED = "distilbert-base-uncased"
-MODEL_ONNX_FNAME = "ESG_classifier_batch.onnx"
-MODEL_SENTIMENT_ANALYSIS = "ProsusAI/finbert"
-#MODEL3
-#BERTOPIC_REPO_ID = "oMateos2020/BERTopic-paraphrase-MiniLM-L3-v2-51topics-guided-model3"
-#BERTOPIC_FILENAME = "BERTopic-paraphrase-MiniLM-L3-v2-51topics-guided-model3"
-#bertopic_model = BERTopic.load(cached_download(hf_hub_url(BERTOPIC_REPO_ID , BERTOPIC_FILENAME )), embedding_model="paraphrase-MiniLM-L3-v2")
-
-BERTOPIC_REPO_ID = "oMateos2020/BERTopic-distilbert-base-nli-mean-tokens"
-BERTOPIC_FILENAME = "BERTopic-distilbert-base-nli-mean-tokens"
-bertopic_model = BERTopic.load(cached_download(hf_hub_url(BERTOPIC_REPO_ID , BERTOPIC_FILENAME )))
-
-#SECTOR_LIST = list(DF_SP500.Sector.unique())
-SECTOR_LIST = ['Industry',
- 'Health',
- 'Technology',
- 'Communication',
- 'Consumer Staples',
- 'Consumer Discretionary',
- 'Utilities',
- 'Financials',
- 'Materials',
- 'Real Estate',
- 'Energy']
-
-
-def _topic_sanitize_word(text):
- """Función realiza una primera limpieza-normalización del texto a traves de expresiones regex"""
- text = re.sub(r'@[\w_]+|#[\w_]+|https?://[\w_./]+', '', text) # Elimina menciones y URL, esto sería más para Tweets pero por si hay alguna mención o URL al ser criticas web
- text = re.sub('\S*@\S*\s?', '', text) # Elimina correos electronicos
- text = re.sub(r'\((\d+)\)', '', text) #Elimina numeros entre parentesis
- text = re.sub(r'^\d+', '', text) #Elimina numeros sueltos
- text = re.sub(r'\n', '', text) #Elimina saltos de linea
- text = re.sub('\s+', ' ', text) # Elimina espacios en blanco adicionales
- text = re.sub(r'[“”]', '', text) # Elimina caracter citas
- text = re.sub(r'[()]', '', text) # Elimina parentesis
- text = re.sub('\.', '', text) # Elimina punto
- text = re.sub('\,', '', text) # Elimina coma
- text = re.sub('’s', '', text) # Elimina posesivos
- #text = re.sub(r'-+', '', text) # Quita guiones para unir palabras compuestas (normalizaría algunos casos, exmujer y ex-mujer, todos a exmujer)
- text = re.sub(r'\.{3}', ' ', text) # Reemplaza puntos suspensivos
- # Esta exp regular se ha incluido "a mano" tras ver que era necesaria para algunos ejemplos
- text = re.sub(r"([\.\?])", r"\1 ", text) # Introduce espacio despues de punto e interrogacion
- # -> NFD (Normalization Form Canonical Decomposition) y eliminar diacríticos
- text = re.sub(r"([^n\u0300-\u036f]|n(?!\u0303(?![\u0300-\u036f])))[\u0300-\u036f]+", r"\1",
- normalize( "NFD", text), 0, re.I) # Eliminación de diacriticos (acentos y variantes puntuadas de caracteres por su forma simple excepto la 'ñ')
- # -> NFC (Normalization Form Canonical Composition)
- text = normalize( 'NFC', text)
-
- return text.lower().strip()
-
-def _topic_clean_text(text, lemmatize=True, stem=True):
- words = text.split()
- non_stopwords = [word for word in words if word not in stopwords.words('english')]
- clean_text = [_topic_sanitize_word(word) for word in non_stopwords]
- if lemmatize:
- lemmatizer = WordNetLemmatizer()
- clean_text = [lemmatizer.lemmatize(word) for word in clean_text]
- if stem:
- ps =PorterStemmer()
- clean_text = [ps.stem(word) for word in clean_text]
-
- return ' '.join(clean_text).strip()
-
-SECTOR_TOPICS = []
-for sector in SECTOR_LIST:
- topics, _ = bertopic_model.find_topics(_topic_clean_text(sector), top_n=5)
- SECTOR_TOPICS.append(topics)
-
-def _topic2sector(pred_topics):
- out = []
- for pred_topic in pred_topics:
- relevant_sectors = []
- for i in range(len(SECTOR_LIST)):
- if pred_topic in SECTOR_TOPICS[i]:
- relevant_sectors.append(list(DF_SP500.Sector.unique())[i])
- out.append(relevant_sectors)
- return out
-
-def _inference_topic_match(text):
- out, _ = bertopic_model.transform([_topic_clean_text(t) for t in text])
- return out
-
-def get_company_sectors(extracted_names, threshold=0.95):
- '''
- '''
- from thefuzz import process, fuzz
- output = []
- standard_names_tuples = []
- for extracted_name in extracted_names:
- name_match = process.extractOne(extracted_name,
- DF_SP500.Name,
- scorer=fuzz.token_set_ratio)
- similarity = name_match[1]/100
- if similarity >= threshold:
- standard_names_tuples.append(name_match[:2])
-
- for extracted_name in extracted_names:
- name_match = process.extractOne(extracted_name,
- DF_SP500.Symbol,
- scorer=fuzz.token_set_ratio)
- similarity = name_match[1]/100
- if similarity >= threshold:
- standard_names_tuples.append(name_match[:2])
-
- for std_comp_name, _ in standard_names_tuples:
- sectors = list(DF_SP500[['Name','Sector','Symbol']].where( (DF_SP500.Name == std_comp_name) | (DF_SP500.Symbol == std_comp_name)).dropna().itertuples(index=False, name=None))
- output += sectors
- return output
-
-def filter_spans(spans, keep_longest=True):
- """Filter a sequence of spans and remove duplicates or overlaps. Useful for
- creating named entities (where one token can only be part of one entity) or
- when merging spans with `Retokenizer.merge`. When spans overlap, the (first)
- longest span is preferred over shorter spans.
- spans (Iterable[Span]): The spans to filter.
- keep_longest (bool): Specify whether to keep longer or shorter spans.
- RETURNS (List[Span]): The filtered spans.
- """
- get_sort_key = lambda span: (span.end - span.start, -span.start)
- sorted_spans = sorted(spans, key=get_sort_key, reverse=keep_longest)
- #print(f'sorted_spans: {sorted_spans}')
- result = []
- seen_tokens = set()
- for span in sorted_spans:
- # Check for end - 1 here because boundaries are inclusive
- if span.start not in seen_tokens and span.end - 1 not in seen_tokens:
- result.append(span)
- seen_tokens.update(range(span.start, span.end))
- result = sorted(result, key=lambda span: span.start)
- return result
-
-
-def _inference_ner_spancat(text, limit_outputs=10):
- nlp = spacy.load("en_pipeline")
- out = []
- for doc in nlp.pipe(text):
- spans = doc.spans["sc"]
- #comp_raw_text = dict( sorted( dict(zip([str(x) for x in spans],[float(x)*penalty for x in spans.attrs['scores']])).items(), key=lambda x: x[1], reverse=True) )
- company_list = list(set([str(span).replace('\'s', '').replace('\u2019s','') for span in filter_spans(spans, keep_longest=True)]))[:limit_outputs]
- out.append(get_company_sectors(company_list))
- return out
-
-#def _inference_summary_model_pipeline(text):
-# pipe = pipeline("text2text-generation", model=MODEL_SUMMARY_PEGASUS)
-# return pipe(text,truncation='longest_first')
-
-def _inference_sentiment_model_pipeline(text):
- tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}#,'return_tensors':'pt'}
- pipe = pipeline("sentiment-analysis", model=MODEL_SENTIMENT_ANALYSIS )
- return pipe(text,**tokenizer_kwargs)
-
-#def _inference_sentiment_model_via_api_query(payload):
-# response = requests.post(API_HF_SENTIMENT_URL , headers={"Authorization": os.environ['hf_api_token']}, json=payload)
-# return response.json()
-
-def _lematise_text(text):
- nlp = spacy.load("en_core_web_sm", disable=['ner'])
- text_out = []
- for doc in nlp.pipe(text): #see https://spacy.io/models#design
- new_text = ""
- for token in doc:
- if (not token.is_punct
- and not token.is_stop
- and not token.like_url
- and not token.is_space
- and not token.like_email
- #and not token.like_num
- and not token.pos_ == "CONJ"):
-
- new_text = new_text + " " + token.lemma_
-
- text_out.append( new_text )
- return text_out
-
-def sigmoid(x):
- return 1 / (1 + np.exp(-x))
-
-def to_numpy(tensor):
- return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
-
-def is_in_archive(url):
- try:
- r = requests.get('http://archive.org/wayback/available?url='+url)
- archive = json.loads(r.text)
-
- if archive['archived_snapshots'] :
- archive['archived_snapshots']['closest']
- return {'archived':archive['archived_snapshots']['closest']['available'], 'url':archive['archived_snapshots']['closest']['url'],'error':0}
- else:
- return {'archived':False, 'url':"", 'error':0}
- except:
- print(f"[E] Quering URL ({url}) from archive.org")
- return {'archived':False, 'url':"", 'error':-1}
-
-#def _inference_ner(text):
-# return labels
-
-def _inference_classifier(text):
- tokenizer = AutoTokenizer.from_pretrained(MODEL_TRANSFORMER_BASED)
- inputs = tokenizer(_lematise_text(text), return_tensors="np", padding="max_length", truncation=True) #this assumes head-only!
- ort_session = onnxruntime.InferenceSession(MODEL_ONNX_FNAME)
- onnx_model = onnx.load(MODEL_ONNX_FNAME)
- onnx.checker.check_model(onnx_model)
-
- # compute ONNX Runtime output prediction
- ort_outs = ort_session.run(None, input_feed=dict(inputs))
-
- return sigmoid(ort_outs[0])
-
-def inference(input_batch,isurl,use_archive,filt_companies_topic,limit_companies=10):
- url_list = [] #Only used if isurl
- input_batch_content = []
-# if file_in.name is not "":
-# print("[i] Input is file:",file_in.name)
-# dft = pd.read_csv(
-# file_in.name,
-# compression=dict(method='zip')
-# )
-# assert file_col_name in dft.columns, "Indicated col_name not found in file"
-# input_batch_r = dft[file_col_name].values.tolist()
-# else:
- print("[i] Input is list")
- assert len(input_batch) > 0, "input_batch array is empty"
- input_batch_r = input_batch
-
- print("[i] Input size:",len(input_batch_r))
-
- if isurl:
- print("[i] Data is URL")
- if use_archive:
- print("[i] Use chached URL from archive.org")
- print("[i] Extracting contents using",EXTRACTOR_NET)
- for row_in in input_batch_r:
- if isinstance(row_in , list):
- url = row_in[0]
- else:
- url = row_in
- url_list.append(url)
- if use_archive:
- archive = is_in_archive(url)
- if archive['archived']:
- url = archive['url']
- #Extract the data from url
- if(EXTRACTOR_NET == 'extractnet'):
- extracted = Extractor().extract(requests.get(url).text)
- input_batch_content.append(extracted['content'])
- elif(EXTRACTOR_NET == 'dragnet'):
- extracted = extract_content(requests.get(url).content)
- input_batch_content.append(extracted)
- elif(EXTRACTOR_NET == 'trafilatura'):
- try:
- extracted = trafilatura.extract(trafilatura.fetch_url(url), include_comments=False, config=trafilatura_config, include_tables=False)
- assert len(extracted)>100, "[W] Failed extracting "+url+" retrying with archived version"
- except:
- archive = is_in_archive(url)
- if archive['archived']:
- print("[W] Using archive.org version of",url)
- url = archive['url']
- extracted = trafilatura.extract(trafilatura.fetch_url(url), include_comments=False, config=trafilatura_config, include_tables=False)
- else:
- print("[E] URL=",url,"not found")
- extracted = ""
- url_list.pop() #Remove last from list
-
- if len(extracted)>100:
- input_batch_content.append(extracted)
- else:
- print("[i] Data is news contents")
- if isinstance(input_batch_r[0], list):
- print("[i] Data is list of lists format")
- for row_in in input_batch_r:
- input_batch_content.append(row_in[0])
- else:
- print("[i] Data is single list format")
- input_batch_content = input_batch_r
-
- print("[i] Batch size:",len(input_batch_content))
- print("[i] Running ESG classifier inference...")
- prob_outs = _inference_classifier(input_batch_content)
- print("[i] Classifier output shape:",prob_outs.shape)
- print("[i] Running sentiment using",MODEL_SENTIMENT_ANALYSIS ,"inference...")
- sentiment = _inference_sentiment_model_pipeline(input_batch_content )
- print("[i] Running NER using custom spancat inference...")
- ner_labels = _inference_ner_spancat(input_batch_content ,limit_outputs=limit_companies)
- print("[i] Extracting topic using custom BERTopic...")
- topics = _inference_topic_match(input_batch_content)
- news_sectors = _topic2sector(topics)
-
- df = pd.DataFrame(prob_outs,columns =['E','S','G'])
- if isurl:
- df['URL'] = url_list
- else:
- df['content_id'] = range(1, len(input_batch_r)+1)
- df['sent_lbl'] = [d['label'] for d in sentiment ]
- df['sent_score'] = [d['score'] for d in sentiment ]
- df['topic'] = pd.DataFrame(news_sectors).iloc[:, 0]
- #df['sector_pred'] = pd.DataFrame(_topic2sector(topics)).iloc[:, 0]
- print("[i] Pandas output shape:",df.shape)
- #[[], [('Nvidia', 'Information Technology')], [('Twitter', 'Communication Services'), ('Apple', 'Information Technology')], [], [], [], [], [], []]
- df["company"] = np.nan
- df["sector"] = np.nan
- df["symbol"] = np.nan
- dfo = pd.DataFrame(columns=['E','S','G','URL','sent_lbl','sent_score','topic','company','sector','symbol'])
- for idx in range(len(df.index)):
- if ner_labels[idx]: #not empty
- for ner in ner_labels[idx]:
- if filt_companies_topic:
- if news_sectors[idx]: #not empty
- if news_sectors[idx][0] not in ner[1]:
- continue
- dfo = pd.concat( [dfo, df.loc[[idx]].assign(company=ner[0], sector=ner[1], symbol=ner[2])], join='outer', ignore_index=True) #axis=0
- print("[i] Pandas output shape:",dfo.shape)
- return dfo.drop_duplicates()
-
-title = "ESG API Demo"
-description = """This is a demonstration of the full ESG pipeline backend: given a list of URLs of English-language news articles, the article contents are extracted (using extractnet) and fed to three models:
-
-- A custom spancat NER model for company extraction
-- A custom ESG classifier for the ESG labeling of the news
-- An off-the-shelf sentiment classification model (ProsusAI/finbert)
-
-API input parameters:
-- `List`: list of texts. Either a list of URLs of (English) news articles or a list of already-extracted news contents.
-- `Data type`: int. 0 = the list contains extracted news contents, 1 = the list contains URLs.
-- `use_archive`: boolean. If set, the pipeline fetches the archive.org cached version of each URL. This is useful for old news and for articles behind a paywall.
-- `filter_companies`: boolean. Filter out companies that do not match the news topic.
-- `limit_companies`: integer. Maximum number of relevant companies to report.
-
-"""
-examples = [[ [['https://www.bbc.com/news/uk-62732447'],
- ["https://www.science.org/content/article/suspicions-grow-nanoparticles-pfizer-s-covid-19-vaccine-trigger-rare-allergic-reactions"],
- ["https://www.cnbc.com/2022/09/14/omicron-specific-covid-booster-shot-side-effects-what-to-expect.html"],
- ["https://www.reuters.com/business/healthcare-pharmaceuticals/brazil-approves-pfizer-vaccine-children-young-six-months-2022-09-17/"],
- ["https://www.statnews.com/2022/09/06/pfizer-covid-vaccines-researchers-next-gen-studies/"],
- ["https://www.cms.gov/newsroom/news-alert/updated-covid-19-vaccines-providing-protection-against-omicron-variant-available-no-cost"],
- ["https://www.bbc.com/news/health-62691102"],
- ["https://news.bloomberglaw.com/esg/abbvie-board-faces-new-investor-suit-over-humira-kickback-claims"],
- ["https://esgnews.com/amazon-backed-infinium-to-provide-ultra-low-carbon-electrofuels-for-use-in-trucking-fleet-in-2023/"],
- ["https://esgnews.com/comcast-announces-plan-to-double-energy-efficiency-by-2030-to-power-a-greener-internet/"],
- ["https://esgnews.com/ges-facts-technology-helps-the-city-of-los-angeles-move-closer-to-its-renewable-energy-goals/"],
- ['https://www.bbc.com/news/science-environment-62758811'],
- ['https://www.bbc.com/news/business-62524031'],
- ["https://www.knowesg.com/investors/blackstone-and-sphera-work-together-for-portfolio-decarbonization-program-17022022"],
- ["https://www.esgtoday.com/amazon-partners-with-matt-damons-water-org-to-provide-water-access-to-100-million-people/"],
- ["https://www.esgtoday.com/walmart-allocates-over-1-billion-to-renewable-energy-sustainable-buildings-circular-economy/"],
- ["https://www.esgtoday.com/anglo-american-ties-interest-on-745-million-bond-to-climate-water-job-creation-goals/"],
- ["https://www.esgtoday.com/blackrock-acquires-new-zealand-solar-as-a-service-provider-solarzero/"],
- ["https://www.esgtoday.com/blackrock-strikes-back-against-climate-activism-claims/"],
- ["https://www.esgtoday.com/hm-to-remove-sustainability-labels-from-products-following-investigation-by-regulator/"],
- ["https://www.knowesg.com/sustainable-finance/exxonmobil-fails-the-energy-transition-due-to-failed-governance-structure-04122021"],
- ["https://www.knowesg.com/companies/tesla-is-investigated-by-the-securities-and-exchange-commission-sec-on-solar-07122021"],
- ["https://www.knowesg.com/tech/pcg-and-exxonmobil-will-collaborate-on-plastic-recycling-in-malaysia-20092022"],
- ["https://esgnews.com/nike-launches-community-climate-resilience-program-with-2-million-grant-to-trust-for-public-land/"],
- ["https://esgnews.com/walmart-and-unitedhealth-group-collaborate-to-deliver-access-to-high-quality-affordable-health-care/"],
- ['https://www.bbc.com/news/science-environment-62680423']],'url',False,False,5]]
-demo = gr.Interface(fn=inference,
- inputs=[gr.Dataframe(label='input batch', col_count=1, datatype='str', type='array', wrap=True),
- gr.Dropdown(label='data type', choices=['text','url'], type='index', value='url'),
- gr.Checkbox(label='Parse cached in archive.org'),
- gr.Checkbox(label='Filter out companies by topic'),
- gr.Slider(minimum=1, maximum=10, step=1, label='Limit NER output', value=5)],
- outputs=[gr.Dataframe(label='output raw', col_count=1, type='pandas', wrap=True, header=OUT_HEADERS)],
- #gr.Label(label='Company'),
- #gr.Label(label='ESG'),
- #gr.Label(label='Sentiment'),
- #gr.Markdown()],
- title=title,
- description=description,
- examples=examples)
-demo.launch()
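# Illustrative sketch (not part of the original Space): calling the `inference` function wired into the
# Interface above directly from Python. The argument order mirrors the `inputs` list
# (batch, data-type index, use_archive, filter-by-topic, NER limit); the exact parameter names are assumed.
urls = [["https://www.bbc.com/news/science-environment-62758811"]]
df_out = inference(urls, 1, False, False, 5)  # 1 = 'url' in the Dropdown (type='index')
print(df_out[["URL", "E", "S", "G", "sent_lbl", "company"]])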
diff --git a/spaces/Eddycrack864/Applio-Inference/utils/i18n.py b/spaces/Eddycrack864/Applio-Inference/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = "es_ES"
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "es_ES"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
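# Illustrative sketch (not from the original file): typical use of the I18nAuto helper above.
# It assumes a ./i18n/es_ES.json translation file exists; unknown keys fall back to the key itself.
i18n = I18nAuto()
print(i18n.language)                   # "es_ES"
print(i18n("some untranslated key"))   # returns the key unchanged when no translation is found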
diff --git a/spaces/EuroPython2022/Paddy_Disease_Classification/app.py b/spaces/EuroPython2022/Paddy_Disease_Classification/app.py
deleted file mode 100644
index 6c4c180a0bb8fd6d97908e2b6906421db6a65349..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/Paddy_Disease_Classification/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import albumentations
-import cv2
-import torch
-import timm
-import gradio as gr
-import numpy as np
-import os
-import random
-
-device = torch.device('cpu')
-
-labels = {
- 0: 'bacterial_leaf_blight',
- 1: 'bacterial_leaf_streak',
- 2: 'bacterial_panicle_blight',
- 3: 'blast',
- 4: 'brown_spot',
- 5: 'dead_heart',
- 6: 'downy_mildew',
- 7: 'hispa',
- 8: 'normal',
- 9: 'tungro'
- }
-
-def inference_fn(model, image=None):
- model.eval()
- image = image.to(device)
- with torch.no_grad():
- output = model(image.unsqueeze(0))
- out = output.sigmoid().detach().cpu().numpy().flatten()
- return out
-
-
-def predict(image=None) -> dict:
- mean = (0.485, 0.456, 0.406)
- std = (0.229, 0.224, 0.225)
-
- augmentations = albumentations.Compose(
- [
- albumentations.Resize(256, 256),
- albumentations.HorizontalFlip(p=0.5),
- albumentations.VerticalFlip(p=0.5),
- albumentations.Normalize(mean, std, max_pixel_value=255.0, always_apply=True),
- ]
- )
-
- augmented = augmentations(image=image)
- image = augmented["image"]
- image = np.transpose(image, (2, 0, 1))
- image = torch.tensor(image, dtype=torch.float32)
- model = timm.create_model('efficientnet_b0', pretrained=False, num_classes=10)
- model.load_state_dict(torch.load("paddy_model.pth", map_location=torch.device(device)))
- model.to(device)
-
- predicted = inference_fn(model, image)
-
- return {labels[i]: float(predicted[i]) for i in range(10)}
-
-
-gr.Interface(fn=predict,
- inputs=gr.inputs.Image(),
- outputs=gr.outputs.Label(num_top_classes=10),
- examples=["200005.jpg", "200006.jpg"], interpretation='default').launch()
\ No newline at end of file
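# Illustrative sketch (not part of the original app): calling predict() above outside Gradio.
# It assumes one of the bundled example images ("200005.jpg") and "paddy_model.pth" are present.
img = cv2.cvtColor(cv2.imread("200005.jpg"), cv2.COLOR_BGR2RGB)
scores = predict(img)
print(max(scores, key=scores.get))  # most likely disease label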
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/synthtext.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/synthtext.py
deleted file mode 100644
index fb9a44b3422dae5a9788d39b0901335dfc6076a9..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/synthtext.py
+++ /dev/null
@@ -1,18 +0,0 @@
-dataset_type = 'TextDetDataset'
-data_root = 'data/synthtext'
-
-train = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_training.lmdb',
- loader=dict(
- type='AnnFileLoader',
- repeat=1,
- file_format='lmdb',
- parser=dict(
- type='LineJsonParser',
- keys=['file_name', 'height', 'width', 'annotations'])),
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-train_list = [train]
-test_list = [train]
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/textsnake_r50_fpn_unet.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/textsnake_r50_fpn_unet.py
deleted file mode 100644
index 7d74f376b8c635451a3036e780ffc88e7640bf2c..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/textsnake_r50_fpn_unet.py
+++ /dev/null
@@ -1,22 +0,0 @@
-model = dict(
- type='TextSnake',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='BN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN_UNet', in_channels=[256, 512, 1024, 2048], out_channels=32),
- bbox_head=dict(
- type='TextSnakeHead',
- in_channels=32,
- loss=dict(type='TextSnakeLoss'),
- postprocessor=dict(
- type='TextSnakePostprocessor', text_repr_type='poly')),
- train_cfg=None,
- test_cfg=None)
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/loss_util.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/loss_util.py
deleted file mode 100644
index 744eeb46d1f3b5a7b4553ca23237ddd9c899a698..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/loss_util.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import functools
-from torch.nn import functional as F
-
-
-def reduce_loss(loss, reduction):
- """Reduce loss as specified.
-
- Args:
- loss (Tensor): Elementwise loss tensor.
- reduction (str): Options are 'none', 'mean' and 'sum'.
-
- Returns:
- Tensor: Reduced loss tensor.
- """
- reduction_enum = F._Reduction.get_enum(reduction)
- # none: 0, elementwise_mean:1, sum: 2
- if reduction_enum == 0:
- return loss
- elif reduction_enum == 1:
- return loss.mean()
- else:
- return loss.sum()
-
-
-def weight_reduce_loss(loss, weight=None, reduction='mean'):
- """Apply element-wise weight and reduce loss.
-
- Args:
- loss (Tensor): Element-wise loss.
- weight (Tensor): Element-wise weights. Default: None.
- reduction (str): Same as built-in losses of PyTorch. Options are
- 'none', 'mean' and 'sum'. Default: 'mean'.
-
- Returns:
- Tensor: Loss values.
- """
- # if weight is specified, apply element-wise weight
- if weight is not None:
- assert weight.dim() == loss.dim()
- assert weight.size(1) == 1 or weight.size(1) == loss.size(1)
- loss = loss * weight
-
- # if weight is not specified or reduction is sum, just reduce the loss
- if weight is None or reduction == 'sum':
- loss = reduce_loss(loss, reduction)
- # if reduction is mean, then compute mean over weight region
- elif reduction == 'mean':
- if weight.size(1) > 1:
- weight = weight.sum()
- else:
- weight = weight.sum() * loss.size(1)
- loss = loss.sum() / weight
-
- return loss
-
-
-def weighted_loss(loss_func):
- """Create a weighted version of a given loss function.
-
- To use this decorator, the loss function must have the signature like
- `loss_func(pred, target, **kwargs)`. The function only needs to compute
- element-wise loss without any reduction. This decorator will add weight
- and reduction arguments to the function. The decorated function will have
- the signature like `loss_func(pred, target, weight=None, reduction='mean',
- **kwargs)`.
-
- :Example:
-
- >>> import torch
- >>> @weighted_loss
- >>> def l1_loss(pred, target):
- >>> return (pred - target).abs()
-
- >>> pred = torch.Tensor([0, 2, 3])
- >>> target = torch.Tensor([1, 1, 1])
- >>> weight = torch.Tensor([1, 0, 1])
-
- >>> l1_loss(pred, target)
- tensor(1.3333)
- >>> l1_loss(pred, target, weight)
- tensor(1.5000)
- >>> l1_loss(pred, target, reduction='none')
- tensor([1., 1., 2.])
- >>> l1_loss(pred, target, weight, reduction='sum')
- tensor(3.)
- """
-
- @functools.wraps(loss_func)
- def wrapper(pred, target, weight=None, reduction='mean', **kwargs):
- # get element-wise loss
- loss = loss_func(pred, target, **kwargs)
- loss = weight_reduce_loss(loss, weight, reduction)
- return loss
-
- return wrapper
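# Illustrative check (not part of the original module) of weight_reduce_loss above:
# with a per-row weight of shape (N, 1), only the weighted rows contribute to the mean.
import torch

loss = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
weight = torch.tensor([[1.0], [0.0]])
print(weight_reduce_loss(loss, weight, reduction='mean'))  # tensor(1.5000): (1 + 2) / 2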
diff --git a/spaces/Feraxin/chatGPT/baidu_translate/module.py b/spaces/Feraxin/chatGPT/baidu_translate/module.py
deleted file mode 100644
index b9be1ed0230456ff6b53fe62fa6e550056f917d8..0000000000000000000000000000000000000000
--- a/spaces/Feraxin/chatGPT/baidu_translate/module.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import argparse
-import random, os
-from hashlib import md5
-from typing import Optional
-
-import requests
-
-import paddlehub as hub
-from paddlehub.module.module import moduleinfo
-from paddlehub.module.module import runnable
-from paddlehub.module.module import serving
-
-
-def make_md5(s, encoding='utf-8'):
- return md5(s.encode(encoding)).hexdigest()
-
-
-@moduleinfo(name="baidu_translate",
- version="1.0.0",
- type="text/machine_translation",
- summary="",
- author="baidu-nlp",
- author_email="paddle-dev@baidu.com")
-class BaiduTranslate:
-
-    def __init__(self, appid=None, appkey=None):
-        """
-        :param appid: appid for requesting Baidu translation service.
-        :param appkey: appkey for requesting Baidu translation service.
-        """
-        # Fall back to the environment variables only when no explicit credentials are passed.
-        appid = appid or os.environ.get('baidu_translate_appid')
-        appkey = appkey or os.environ.get('baidu_translate_appkey')
-        self.appid = appid if appid is not None else ''
-        self.appkey = appkey if appkey is not None else ''
- self.url = 'http://api.fanyi.baidu.com/api/trans/vip/translate'
-
-    def translate(self, query: str, from_lang: Optional[str] = "en", to_lang: Optional[str] = "zh"):
-        """
-        Translate text between languages using the Baidu translation API.
-
-        :param query: Text to be translated.
-        :param from_lang: Source language.
-        :param to_lang: Destination language.
-
-        Return translated string.
-        """
- # Generate salt and sign
- salt = random.randint(32768, 65536)
- sign = make_md5(self.appid + query + str(salt) + self.appkey)
-
- # Build request
- headers = {'Content-Type': 'application/x-www-form-urlencoded'}
- payload = {'appid': self.appid, 'q': query, 'from': from_lang, 'to': to_lang, 'salt': salt, 'sign': sign}
-
- # Send request
- try:
- r = requests.post(self.url, params=payload, headers=headers)
- result = r.json()
- except Exception as e:
- error_msg = str(e)
- raise RuntimeError(error_msg)
- if 'error_code' in result:
- raise RuntimeError(result['error_msg'])
- return result['trans_result'][0]['dst']
-
- @runnable
- def run_cmd(self, argvs):
- """
- Run as a command.
- """
- self.parser = argparse.ArgumentParser(description="Run the {} module.".format(self.name),
- prog='hub run {}'.format(self.name),
- usage='%(prog)s',
- add_help=True)
- self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
- self.add_module_input_arg()
- args = self.parser.parse_args(argvs)
- if args.appid is not None and args.appkey is not None:
- self.appid = args.appid
- self.appkey = args.appkey
- result = self.translate(args.query, args.from_lang, args.to_lang)
- return result
-
- @serving
- def serving_method(self, query, from_lang, to_lang):
- """
- Run as a service.
- """
- return self.translate(query, from_lang, to_lang)
-
- def add_module_input_arg(self):
- """
- Add the command input options.
- """
- self.arg_input_group.add_argument('--query', type=str)
-        self.arg_input_group.add_argument('--from_lang', type=str, default='en', help="source language")
-        self.arg_input_group.add_argument('--to_lang', type=str, default='zh', help="target language")
-        self.arg_input_group.add_argument('--appid', type=str, default=None, help="personal appid obtained after registration")
-        self.arg_input_group.add_argument('--appkey', type=str, default=None, help="personal appkey obtained after registration")
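# Illustrative sketch (not part of the original module): using BaiduTranslate above directly.
# Real Baidu credentials must be available (constructor args or the environment variables read in __init__).
translator = BaiduTranslate()
print(translator.translate("sustainable finance", from_lang="en", to_lang="zh"))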
diff --git a/spaces/GXSA/bingo/src/components/tone-selector.tsx b/spaces/GXSA/bingo/src/components/tone-selector.tsx
deleted file mode 100644
index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/components/tone-selector.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import React from 'react'
-import { BingConversationStyle } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-
-type ToneItem = {
- type: BingConversationStyle,
- name: string
-}
-
-const ToneList: ToneItem[] = [
- { name: '有创造力', type: BingConversationStyle.Creative },
- { name: '更平衡', type: BingConversationStyle.Balanced },
- { name: '更精确', type: BingConversationStyle.Precise }
-]
-
-interface ToneSelectorProps {
- type: BingConversationStyle | ''
- onChange?: (type: BingConversationStyle) => void
-}
-
-export function ToneSelector({ type, onChange }: ToneSelectorProps) {
- return (
-
-
- 选择对话样式
-
-
-
- {
- ToneList.map(tone => (
-
onChange?.(tone.type)}>
-
-
- ))
- }
-
-
-
- )
-}
diff --git a/spaces/GeorgeOrville/bingo/src/pages/api/kblob.ts b/spaces/GeorgeOrville/bingo/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
- api: {
- bodyParser: {
- sizeLimit: '10mb' // Set desired value here
- }
- }
-}
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
- const formData = new FormData()
- formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
- if (imageBase64) {
- formData.append('imageBase64', imageBase64)
- }
-
- const response = await fetch(`${API_DOMAIN}/images/kblob`,
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": `${API_DOMAIN}/web/index.html`,
- "Referrer-Policy": "origin-when-cross-origin",
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- ...formData.getHeaders()
- }
- }
- ).then(res => res.text())
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
- res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } }))
- } catch (e) {
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/Gradio-Blocks/ViTPose/model.py b/spaces/Gradio-Blocks/ViTPose/model.py
deleted file mode 100644
index f4a2d2b0480e4ba3c036006b6b27104d67d6d57b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/ViTPose/model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-from __future__ import annotations
-
-import os
-import pathlib
-import shlex
-import subprocess
-import sys
-
-if os.getenv('SYSTEM') == 'spaces':
- import mim
-
- mim.uninstall('mmcv-full', confirm_yes=True)
- mim.install('mmcv-full==1.5.0', is_yes=True)
-
- subprocess.run(shlex.split('pip uninstall -y opencv-python'))
- subprocess.run(shlex.split('pip uninstall -y opencv-python-headless'))
- subprocess.run(shlex.split('pip install opencv-python-headless==4.8.0.74'))
-
-import huggingface_hub
-import numpy as np
-import torch
-import torch.nn as nn
-
-app_dir = pathlib.Path(__file__).parent
-submodule_dir = app_dir / 'ViTPose'
-sys.path.insert(0, submodule_dir.as_posix())
-
-from mmdet.apis import inference_detector, init_detector
-from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
- process_mmdet_results, vis_pose_result)
-
-
-class DetModel:
- MODEL_DICT = {
- 'YOLOX-tiny': {
- 'config':
- 'mmdet_configs/configs/yolox/yolox_tiny_8x8_300e_coco.py',
- 'model':
- 'https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_tiny_8x8_300e_coco/yolox_tiny_8x8_300e_coco_20211124_171234-b4047906.pth',
- },
- 'YOLOX-s': {
- 'config':
- 'mmdet_configs/configs/yolox/yolox_s_8x8_300e_coco.py',
- 'model':
- 'https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_s_8x8_300e_coco/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth',
- },
- 'YOLOX-l': {
- 'config':
- 'mmdet_configs/configs/yolox/yolox_l_8x8_300e_coco.py',
- 'model':
- 'https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth',
- },
- 'YOLOX-x': {
- 'config':
- 'mmdet_configs/configs/yolox/yolox_x_8x8_300e_coco.py',
- 'model':
- 'https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_x_8x8_300e_coco/yolox_x_8x8_300e_coco_20211126_140254-1ef88d67.pth',
- },
- }
-
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self._load_all_models_once()
- self.model_name = 'YOLOX-l'
- self.model = self._load_model(self.model_name)
-
- def _load_all_models_once(self) -> None:
- for name in self.MODEL_DICT:
- self._load_model(name)
-
- def _load_model(self, name: str) -> nn.Module:
- d = self.MODEL_DICT[name]
- return init_detector(d['config'], d['model'], device=self.device)
-
- def set_model(self, name: str) -> None:
- if name == self.model_name:
- return
- self.model_name = name
- self.model = self._load_model(name)
-
- def detect_and_visualize(
- self, image: np.ndarray,
- score_threshold: float) -> tuple[list[np.ndarray], np.ndarray]:
- out = self.detect(image)
- vis = self.visualize_detection_results(image, out, score_threshold)
- return out, vis
-
- def detect(self, image: np.ndarray) -> list[np.ndarray]:
- image = image[:, :, ::-1] # RGB -> BGR
- out = inference_detector(self.model, image)
- return out
-
- def visualize_detection_results(
- self,
- image: np.ndarray,
- detection_results: list[np.ndarray],
- score_threshold: float = 0.3) -> np.ndarray:
- person_det = [detection_results[0]] + [np.array([]).reshape(0, 5)] * 79
-
- image = image[:, :, ::-1] # RGB -> BGR
- vis = self.model.show_result(image,
- person_det,
- score_thr=score_threshold,
- bbox_color=None,
- text_color=(200, 200, 200),
- mask_color=None)
- return vis[:, :, ::-1] # BGR -> RGB
-
-
-class AppDetModel(DetModel):
- def run(self, model_name: str, image: np.ndarray,
- score_threshold: float) -> tuple[list[np.ndarray], np.ndarray]:
- self.set_model(model_name)
- return self.detect_and_visualize(image, score_threshold)
-
-
-class PoseModel:
- MODEL_DICT = {
- 'ViTPose-B (single-task train)': {
- 'config':
- 'ViTPose/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/ViTPose_base_coco_256x192.py',
- 'model': 'models/vitpose-b.pth',
- },
- 'ViTPose-L (single-task train)': {
- 'config':
- 'ViTPose/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/ViTPose_large_coco_256x192.py',
- 'model': 'models/vitpose-l.pth',
- },
- 'ViTPose-B (multi-task train, COCO)': {
- 'config':
- 'ViTPose/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/ViTPose_base_coco_256x192.py',
- 'model': 'models/vitpose-b-multi-coco.pth',
- },
- 'ViTPose-L (multi-task train, COCO)': {
- 'config':
- 'ViTPose/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/ViTPose_large_coco_256x192.py',
- 'model': 'models/vitpose-l-multi-coco.pth',
- },
- }
-
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.model_name = 'ViTPose-B (multi-task train, COCO)'
- self.model = self._load_model(self.model_name)
-
- def _load_all_models_once(self) -> None:
- for name in self.MODEL_DICT:
- self._load_model(name)
-
- def _load_model(self, name: str) -> nn.Module:
- d = self.MODEL_DICT[name]
- ckpt_path = huggingface_hub.hf_hub_download('public-data/ViTPose',
- d['model'])
- model = init_pose_model(d['config'], ckpt_path, device=self.device)
- return model
-
- def set_model(self, name: str) -> None:
- if name == self.model_name:
- return
- self.model_name = name
- self.model = self._load_model(name)
-
- def predict_pose_and_visualize(
- self,
- image: np.ndarray,
- det_results: list[np.ndarray],
- box_score_threshold: float,
- kpt_score_threshold: float,
- vis_dot_radius: int,
- vis_line_thickness: int,
- ) -> tuple[list[dict[str, np.ndarray]], np.ndarray]:
- out = self.predict_pose(image, det_results, box_score_threshold)
- vis = self.visualize_pose_results(image, out, kpt_score_threshold,
- vis_dot_radius, vis_line_thickness)
- return out, vis
-
- def predict_pose(
- self,
- image: np.ndarray,
- det_results: list[np.ndarray],
- box_score_threshold: float = 0.5) -> list[dict[str, np.ndarray]]:
- image = image[:, :, ::-1] # RGB -> BGR
- person_results = process_mmdet_results(det_results, 1)
- out, _ = inference_top_down_pose_model(self.model,
- image,
- person_results=person_results,
- bbox_thr=box_score_threshold,
- format='xyxy')
- return out
-
- def visualize_pose_results(self,
- image: np.ndarray,
- pose_results: list[np.ndarray],
- kpt_score_threshold: float = 0.3,
- vis_dot_radius: int = 4,
- vis_line_thickness: int = 1) -> np.ndarray:
- image = image[:, :, ::-1] # RGB -> BGR
- vis = vis_pose_result(self.model,
- image,
- pose_results,
- kpt_score_thr=kpt_score_threshold,
- radius=vis_dot_radius,
- thickness=vis_line_thickness)
- return vis[:, :, ::-1] # BGR -> RGB
-
-
-class AppPoseModel(PoseModel):
- def run(
- self, model_name: str, image: np.ndarray,
- det_results: list[np.ndarray], box_score_threshold: float,
- kpt_score_threshold: float, vis_dot_radius: int,
- vis_line_thickness: int
- ) -> tuple[list[dict[str, np.ndarray]], np.ndarray]:
- self.set_model(model_name)
- return self.predict_pose_and_visualize(image, det_results,
- box_score_threshold,
- kpt_score_threshold,
- vis_dot_radius,
- vis_line_thickness)
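# Illustrative sketch (not part of the original file): chaining the detector and pose wrappers above.
# "person.jpg" is a placeholder path; model weights are fetched on first use.
import cv2

image = cv2.cvtColor(cv2.imread("person.jpg"), cv2.COLOR_BGR2RGB)
det_model = DetModel()
det_results, det_vis = det_model.detect_and_visualize(image, score_threshold=0.5)
pose_model = PoseModel()
pose_results, pose_vis = pose_model.predict_pose_and_visualize(
    image, det_results,
    box_score_threshold=0.5, kpt_score_threshold=0.3,
    vis_dot_radius=4, vis_line_thickness=1)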
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/exp/mask_rcnn_1x_hybrid_small/run.sh b/spaces/Gradio-Blocks/uniformer_image_detection/exp/mask_rcnn_1x_hybrid_small/run.sh
deleted file mode 100644
index fbe76fb398212d2eb93f98007ea28d31cbb65ebe..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/exp/mask_rcnn_1x_hybrid_small/run.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-
-work_path=$(dirname $0)
-PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \
-python -m torch.distributed.launch --nproc_per_node=8 \
- tools/train.py ${work_path}/config.py \
- --launcher pytorch \
- --cfg-options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \
- --work-dir ${work_path}/ckpt \
- 2>&1 | tee -a ${work_path}/log.txt
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 010f86f1aac1b5c827dec29f692d137dc1c399bf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 09604c39729abfc9015eb971069b987c8d8a82cb..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/dnl_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index e3924ad679cb3d7ba731322f9cdb67410baae59a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = '../deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(
- type='ResNeSt',
- stem_channels=128,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True))
diff --git a/spaces/HUBioDataLab/DrugGEN/models.py b/spaces/HUBioDataLab/DrugGEN/models.py
deleted file mode 100644
index 9927302aee0f987095ad035513e55f34c27fe1d5..0000000000000000000000000000000000000000
--- a/spaces/HUBioDataLab/DrugGEN/models.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from layers import TransformerEncoder, TransformerDecoder
-
-class Generator(nn.Module):
- """Generator network."""
- def __init__(self, z_dim, act, vertexes, edges, nodes, dropout, dim, depth, heads, mlp_ratio, submodel):
- super(Generator, self).__init__()
-
- self.submodel = submodel
- self.vertexes = vertexes
- self.edges = edges
- self.nodes = nodes
- self.depth = depth
- self.dim = dim
- self.heads = heads
- self.mlp_ratio = mlp_ratio
-
- self.dropout = dropout
- self.z_dim = z_dim
-
- if act == "relu":
- act = nn.ReLU()
- elif act == "leaky":
- act = nn.LeakyReLU()
- elif act == "sigmoid":
- act = nn.Sigmoid()
- elif act == "tanh":
- act = nn.Tanh()
- self.features = vertexes * vertexes * edges + vertexes * nodes
- self.transformer_dim = vertexes * vertexes * dim + vertexes * dim
- self.pos_enc_dim = 5
- #self.pos_enc = nn.Linear(self.pos_enc_dim, self.dim)
-
- self.node_layers = nn.Sequential(nn.Linear(nodes, 64), act, nn.Linear(64,dim), act, nn.Dropout(self.dropout))
- self.edge_layers = nn.Sequential(nn.Linear(edges, 64), act, nn.Linear(64,dim), act, nn.Dropout(self.dropout))
-
- self.TransformerEncoder = TransformerEncoder(dim=self.dim, depth=self.depth, heads=self.heads, act = act,
- mlp_ratio=self.mlp_ratio, drop_rate=self.dropout)
-
- self.readout_e = nn.Linear(self.dim, edges)
- self.readout_n = nn.Linear(self.dim, nodes)
- self.softmax = nn.Softmax(dim = -1)
-
- def _generate_square_subsequent_mask(self, sz):
- mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
- mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
- return mask
-
- def laplacian_positional_enc(self, adj):
-
- A = adj
- D = torch.diag(torch.count_nonzero(A, dim=-1))
- L = torch.eye(A.shape[0], device=A.device) - D * A * D
-
- EigVal, EigVec = torch.linalg.eig(L)
-
- idx = torch.argsort(torch.real(EigVal))
- EigVal, EigVec = EigVal[idx], torch.real(EigVec[:,idx])
- pos_enc = EigVec[:,1:self.pos_enc_dim + 1]
-
- return pos_enc
-
- def forward(self, z_e, z_n):
- b, n, c = z_n.shape
- _, _, _ , d = z_e.shape
- #random_mask_e = torch.randint(low=0,high=2,size=(b,n,n,d)).to(z_e.device).float()
- #random_mask_n = torch.randint(low=0,high=2,size=(b,n,c)).to(z_n.device).float()
- #z_e = F.relu(z_e - random_mask_e)
- #z_n = F.relu(z_n - random_mask_n)
-
- #mask = self._generate_square_subsequent_mask(self.vertexes).to(z_e.device)
-
- node = self.node_layers(z_n)
-
- edge = self.edge_layers(z_e)
-
- edge = (edge + edge.permute(0,2,1,3))/2
-
- #lap = [self.laplacian_positional_enc(torch.max(x,-1)[1]) for x in edge]
-
- #lap = torch.stack(lap).to(node.device)
-
- #pos_enc = self.pos_enc(lap)
-
- #node = node + pos_enc
-
- node, edge = self.TransformerEncoder(node,edge)
-
- node_sample = self.softmax(self.readout_n(node))
-
- edge_sample = self.softmax(self.readout_e(edge))
-
- return node, edge, node_sample, edge_sample
-
-
-
-class Generator2(nn.Module):
- def __init__(self, dim, dec_dim, depth, heads, mlp_ratio, drop_rate, drugs_m_dim, drugs_b_dim, submodel):
- super().__init__()
- self.submodel = submodel
- self.depth = depth
- self.dim = dim
- self.mlp_ratio = mlp_ratio
- self.heads = heads
- self.dropout_rate = drop_rate
- self.drugs_m_dim = drugs_m_dim
- self.drugs_b_dim = drugs_b_dim
-
- self.pos_enc_dim = 5
-
-
- if self.submodel == "Prot":
- self.prot_n = torch.nn.Linear(3822, 45) ## exact dimension of protein features
- self.prot_e = torch.nn.Linear(298116, 2025) ## exact dimension of protein features
-
- self.protn_dim = torch.nn.Linear(1, dec_dim)
- self.prote_dim = torch.nn.Linear(1, dec_dim)
-
-
- self.mol_nodes = nn.Linear(dim, dec_dim)
- self.mol_edges = nn.Linear(dim, dec_dim)
-
- self.drug_nodes = nn.Linear(self.drugs_m_dim, dec_dim)
- self.drug_edges = nn.Linear(self.drugs_b_dim, dec_dim)
-
- self.TransformerDecoder = TransformerDecoder(dec_dim, depth, heads, mlp_ratio, drop_rate=self.dropout_rate)
-
- self.nodes_output_layer = nn.Linear(dec_dim, self.drugs_m_dim)
- self.edges_output_layer = nn.Linear(dec_dim, self.drugs_b_dim)
- self.softmax = nn.Softmax(dim=-1)
-
- def laplacian_positional_enc(self, adj):
-
- A = adj
- D = torch.diag(torch.count_nonzero(A, dim=-1))
- L = torch.eye(A.shape[0], device=A.device) - D * A * D
-
- EigVal, EigVec = torch.linalg.eig(L)
-
- idx = torch.argsort(torch.real(EigVal))
- EigVal, EigVec = EigVal[idx], torch.real(EigVec[:,idx])
- pos_enc = EigVec[:,1:self.pos_enc_dim + 1]
-
- return pos_enc
-
- def _generate_square_subsequent_mask(self, sz):
- mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
- mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
- return mask
-
- def forward(self, edges_logits, nodes_logits ,akt1_adj,akt1_annot):
-
- edges_logits = self.mol_edges(edges_logits)
- nodes_logits = self.mol_nodes(nodes_logits)
-
- if self.submodel != "Prot":
- akt1_annot = self.drug_nodes(akt1_annot)
- akt1_adj = self.drug_edges(akt1_adj)
-
- else:
- akt1_adj = self.prote_dim(self.prot_e(akt1_adj).view(1,45,45,1))
- akt1_annot = self.protn_dim(self.prot_n(akt1_annot).view(1,45,1))
-
-
- #lap = [self.laplacian_positional_enc(torch.max(x,-1)[1]) for x in drug_e]
- #lap = torch.stack(lap).to(drug_e.device)
- #pos_enc = self.pos_enc(lap)
- #drug_n = drug_n + pos_enc
-
- if self.submodel == "Ligand" or self.submodel == "RL" :
- nodes_logits,akt1_annot, edges_logits, akt1_adj = self.TransformerDecoder(akt1_annot,nodes_logits,akt1_adj,edges_logits)
-
- else:
- nodes_logits,akt1_annot, edges_logits, akt1_adj = self.TransformerDecoder(nodes_logits,akt1_annot,edges_logits,akt1_adj)
-
- edges_logits = self.edges_output_layer(edges_logits)
- nodes_logits = self.nodes_output_layer(nodes_logits)
-
- edges_logits = self.softmax(edges_logits)
- nodes_logits = self.softmax(nodes_logits)
-
- return edges_logits, nodes_logits
-
-
-class simple_disc(nn.Module):
- def __init__(self, act, m_dim, vertexes, b_dim):
- super().__init__()
- if act == "relu":
- act = nn.ReLU()
- elif act == "leaky":
- act = nn.LeakyReLU()
- elif act == "sigmoid":
- act = nn.Sigmoid()
- elif act == "tanh":
- act = nn.Tanh()
- features = vertexes * m_dim + vertexes * vertexes * b_dim
-
- self.predictor = nn.Sequential(nn.Linear(features,256), act, nn.Linear(256,128), act, nn.Linear(128,64), act,
- nn.Linear(64,32), act, nn.Linear(32,16), act,
- nn.Linear(16,1))
-
- def forward(self, x):
-
- prediction = self.predictor(x)
-
- #prediction = F.softmax(prediction,dim=-1)
-
- return prediction
\ No newline at end of file
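# Illustrative shape check (not part of the original file) for simple_disc above; all sizes are made up.
import torch

disc = simple_disc(act="relu", m_dim=10, vertexes=9, b_dim=5)
x = torch.randn(4, 9 * 10 + 9 * 9 * 5)  # flattened node + edge features per molecule
print(disc(x).shape)                     # torch.Size([4, 1])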
diff --git a/spaces/Hallucinate/demo/k_diffusion/layers.py b/spaces/Hallucinate/demo/k_diffusion/layers.py
deleted file mode 100644
index aa647bd3c1e0bef91e475f2376b4a79f6bb0823d..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/k_diffusion/layers.py
+++ /dev/null
@@ -1,256 +0,0 @@
-import math
-
-from einops import rearrange, repeat
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from . import sampling, utils
-
-# Karras et al. preconditioned denoiser
-
-class Denoiser(nn.Module):
- """A Karras et al. preconditioner for denoising diffusion models."""
-
- def __init__(self, inner_model, sigma_data=1.):
- super().__init__()
- self.inner_model = inner_model
- self.sigma_data = sigma_data
-
- def get_scalings(self, sigma):
- c_skip = self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2)
- c_out = sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
- c_in = 1 / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
- return c_skip, c_out, c_in
-
- def loss(self, input, noise, sigma, **kwargs):
- c_skip, c_out, c_in = [utils.append_dims(x, input.ndim) for x in self.get_scalings(sigma)]
- noised_input = input + noise * utils.append_dims(sigma, input.ndim)
- model_output = self.inner_model(noised_input * c_in, sigma, **kwargs)
- target = (input - c_skip * noised_input) / c_out
- return (model_output - target).pow(2).flatten(1).mean(1)
-
- def forward(self, input, sigma, **kwargs):
- c_skip, c_out, c_in = [utils.append_dims(x, input.ndim) for x in self.get_scalings(sigma)]
- return self.inner_model(input * c_in, sigma, **kwargs) * c_out + input * c_skip
-
-
-class DenoiserWithVariance(Denoiser):
- def loss(self, input, noise, sigma, **kwargs):
- c_skip, c_out, c_in = [utils.append_dims(x, input.ndim) for x in self.get_scalings(sigma)]
- noised_input = input + noise * utils.append_dims(sigma, input.ndim)
- model_output, logvar = self.inner_model(noised_input * c_in, sigma, return_variance=True, **kwargs)
- logvar = utils.append_dims(logvar, model_output.ndim)
- target = (input - c_skip * noised_input) / c_out
- losses = ((model_output - target) ** 2 / logvar.exp() + logvar) / 2
- return losses.flatten(1).mean(1)
-
-
-class SimpleLossDenoiser(Denoiser):
- """L_simple with the Karras et al. preconditioner."""
-
- def loss(self, input, noise, sigma, **kwargs):
- noised_input = input + noise * utils.append_dims(sigma, input.ndim)
- denoised = self(noised_input, sigma, **kwargs)
- eps = sampling.to_d(noised_input, sigma, denoised)
- return (eps - noise).pow(2).flatten(1).mean(1)
-
-
-# Residual blocks
-
-class ResidualBlock(nn.Module):
- def __init__(self, *main, skip=None):
- super().__init__()
- self.main = nn.Sequential(*main)
- self.skip = skip if skip else nn.Identity()
-
- def forward(self, input):
- return self.main(input) + self.skip(input)
-
-
-# Noise level (and other) conditioning
-
-class ConditionedModule(nn.Module):
- pass
-
-
-class UnconditionedModule(ConditionedModule):
- def __init__(self, module):
- super().__init__()
- self.module = module
-
- def forward(self, input, cond=None):
- return self.module(input)
-
-
-class ConditionedSequential(nn.Sequential, ConditionedModule):
- def forward(self, input, cond):
- for module in self:
- if isinstance(module, ConditionedModule):
- input = module(input, cond)
- else:
- input = module(input)
- return input
-
-
-class ConditionedResidualBlock(ConditionedModule):
- def __init__(self, *main, skip=None):
- super().__init__()
- self.main = ConditionedSequential(*main)
- self.skip = skip if skip else nn.Identity()
-
- def forward(self, input, cond):
- skip = self.skip(input, cond) if isinstance(self.skip, ConditionedModule) else self.skip(input)
- return self.main(input, cond) + skip
-
-
-class AdaGN(ConditionedModule):
- def __init__(self, feats_in, c_out, num_groups, eps=1e-5, cond_key='cond'):
- super().__init__()
- self.num_groups = num_groups
- self.eps = eps
- self.cond_key = cond_key
- self.mapper = nn.Linear(feats_in, c_out * 2)
-
- def forward(self, input, cond):
- weight, bias = self.mapper(cond[self.cond_key]).chunk(2, dim=-1)
- input = F.group_norm(input, self.num_groups, eps=self.eps)
- return torch.addcmul(utils.append_dims(bias, input.ndim), input, utils.append_dims(weight, input.ndim) + 1)
-
-
-# Attention
-
-class SelfAttention2d(ConditionedModule):
- def __init__(self, c_in, n_head, norm, dropout_rate=0.):
- super().__init__()
- assert c_in % n_head == 0
- self.norm_in = norm(c_in)
- self.n_head = n_head
- self.qkv_proj = nn.Conv2d(c_in, c_in * 3, 1)
- self.out_proj = nn.Conv2d(c_in, c_in, 1)
- self.dropout = nn.Dropout(dropout_rate)
-
- def forward(self, input, cond):
- n, c, h, w = input.shape
- qkv = self.qkv_proj(self.norm_in(input, cond))
- qkv = qkv.view([n, self.n_head * 3, c // self.n_head, h * w]).transpose(2, 3)
- q, k, v = qkv.chunk(3, dim=1)
- scale = k.shape[3] ** -0.25
- att = ((q * scale) @ (k.transpose(2, 3) * scale)).softmax(3)
- att = self.dropout(att)
- y = (att @ v).transpose(2, 3).contiguous().view([n, c, h, w])
- return input + self.out_proj(y)
-
-
-class CrossAttention2d(ConditionedModule):
- def __init__(self, c_dec, c_enc, n_head, norm_dec, dropout_rate=0.,
- cond_key='cross', cond_key_padding='cross_padding'):
- super().__init__()
- assert c_dec % n_head == 0
- self.cond_key = cond_key
- self.cond_key_padding = cond_key_padding
- self.norm_enc = nn.LayerNorm(c_enc)
- self.norm_dec = norm_dec(c_dec)
- self.n_head = n_head
- self.q_proj = nn.Conv2d(c_dec, c_dec, 1)
- self.kv_proj = nn.Linear(c_enc, c_dec * 2)
- self.out_proj = nn.Conv2d(c_dec, c_dec, 1)
- self.dropout = nn.Dropout(dropout_rate)
-
- def forward(self, input, cond):
- n, c, h, w = input.shape
- q = self.q_proj(self.norm_dec(input, cond))
- q = q.view([n, self.n_head, c // self.n_head, h * w]).transpose(2, 3)
- kv = self.kv_proj(self.norm_enc(cond[self.cond_key]))
- kv = kv.view([n, -1, self.n_head * 2, c // self.n_head]).transpose(1, 2)
- k, v = kv.chunk(2, dim=1)
- scale = k.shape[3] ** -0.25
- att = ((q * scale) @ (k.transpose(2, 3) * scale))
- att = att - (cond[self.cond_key_padding][:, None, None, :]) * 10000
- att = att.softmax(3)
- att = self.dropout(att)
- y = (att @ v).transpose(2, 3)
- y = y.contiguous().view([n, c, h, w])
- return input + self.out_proj(y)
-
-
-# Downsampling/upsampling
-
-_kernels = {
- 'linear':
- [1 / 8, 3 / 8, 3 / 8, 1 / 8],
- 'cubic':
- [-0.01171875, -0.03515625, 0.11328125, 0.43359375,
- 0.43359375, 0.11328125, -0.03515625, -0.01171875],
- 'lanczos3':
- [0.003689131001010537, 0.015056144446134567, -0.03399861603975296,
- -0.066637322306633, 0.13550527393817902, 0.44638532400131226,
- 0.44638532400131226, 0.13550527393817902, -0.066637322306633,
- -0.03399861603975296, 0.015056144446134567, 0.003689131001010537]
-}
-_kernels['bilinear'] = _kernels['linear']
-_kernels['bicubic'] = _kernels['cubic']
-
-
-class Downsample2d(nn.Module):
- def __init__(self, kernel='linear', pad_mode='reflect'):
- super().__init__()
- self.pad_mode = pad_mode
- kernel_1d = torch.tensor([_kernels[kernel]])
- self.pad = kernel_1d.shape[1] // 2 - 1
- self.register_buffer('kernel', kernel_1d.T @ kernel_1d)
-
- def forward(self, x):
- x = F.pad(x, (self.pad,) * 4, self.pad_mode)
- weight = x.new_zeros([x.shape[1], x.shape[1], self.kernel.shape[0], self.kernel.shape[1]])
- indices = torch.arange(x.shape[1], device=x.device)
- weight[indices, indices] = self.kernel.to(weight)
- return F.conv2d(x, weight, stride=2)
-
-
-class Upsample2d(nn.Module):
- def __init__(self, kernel='linear', pad_mode='reflect'):
- super().__init__()
- self.pad_mode = pad_mode
- kernel_1d = torch.tensor([_kernels[kernel]]) * 2
- self.pad = kernel_1d.shape[1] // 2 - 1
- self.register_buffer('kernel', kernel_1d.T @ kernel_1d)
-
- def forward(self, x):
- x = F.pad(x, ((self.pad + 1) // 2,) * 4, self.pad_mode)
- weight = x.new_zeros([x.shape[1], x.shape[1], self.kernel.shape[0], self.kernel.shape[1]])
- indices = torch.arange(x.shape[1], device=x.device)
- weight[indices, indices] = self.kernel.to(weight)
- return F.conv_transpose2d(x, weight, stride=2, padding=self.pad * 2 + 1)
-
-
-# Embeddings
-
-class FourierFeatures(nn.Module):
- def __init__(self, in_features, out_features, std=1.):
- super().__init__()
- assert out_features % 2 == 0
- self.register_buffer('weight', torch.randn([out_features // 2, in_features]) * std)
-
- def forward(self, input):
- f = 2 * math.pi * input @ self.weight.T
- return torch.cat([f.cos(), f.sin()], dim=-1)
-
-
-# U-Nets
-
-class UNet(ConditionedModule):
- def __init__(self, d_blocks, u_blocks, skip_stages=0):
- super().__init__()
- self.d_blocks = nn.ModuleList(d_blocks)
- self.u_blocks = nn.ModuleList(u_blocks)
- self.skip_stages = skip_stages
-
- def forward(self, input, cond):
- skips = []
- for block in self.d_blocks[self.skip_stages:]:
- input = block(input, cond)
- skips.append(input)
- for i, (block, skip) in enumerate(zip(self.u_blocks, reversed(skips))):
- input = block(input, cond, skip if i > 0 else None)
- return input
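# Illustrative sketch (not part of the original module): wrapping a toy inner model with the
# Karras preconditioner (Denoiser) above. A real inner model would be a noise-conditional U-Net.
import torch
from torch import nn

class ToyInner(nn.Module):
    def forward(self, x, sigma):
        return x  # stand-in: ignores the noise level

denoiser = Denoiser(ToyInner(), sigma_data=0.5)
x = torch.randn(2, 3, 8, 8)
sigma = torch.full((2,), 1.0)
print(denoiser(x, sigma).shape)  # torch.Size([2, 3, 8, 8])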
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fairseq_incremental_decoder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fairseq_incremental_decoder.py
deleted file mode 100644
index cc72a0f8f3da238a8ce846240e5008d91ce1bc1a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fairseq_incremental_decoder.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import Dict, Optional
-
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.models import FairseqDecoder
-from torch import Tensor
-
-
-logger = logging.getLogger(__name__)
-
-
-@with_incremental_state
-class FairseqIncrementalDecoder(FairseqDecoder):
- """Base class for incremental decoders.
-
- Incremental decoding is a special mode at inference time where the Model
- only receives a single timestep of input corresponding to the previous
- output token (for teacher forcing) and must produce the next output
- *incrementally*. Thus the model must cache any long-term state that is
- needed about the sequence, e.g., hidden states, convolutional states, etc.
-
- Compared to the standard :class:`FairseqDecoder` interface, the incremental
- decoder interface allows :func:`forward` functions to take an extra keyword
- argument (*incremental_state*) that can be used to cache state across
- time-steps.
-
- The :class:`FairseqIncrementalDecoder` interface also defines the
- :func:`reorder_incremental_state` method, which is used during beam search
- to select and reorder the incremental state based on the selection of beams.
-
-    To learn more about how incremental decoding works, refer to the fairseq
-    blog post on incremental decoding.
- """
-
- def __init__(self, dictionary):
- super().__init__(dictionary)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- """
- Args:
- prev_output_tokens (LongTensor): shifted output tokens of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (dict, optional): output from the encoder, used for
- encoder-side attention
- incremental_state (dict, optional): dictionary used for storing
- state during :ref:`Incremental decoding`
-
- Returns:
- tuple:
- - the decoder's output of shape `(batch, tgt_len, vocab)`
- - a dictionary with any model-specific outputs
- """
- raise NotImplementedError
-
- def extract_features(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- """
- Returns:
- tuple:
- - the decoder's features of shape `(batch, tgt_len, embed_dim)`
- - a dictionary with any model-specific outputs
- """
- raise NotImplementedError
-
- def reorder_incremental_state(
- self,
- incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
- new_order: Tensor,
- ):
- """Reorder incremental state.
-
- This will be called when the order of the input has changed from the
- previous time step. A typical use case is beam search, where the input
- order changes between time steps based on the selection of beams.
- """
- pass
-
- def reorder_incremental_state_scripting(
- self,
- incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
- new_order: Tensor,
- ):
- """Main entry point for reordering the incremental state.
-
- Due to limitations in TorchScript, we call this function in
- :class:`fairseq.sequence_generator.SequenceGenerator` instead of
- calling :func:`reorder_incremental_state` directly.
- """
- for module in self.modules():
- if hasattr(module, "reorder_incremental_state"):
- result = module.reorder_incremental_state(incremental_state, new_order)
- if result is not None:
- incremental_state = result
-
- def set_beam_size(self, beam_size):
- """Sets the beam size in the decoder and all children."""
- if getattr(self, "_beam_size", -1) != beam_size:
- seen = set()
-
- def apply_set_beam_size(module):
- if (
- module != self
- and hasattr(module, "set_beam_size")
- and module not in seen
- ):
- seen.add(module)
- module.set_beam_size(beam_size)
-
- self.apply(apply_set_beam_size)
- self._beam_size = beam_size
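# Conceptual sketch (not part of fairseq): the reordering contract described in the docstrings above.
# During beam search, every cached tensor is gathered along the batch*beam dimension with new_order.
import torch

incremental_state = {"attn_state": {"prev_key": torch.randn(4, 2, 5, 8)}}  # (bsz*beam, heads, len, head_dim)
new_order = torch.tensor([1, 1, 3, 2])  # beams kept at this step
for name, buf in incremental_state["attn_state"].items():
    incremental_state["attn_state"][name] = buf.index_select(0, new_order)
print(incremental_state["attn_state"]["prev_key"].shape)  # torch.Size([4, 2, 5, 8])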
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/symbols.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/symbols.py
deleted file mode 100644
index 3460be23cdf863cea1df9a57255c759175d37595..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/symbols.py
+++ /dev/null
@@ -1,23 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-"""
-Defines the set of symbols used in text input to the model.
-
-The default is a set of ASCII characters that works well for English or text that has been run through Unidecode. For other data, you can modify _letters and _punctuation below. See TRAINING_DATA.md for details.
-"""
-import utils
-import os
-
-hps = utils.get_hparams()
-
-# with open(os.path.abspath(hps.data.chars_file), encoding='utf-8') as file:
-# chars = file.read()
-
-# with open(os.path.abspath(hps.data.punc_file), encoding='utf-8') as file:
-# punc = file.read()
-
-_punctuation = hps.data.punc
-_letters = hps.data.chars
-
-# export all characters as list
-
-symbols = list(_punctuation) + list(_letters)
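# Illustrative example (not part of the original file): with hps.data.punc == ";:,.!? " and
# hps.data.chars == "abcd", the exported list is
# [';', ':', ',', '.', '!', '?', ' ', 'a', 'b', 'c', 'd'].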
diff --git a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/app.py b/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/app.py
deleted file mode 100644
index 9ff15ae6048b4a41f3c64923d49d04a376962da3..0000000000000000000000000000000000000000
--- a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/app.py
+++ /dev/null
@@ -1,609 +0,0 @@
-import cv2
-import requests
-
-from PIL import Image
-import PIL
-from PIL import ImageDraw
-
-from matplotlib import pyplot as plt
-import matplotlib
-from matplotlib import rcParams
-
-import os
-import tempfile
-from io import BytesIO
-from pathlib import Path
-import argparse
-import random
-import numpy as np
-import torch
-import matplotlib.cm as cm
-import pandas as pd
-
-
-from transformers import OwlViTProcessor, OwlViTForObjectDetection
-from transformers.image_utils import ImageFeatureExtractionMixin
-
-
-from SuperGluePretrainedNetwork.models.matching import Matching
-from SuperGluePretrainedNetwork.models.utils import (compute_pose_error, compute_epipolar_error,
- estimate_pose,
- error_colormap, AverageTimer, pose_auc, read_image,
- rotate_intrinsics, rotate_pose_inplane,
- scale_intrinsics)
-
-torch.set_grad_enabled(False)
-
-
-
-
-mixin = ImageFeatureExtractionMixin()
-model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
-processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
-
-
-# Use GPU if available
-if torch.cuda.is_available():
- device = torch.device("cuda")
-else:
- device = torch.device("cpu")
-
-
-import requests
-from PIL import Image, ImageDraw
-from io import BytesIO
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-import cv2
-import tempfile
-
-def detect_and_crop2(target_image_path,
- query_image_path,
- model,
- processor,
- mixin,
- device,
- threshold=0.5,
- nms_threshold=0.3,
- visualize=True):
-
- # Open target image
- image = Image.open(target_image_path).convert('RGB')
- image_size = model.config.vision_config.image_size + 5
- image = mixin.resize(image, image_size)
- target_sizes = torch.Tensor([image.size[::-1]])
-
- # Open query image
- query_image = Image.open(query_image_path).convert('RGB')
- image_size = model.config.vision_config.image_size + 5
- query_image = mixin.resize(query_image, image_size)
-
- # Process input and query image
- inputs = processor(images=image, query_images=query_image, return_tensors="pt").to(device)
-
- # Get predictions
- with torch.no_grad():
- outputs = model.image_guided_detection(**inputs)
-
- # Convert predictions to CPU
- img = cv2.cvtColor(np.array(image), cv2.COLOR_BGR2RGB)
- outputs.logits = outputs.logits.cpu()
- outputs.target_pred_boxes = outputs.target_pred_boxes.cpu()
-
- # Post process the predictions
- results = processor.post_process_image_guided_detection(outputs=outputs, threshold=threshold, nms_threshold=nms_threshold, target_sizes=target_sizes)
- boxes, scores = results[0]["boxes"], results[0]["scores"]
-
-    # If no boxes are found, keep the return type consistent: an empty list of crops plus the original image
-    if len(boxes) == 0:
-        print(f"No boxes detected for image: {target_image_path}")
-        if visualize:
-            fig, ax = plt.subplots(figsize=(6, 6))
-            ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
-            ax.set_title("Original Image")
-            ax.axis("off")
-            plt.show()
-        return [], image
-
- # Filter boxes
- img_with_all_boxes = img.copy()
- filtered_boxes = []
- filtered_scores = []
- img_width, img_height = img.shape[1], img.shape[0]
- for box, score in zip(boxes, scores):
- x1, y1, x2, y2 = [int(i) for i in box.tolist()]
- if x1 < 0 or y1 < 0 or x2 < 0 or y2 < 0:
- continue
- if (x2 - x1) / img_width >= 0.94 and (y2 - y1) / img_height >= 0.94:
- continue
- filtered_boxes.append([x1, y1, x2, y2])
- filtered_scores.append(score)
-
- # Draw boxes on original image
- draw = ImageDraw.Draw(image)
- for box in filtered_boxes:
- draw.rectangle(box, outline="red",width=3)
-
- cropped_images = []
- for box in filtered_boxes:
- x1, y1, x2, y2 = box
- cropped_img = img[y1:y2, x1:x2]
- if cropped_img.size != 0:
- cropped_images.append(cropped_img)
-
- if visualize:
- # Visualization
- if not filtered_boxes:
- fig, ax = plt.subplots(figsize=(6, 6))
- ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
- ax.set_title("Original Image")
- ax.axis("off")
- plt.show()
- else:
- fig, axs = plt.subplots(1, len(cropped_images) + 2, figsize=(15, 5))
- axs[0].imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
- axs[0].set_title("Original Image")
- axs[0].axis("off")
-
- for i, (box, score) in enumerate(zip(filtered_boxes, filtered_scores)):
- x1, y1, x2, y2 = box
- cropped_img = img[y1:y2, x1:x2]
- font = cv2.FONT_HERSHEY_SIMPLEX
- text = f"{score:.2f}"
- cv2.putText(cropped_img, text, (5, cropped_img.shape[0]-10), font, 0.5, (255,0,0), 1, cv2.LINE_AA)
- axs[i+2].imshow(cv2.cvtColor(cropped_img, cv2.COLOR_BGR2RGB))
- axs[i+2].set_title("Score: " + text)
- axs[i+2].axis("off")
- plt.tight_layout()
- plt.show()
-
- return cropped_images, image # return original image with boxes drawn
-
-def save_array_to_temp_image(arr):
- # Convert the array to an image
- img = Image.fromarray(arr)
-
- # Create a temporary file for the image
- temp_file = tempfile.NamedTemporaryFile(delete=False, suffix='.png', dir=tempfile.gettempdir())
- temp_file_name = temp_file.name
- temp_file.close() # We close it because we're not writing to it directly, PIL will handle the writing
-
- # Save the image to the temp file
- img.save(temp_file_name)
-
- return temp_file_name
-
-'''
-def process_resize(w: int, h: int, resize_dims: list) -> tuple:
- if len(resize_dims) == 1 and resize_dims[0] > -1:
- scale = resize_dims[0] / max(h, w)
- w_new, h_new = int(round(w * scale)), int(round(h * scale))
- return w_new, h_new
- return w, h
-'''
-
-def plot_image_pair(imgs, dpi=100, size=6, pad=.5):
- n = len(imgs)
- assert n == 2, 'number of images must be two'
- figsize = (size*n, size*3/4) if size is not None else None
- _, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi)
- for i in range(n):
- ax[i].imshow(imgs[i], cmap=plt.get_cmap('gray'), vmin=0, vmax=255)
- ax[i].get_yaxis().set_ticks([])
- ax[i].get_xaxis().set_ticks([])
- for spine in ax[i].spines.values(): # remove frame
- spine.set_visible(False)
- plt.tight_layout(pad=pad)
-
-def plot_keypoints(kpts0, kpts1, color='w', ps=2):
- ax = plt.gcf().axes
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-def plot_matches(kpts0, kpts1, color, lw=1.5, ps=4):
- fig = plt.gcf()
- ax = fig.axes
- fig.canvas.draw()
-
- transFigure = fig.transFigure.inverted()
- fkpts0 = transFigure.transform(ax[0].transData.transform(kpts0))
- fkpts1 = transFigure.transform(ax[1].transData.transform(kpts1))
-
- fig.lines = [matplotlib.lines.Line2D(
- (fkpts0[i, 0], fkpts1[i, 0]), (fkpts0[i, 1], fkpts1[i, 1]), zorder=1,
- transform=fig.transFigure, c=color[i], linewidth=lw)
- for i in range(len(kpts0))]
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
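The `color` argument consumed by `plot_matches` is a per-match RGBA array, typically derived from the SuperGlue confidence scores via a colormap. A minimal sketch with illustrative values:

```python
import numpy as np
import matplotlib.cm as cm

match_confidences = np.array([0.15, 0.55, 0.92])  # illustrative confidence scores
colors = cm.jet(match_confidences)                # shape (3, 4): one RGBA row per match
print(colors.shape)
```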
-def unified_matching_plot2(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
- color, text, path=None, show_keypoints=False,
- fast_viz=False, opencv_display=False,
- opencv_title='matches', small_text=[]):
-
- # Draw the image pair, then set the background color on that figure
- # (plot_image_pair opens its own figure, so one created beforehand would be discarded)
- plot_image_pair([image0, image1])
- plt.gcf().set_facecolor('#eeeeee')
-
- # Elegant points and lines for matches
- if show_keypoints:
- plot_keypoints(kpts0, kpts1, color='k', ps=4)
- plot_keypoints(kpts0, kpts1, color='w', ps=2)
- plot_matches(mkpts0, mkpts1, color, lw=1)
-
- fig = plt.gcf()
-
- # Add text
- fig.text(
- 0.01, 0.01, '\n'.join(small_text), transform=fig.axes[0].transAxes,
- fontsize=10, va='bottom', ha='left', color='#333333', fontweight='bold',
- bbox=dict(facecolor='white', alpha=0.7, edgecolor='none', boxstyle="round,pad=0.3"))
-
- fig.text(
- 0.01, 0.99, '\n'.join(text), transform=fig.axes[0].transAxes,
- fontsize=15, va='top', ha='left', color='#333333', fontweight='bold',
- bbox=dict(facecolor='white', alpha=0.7, edgecolor='none', boxstyle="round,pad=0.3"))
-
- # Optional: remove axis for a cleaner look
- plt.axis('off')
-
- # Convert the figure to an OpenCV image
- buf = BytesIO()
- plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0)
- buf.seek(0)
- img_arr = np.frombuffer(buf.getvalue(), dtype=np.uint8)
- buf.close()
- img = cv2.imdecode(img_arr, 1)
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # Close the figure to free memory
- plt.close(fig)
-
- return img
-
-def create_image_pyramid2(image_path, longest_side, scales=[0.25, 0.5, 1.0]):
- original_image = cv2.imread(image_path)
- oh, ow, _ = original_image.shape
-
- # Determine the scaling factor based on the longest side
- if oh > ow:
- output_height = longest_side
- output_width = int((ow / oh) * longest_side)
- else:
- output_width = longest_side
- output_height = int((oh / ow) * longest_side)
- output_size = (output_width, output_height)
-
- pyramid = []
-
- # Scale relative to a copy already resized to `output_size`, so every pyramid level
- # can be padded (downsampling) or cropped (upsampling) back to the same dimensions.
- base_image = cv2.resize(original_image, output_size)
-
- for scale in scales:
- # Resize based on the scale factor
- resized = cv2.resize(base_image, None, fx=scale, fy=scale)
- rh, rw, _ = resized.shape
-
- if scale < 1.0: # downsampling
- # Calculate the amount of padding required
- dy_top = max((output_size[1] - rh) // 2, 0)
- dy_bottom = output_size[1] - rh - dy_top
- dx_left = max((output_size[0] - rw) // 2, 0)
- dx_right = output_size[0] - rw - dx_left
-
- # Create padded image
- padded = cv2.copyMakeBorder(resized, dy_top, dy_bottom, dx_left, dx_right, cv2.BORDER_CONSTANT, value=[255, 255, 255])
- pyramid.append(padded)
- elif scale > 1.0: # upsampling
- # We need to crop the image to fit the desired output size
- dy = (rh - output_size[1]) // 2
- dx = (rw - output_size[0]) // 2
- cropped = resized[dy:dy+output_size[1], dx:dx+output_size[0]]
- pyramid.append(cropped)
- else: # scale == 1.0
- pyramid.append(resized)
-
- return pyramid
-
-# Example usage
-# pyramid = create_image_pyramid2('path_to_image.jpg', 800)
-def image_matching(query_img, target_img, image_dims=[640*2], scale_factors=[0.33,0.66,1], visualize=True, k_thresh=None, m_thresh=None, write=False):
-
- image1, inp1, scales1 = read_image(target_img, device, image_dims, 0, True)
- query_pyramid = create_image_pyramid2(query_img, image_dims[0], scale_factors)
-
- all_valid = []
- all_inliers = []
- all_return_imgs = []
- max_matches_img = None
- max_matches = -1
-
- for idx, query_level in enumerate(query_pyramid):
- temp_file_path = "temp_level_{}.png".format(idx)
- cv2.imwrite(temp_file_path, query_level)
-
- image0, inp0, scales0 = read_image(temp_file_path, device, image_dims, 0, True)
-
- if image0 is None or image1 is None:
- print('Problem reading image pair: {} {}'.format(query_img, target_img))
- else:
- # Matching
- pred = matching({'image0': inp0, 'image1': inp1})
- pred = {k: v[0] for k, v in pred.items()}
- kpts0, kpts1 = pred['keypoints0'], pred['keypoints1']
- matches, conf = pred['matches0'], pred['matching_scores0']
-
- valid = matches > -1
- mkpts0 = kpts0[valid]
- mkpts1 = kpts1[matches[valid]]
- mconf = conf[valid]
- #color = cm.jet(mconf)[:len(mkpts0)] # Ensure consistent size
- color = cm.jet(mconf.detach().cpu().numpy())[:len(mkpts0)]  # move scores to CPU before colormapping
-
- all_valid.append(np.sum( valid.tolist() ))
-
- # Convert torch tensors to numpy arrays.
- mkpts0_np = mkpts0.cpu().numpy()
- mkpts1_np = mkpts1.cpu().numpy()
-
- try:
-     # Use RANSAC to estimate the homography between the matched keypoints.
-     H, inliers = cv2.findHomography(mkpts0_np, mkpts1_np, cv2.RANSAC, 5.0)
- except cv2.error:
-     H, inliers = None, None
-     print("Not enough points for homography")
- # Count the RANSAC inliers (the mask has shape (N, 1) with 0/1 entries).
- num_inliers = int(np.sum(inliers)) if inliers is not None else 0
-
- all_inliers.append(num_inliers)
-
- # Visualization
- text = [
- 'Engagify Image Matching',
- 'Keypoints: {}:{}'.format(len(kpts0), len(kpts1)),
- 'Scaling Factor: {}'.format( scale_factors[idx]),
- 'Matches: {}'.format(len(mkpts0)),
- 'Inliers: {}'.format(num_inliers),
- ]
-
-
- k_thresh = matching.superpoint.config['keypoint_threshold']
- m_thresh = matching.superglue.config['match_threshold']
-
- small_text = [
- 'Keypoint Threshold: {:.4f}'.format(k_thresh),
- 'Match Threshold: {:.2f}'.format(m_thresh),
- ]
-
- visualized_img = None # To store the visualized image
-
- if visualize:
- ret_img = unified_matching_plot2(
- image0, image1, kpts0, kpts1, mkpts0, mkpts1, color, text, 'Test_Level_{}'.format(idx), True, False, True, 'Matches_Level_{}'.format(idx), small_text)
- all_return_imgs.append(ret_img)
- # Storing image with most matches
- #if len(mkpts0) > max_matches:
- # max_matches = len(mkpts0)
- # max_matches_img = 'Matches_Level_{}'.format(idx)
-
- avg_valid = np.sum(all_valid) / len(scale_factors)
- avg_inliers = np.sum(all_inliers) / len(scale_factors)
-
-# Convert the image with the most matches to base64 encoded format
-# with open(max_matches_img, "rb") as image_file:
-# encoded_string = base64.b64encode(image_file.read()).decode()
-
- return {'valid':all_valid, 'inliers':all_inliers, 'visualized_image':all_return_imgs} #, encoded_string
-
-# Usage:
-#results = image_matching('Samples/Poster/poster_event_small_22.jpg', 'Samples/Images/16.jpeg', visualize=True)
-#print (results)
-
-def image_matching_no_pyramid(query_img, target_img, visualize=True, write=False):
-
- image1, inp1, scales1 = read_image(target_img, device, [640*2], 0, True)
- image0, inp0, scales0 = read_image(query_img, device, [640*2], 0, True)
-
- if image0 is None or image1 is None:
- print('Problem reading image pair: {} {}'.format(query_img, target_img))
- return None
-
- # Matching
- pred = matching({'image0': inp0, 'image1': inp1})
- pred = {k: v[0] for k, v in pred.items()}
- kpts0, kpts1 = pred['keypoints0'], pred['keypoints1']
- matches, conf = pred['matches0'], pred['matching_scores0']
-
- valid = matches > -1
- mkpts0 = kpts0[valid]
- mkpts1 = kpts1[matches[valid]]
- mconf = conf[valid]
- #color = cm.jet(mconf)[:len(mkpts0)] # Ensure consistent size
- color = cm.jet(mconf.detach().cpu().numpy())[:len(mkpts0)]  # move scores to CPU before colormapping
-
- valid_count = np.sum(valid.tolist())
-
- # Convert torch tensors to numpy arrays.
- mkpts0_np = mkpts0.cpu().numpy()
- mkpts1_np = mkpts1.cpu().numpy()
-
- try:
-     # Use RANSAC to estimate the homography between the matched keypoints.
-     H, inliers = cv2.findHomography(mkpts0_np, mkpts1_np, cv2.RANSAC, 5.0)
- except cv2.error:
-     H, inliers = None, None
-     print("Not enough points for homography")
-
- # Count the RANSAC inliers (the mask has shape (N, 1) with 0/1 entries).
- num_inliers = int(np.sum(inliers)) if inliers is not None else 0
-
- # Visualization
- text = [
- 'Engagify Image Matching',
- 'Keypoints: {}:{}'.format(len(kpts0), len(kpts1)),
- 'Matches: {}'.format(len(mkpts0)),
- 'Inliers: {}'.format(num_inliers),
- ]
-
- k_thresh = matching.superpoint.config['keypoint_threshold']
- m_thresh = matching.superglue.config['match_threshold']
-
- small_text = [
- 'Keypoint Threshold: {:.4f}'.format(k_thresh),
- 'Match Threshold: {:.2f}'.format(m_thresh),
- ]
-
- visualized_img = None # To store the visualized image
-
- if visualize:
- visualized_img = unified_matching_plot2(
- image0, image1, kpts0, kpts1, mkpts0, mkpts1, color, text, 'Test_Match', True, False, True, 'Matches', small_text)
-
- return {
- 'valid': [valid_count],
- 'inliers': [num_inliers],
- 'visualized_image': [visualized_img]
- }
-
-# Usage:
-#results = image_matching_no_pyramid('Samples/Poster/poster_event_small_22.jpg', 'Samples/Images/16.jpeg', visualize=True)
-
-# Load the SuperPoint and SuperGlue models.
-device = 'cuda' if torch.cuda.is_available() and not opt.force_cpu else 'cpu'
-print('Running inference on device \"{}\"'.format(device))
-config = {
- 'superpoint': {
- 'nms_radius': 4,
- 'keypoint_threshold': 0.005,
- 'max_keypoints': 1024
- },
- 'superglue': {
- 'weights': 'outdoor',
- 'sinkhorn_iterations': 20,
- 'match_threshold': 0.2,
- }
-}
-matching = Matching(config).eval().to(device)
-
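The SuperPoint/SuperGlue settings above are the ones the app ships with. The same `Matching` wrapper accepts other values; a hedged sketch of an alternative configuration (the numbers below are illustrative only, not tuned for or used by this Space):

```python
# Illustrative alternative: indoor SuperGlue weights, more keypoints, stricter matching.
indoor_config = {
    'superpoint': {
        'nms_radius': 3,
        'keypoint_threshold': 0.003,
        'max_keypoints': 2048,
    },
    'superglue': {
        'weights': 'indoor',          # pretrained indoor weights shipped with SuperGlue
        'sinkhorn_iterations': 50,
        'match_threshold': 0.3,
    },
}
# Instantiating would load a second set of weights, so it is left commented out here:
# indoor_matching = Matching(indoor_config).eval().to(device)
```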
-from PIL import Image
-
-def stitch_images(images):
- """Stitches a list of images vertically."""
- if not images:
- # Return a placeholder image if the images list is empty
- return Image.new('RGB', (100, 100), color='gray')
-
- max_width = max([img.width for img in images])
- total_height = sum(img.height for img in images)
-
- composite = Image.new('RGB', (max_width, total_height))
-
- y_offset = 0
- for img in images:
- composite.paste(img, (0, y_offset))
- y_offset += img.height
-
- return composite
-
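A quick usage sketch for `stitch_images`; the inputs here are synthetic placeholder strips rather than the match visualisations produced above:

```python
import numpy as np
from PIL import Image

strip_a = Image.fromarray(np.zeros((120, 200, 3), dtype=np.uint8))     # 200x120 black strip
strip_b = Image.fromarray(np.full((80, 150, 3), 255, dtype=np.uint8))  # 150x80 white strip

composite = stitch_images([strip_a, strip_b])
print(composite.size)  # (200, 200): widest input width, summed heights
```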
-def check_object_in_image3(query_image, target_image, threshold=50, scale_factor=[0.33,0.66,1]):
- decision_on = []
- # Convert cv2 images to PIL images and add them to a list
- images_to_return = []
-
- cropped_images, bbox_image = detect_and_crop2(target_image_path=target_image,
- query_image_path=query_image,
- model=model,
- processor=processor,
- mixin=mixin,
- device=device,
- visualize=False)
-
- temp_files = [save_array_to_temp_image(i) for i in cropped_images]
- crop_results = [image_matching_no_pyramid(query_image, i, visualize=True) for i in temp_files]
-
- cropped_visuals = []
- cropped_inliers = []
- for result in crop_results:
- # Add visualized images to the temporary list
- for img in result['visualized_image']:
- cropped_visuals.append(Image.fromarray(img))
- for inliers_ in result['inliers']:
- cropped_inliers.append(inliers_)
- # Stitch the cropped visuals into one image
- images_to_return.append(stitch_images(cropped_visuals))
-
- pyramid_results = image_matching(query_image, target_image, visualize=True, scale_factors=scale_factor)
-
- pyramid_visuals = [Image.fromarray(img) for img in pyramid_results['visualized_image']]
- # Stitch the pyramid visuals into one image
- images_to_return.append(stitch_images(pyramid_visuals))
-
- # Check the inlier counts from both strategies; the object counts as present if either passes the threshold
- print(cropped_inliers)
- crops_pass = any(value > threshold for value in cropped_inliers)
- if crops_pass:
-     decision_on.append('Object Detection')
- pyramid_pass = any(value > threshold for value in pyramid_results["inliers"])
- if pyramid_pass:
-     decision_on.append('Pyramid Max Point')
- is_present = crops_pass or pyramid_pass
- if not is_present:
-     decision_on.append("Neither, It Failed All Tests")
-
- # Return results as a dictionary
- return {
- 'is_present': is_present,
- 'images': images_to_return,
- 'scale factors': scale_factor,
- 'object detection inliers': cropped_inliers,
- 'pyramid_inliers' : pyramid_results["inliers"],
- 'bbox_image':bbox_image,
- 'decision_on':decision_on,
-
- }
-
-# Example call:
-#result = check_object_in_image3('Samples/Poster/poster_event_small.jpg', 'Samples/Images/True_Image_3423234.jpeg', 50)
-# Accessing the results:
-#print(result['is_present']) # prints True/False
-#print(result['images']) # is a list of 2 stitched images.
-
-
-import gradio as gr
-import cv2
-from PIL import Image
-
-def gradio_interface(query_image_path, target_image_path, threshold):
- result = check_object_in_image3(query_image_path, target_image_path, threshold)
- # Depending on how many images are in the list, you can return them like this:
- return result['bbox_image'], result['images'][0], result['object detection inliers'], result['scale factors'], result['pyramid_inliers'], result['images'][1], str(result['is_present']), result['decision_on']
-
-
-# Define the Gradio interface
-interface = gr.Interface(
- fn=gradio_interface, # function to be called on button press
- inputs=[
- gr.components.Image(label="Query Image (Drop the Image you want to detect here)", type="filepath"),
- gr.components.Image(label="Target Image (Drop the Image youd like to search here)", type="filepath"),
- gr.components.Slider(minimum=0, maximum=200, value=50, step=5, label="Enter the Inlier Threshold"),
- ],
- outputs=[
- gr.components.Image(label='Filtered Regions of Interest (Candidates)'),
- gr.components.Image(label="Cropped Visuals from Image Guided Object Detection "),
- gr.components.Text(label='Inliers detected for Image Guided Object Detection '),
- gr.components.Text(label='Scale Factors Used for Pyramid (Results below, In Order)'),
- gr.components.Text(label='Inliers detected for Pyramid Search (In Order)'),
- gr.components.Image(label="Pyramid Visuals"),
- gr.components.Textbox(label="Object Present?"),
- gr.components.Textbox(label="Decision Taken Based on?"),
- ],
- theme=gr.themes.Monochrome(),
- title="'Image Specific Image Recognition + Matching Tool",
- description="[Author: Ibrahim Hasani] \n "
- " This tool leverages Transformer, Deep Learning, and Traditional Computer Vision techniques to determine if a specified object "
- "(given by the query image) is present within a target image. \n"
- "1. Image-Guided Object Detection where we detect potential regions of interest. (Owl-Vit-Google). \n"
- "2. Pyramid Search that looks at various scales of the target image. Results provide "
- "visual representations of the matching process and a final verdict on the object's presence.\n"
- "3. SuperPoint (MagicLeap) + SuperGlue + Homography to extract inliers, which are thresholded for decision making."
-)
-
-interface.launch()
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
deleted file mode 100644
index a30254604311a488a1d4959f941051890ed32b2e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-from pathlib import Path
-from collections import defaultdict
-from typing import List, Dict, Tuple
-
-import pandas as pd
-import numpy as np
-import torchaudio
-from tqdm import tqdm
-
-from examples.speech_to_text.data_utils import load_df_from_tsv, save_df_to_tsv
-
-
-log = logging.getLogger(__name__)
-
-SPLITS = ["train", "dev", "test"]
-
-
-def get_top_n(
- root: Path, n_speakers: int = 10, min_n_tokens: int = 5
-) -> pd.DataFrame:
- df = load_df_from_tsv(root / "validated.tsv")
- df["n_tokens"] = [len(s.split()) for s in df["sentence"]]
- df = df[df["n_tokens"] >= min_n_tokens]
- df["n_frames"] = [
- torchaudio.info((root / "clips" / p).as_posix()).num_frames
- for p in tqdm(df["path"])
- ]
- df["id"] = [Path(p).stem for p in df["path"]]
- total_duration_ms = df.groupby("client_id")["n_frames"].agg(["sum"])
- total_duration_ms = total_duration_ms.sort_values("sum", ascending=False)
-
- top_n_total_duration_ms = total_duration_ms.head(n_speakers)
- top_n_client_ids = set(top_n_total_duration_ms.index.tolist())
- df_top_n = df[df["client_id"].isin(top_n_client_ids)]
- return df_top_n
-
-
-def get_splits(
- df, train_split_ratio=0.99, speaker_in_all_splits=False, rand_seed=0
-) -> Tuple[Dict[str, str], List[str]]:
- np.random.seed(rand_seed)
- dev_split_ratio = (1. - train_split_ratio) / 3
- grouped = list(df.groupby("client_id"))
- id_to_split = {}
- for _, cur_df in tqdm(grouped):
- cur_n_examples = len(cur_df)
- if speaker_in_all_splits and cur_n_examples < 3:
- continue
- cur_n_train = int(cur_n_examples * train_split_ratio)
- cur_n_dev = int(cur_n_examples * dev_split_ratio)
- cur_n_test = cur_n_examples - cur_n_dev - cur_n_train
- if speaker_in_all_splits and cur_n_dev * cur_n_test == 0:
- cur_n_dev, cur_n_test = 1, 1
- cur_n_train = cur_n_examples - cur_n_dev - cur_n_test
- cur_indices = cur_df.index.tolist()
- cur_shuffled_indices = np.random.permutation(cur_n_examples)
- cur_shuffled_indices = [cur_indices[i] for i in cur_shuffled_indices]
- cur_indices_by_split = {
- "train": cur_shuffled_indices[:cur_n_train],
- "dev": cur_shuffled_indices[cur_n_train: cur_n_train + cur_n_dev],
- "test": cur_shuffled_indices[cur_n_train + cur_n_dev:]
- }
- for split in SPLITS:
- for i in cur_indices_by_split[split]:
- id_ = df["id"].loc[i]
- id_to_split[id_] = split
- return id_to_split, sorted(df["client_id"].unique())
-
-
-def convert_to_wav(root: Path, filenames: List[str], target_sr=16_000):
- out_root = root / "wav"
- out_root.mkdir(exist_ok=True, parents=True)
- print("Converting to WAV...")
- for n in tqdm(filenames):
- in_path = (root / "clips" / n).as_posix()
- waveform, sr = torchaudio.load(in_path)
- converted, converted_sr = torchaudio.sox_effects.apply_effects_tensor(
- waveform, sr, [["rate", str(target_sr)], ["channels", "1"]]
- )
- out_path = (out_root / Path(n).with_suffix(".wav").name).as_posix()
- torchaudio.save(out_path, converted, converted_sr, encoding="PCM_S",
- bits_per_sample=16)
-
-
-def process(args):
- data_root = Path(args.data_root).absolute() / args.lang
-
- # Generate TSV manifest
- print("Generating manifest...")
-
- df_top_n = get_top_n(data_root)
- id_to_split, speakers = get_splits(df_top_n)
-
- if args.convert_to_wav:
- convert_to_wav(data_root, df_top_n["path"].tolist())
-
- manifest_by_split = {split: defaultdict(list) for split in SPLITS}
- for sample in tqdm(df_top_n.to_dict(orient="index").values()):
- sample_id = sample["id"]
- split = id_to_split[sample_id]
- manifest_by_split[split]["id"].append(sample_id)
- if args.convert_to_wav:
- audio_path = data_root / "wav" / f"{sample_id}.wav"
- else:
- audio_path = data_root / "clips" / f"{sample_id}.mp3"
- manifest_by_split[split]["audio"].append(audio_path.as_posix())
- manifest_by_split[split]["n_frames"].append(sample["n_frames"])
- manifest_by_split[split]["tgt_text"].append(sample["sentence"])
- manifest_by_split[split]["speaker"].append(sample["client_id"])
- manifest_by_split[split]["src_text"].append(sample["sentence"])
-
- output_root = Path(args.output_manifest_root).absolute()
- output_root.mkdir(parents=True, exist_ok=True)
- for split in SPLITS:
- save_df_to_tsv(
- pd.DataFrame.from_dict(manifest_by_split[split]),
- output_root / f"{split}.audio.tsv"
- )
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-root", "-d", required=True, type=str)
- parser.add_argument("--output-manifest-root", "-m", required=True, type=str)
- parser.add_argument("--lang", "-l", required=True, type=str)
- parser.add_argument("--convert-to-wav", action="store_true")
- args = parser.parse_args()
-
- process(args)
-
-
-if __name__ == "__main__":
- main()
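For reference, a hedged sketch of driving the manifest script programmatically instead of via the command line; the paths and language code below are placeholders:

```python
# Equivalent to:
#   python get_common_voice_audio_manifest.py -d /data/common_voice -m /data/manifests -l en --convert-to-wav
from argparse import Namespace

args = Namespace(
    data_root="/data/common_voice",      # placeholder: root containing <lang>/validated.tsv and <lang>/clips/
    output_manifest_root="/data/manifests",
    lang="en",
    convert_to_wav=True,
)
process(args)  # writes train/dev/test *.audio.tsv manifests under the output root
```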
diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_sentences_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_sentences_dataset.py
deleted file mode 100644
index 625a29370e90f9d1d7274024afb902ed83a22325..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_sentences_dataset.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import FairseqDataset
-
-
-class ConcatSentencesDataset(FairseqDataset):
- def __init__(self, *datasets):
- super().__init__()
- self.datasets = datasets
- assert all(
- len(ds) == len(datasets[0]) for ds in datasets
- ), "datasets must have the same length"
-
- def __getitem__(self, index):
- return torch.cat([ds[index] for ds in self.datasets])
-
- def __len__(self):
- return len(self.datasets[0])
-
- def collater(self, samples):
- return self.datasets[0].collater(samples)
-
- @property
- def sizes(self):
- return sum(ds.sizes for ds in self.datasets)
-
- def num_tokens(self, index):
- return sum(ds.num_tokens(index) for ds in self.datasets)
-
- def size(self, index):
- return sum(ds.size(index) for ds in self.datasets)
-
- def ordered_indices(self):
- return self.datasets[0].ordered_indices()
-
- @property
- def supports_prefetch(self):
- return any(getattr(ds, "supports_prefetch", False) for ds in self.datasets)
-
- def prefetch(self, indices):
- for ds in self.datasets:
- if getattr(ds, "supports_prefetch", False):
- ds.prefetch(indices)
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- for ds in self.datasets:
- if hasattr(ds, "set_epoch"):
- ds.set_epoch(epoch)
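To make the per-index concatenation concrete, a minimal sketch with a toy stand-in dataset; the `ToyDataset` class below is hypothetical and exists only for illustration:

```python
import torch
from fairseq.data import ConcatSentencesDataset, FairseqDataset


class ToyDataset(FairseqDataset):
    """Hypothetical stand-in that returns pre-tokenised tensors."""

    def __init__(self, tensors):
        self.tensors = tensors
        self.sizes = torch.tensor([len(t) for t in tensors])

    def __getitem__(self, index):
        return self.tensors[index]

    def __len__(self):
        return len(self.tensors)


first = ToyDataset([torch.tensor([1, 2]), torch.tensor([3])])
second = ToyDataset([torch.tensor([7, 8, 9]), torch.tensor([4, 5])])

concat = ConcatSentencesDataset(first, second)
print(concat[0])     # tensor([1, 2, 7, 8, 9]) -- the two sentences joined along dim 0
print(concat.sizes)  # tensor([5, 3]) -- element-wise sum of the per-dataset sizes
```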
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/tasks/audio_pretraining.py b/spaces/ICML2022/OFA/fairseq/fairseq/tasks/audio_pretraining.py
deleted file mode 100644
index cc310088db8852e80cd2e65d51f06f8f7cb592e3..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/tasks/audio_pretraining.py
+++ /dev/null
@@ -1,206 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-import os
-import sys
-
-from argparse import Namespace
-from dataclasses import dataclass, field
-from typing import Optional
-from omegaconf import MISSING, II, OmegaConf
-
-from fairseq.data import BinarizedAudioDataset, FileAudioDataset
-from fairseq.dataclass import FairseqDataclass, ChoiceEnum
-from fairseq.data.text_compressor import TextCompressionLevel
-
-from . import FairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class InferredW2vConfig:
- # The following are needed to precompute mask and mask channel indices
- # before model's forward.
- mask_length: Optional[int] = II("model.mask_length")
- mask_prob: Optional[float] = II("model.mask_prob")
- mask_selection: Optional[str] = II("model.mask_selection")
- mask_other: Optional[float] = II("model.mask_other")
- no_mask_overlap: Optional[bool] = II("model.no_mask_overlap")
- mask_min_space: Optional[int] = II("model.mask_min_space")
- mask_channel_length: Optional[int] = II("model.mask_channel_length")
- mask_channel_prob: Optional[float] = II("model.mask_channel_prob")
- mask_channel_selection: Optional[str] = II("model.mask_channel_selection")
- mask_channel_other: Optional[float] = II("model.mask_channel_other")
- no_mask_channel_overlap: Optional[bool] = II("model.no_mask_channel_overlap")
- mask_channel_min_space: Optional[int] = II("model.mask_channel_min_space")
-
- conv_feature_layers: Optional[str] = II("model.conv_feature_layers")
- encoder_embed_dim: Optional[int] = II("model.encoder_embed_dim")
-
-
-@dataclass
-class AudioPretrainingConfig(FairseqDataclass):
- data: str = field(default=MISSING, metadata={"help": "path to data directory"})
- labels: Optional[str] = field(
- default=None,
- metadata={
- "help": "extension of the label file to load, used for fine-tuning"},
- )
- binarized_dataset: bool = field(
- default=False,
- metadata={
- "help": "if true, loads binarized dataset (useful for very large datasets). "
- "See examples/wav2vec/scripts/binarize_manifest.sh"
- },
- )
- sample_rate: int = field(
- default=16_000,
- metadata={
- "help": "target sample rate. audio files will be up/down sampled to this rate"
- },
- )
- normalize: bool = field(
- default=False,
- metadata={"help": "if set, normalizes input to have 0 mean and unit variance"},
- )
- enable_padding: bool = field(
- default=False, metadata={"help": "pad shorter samples instead of cropping"}
- )
- max_sample_size: Optional[int] = field(
- default=None, metadata={"help": "max sample size to crop to for batching"}
- )
- min_sample_size: Optional[int] = field(
- default=None, metadata={"help": "min sample size to skip small examples"}
- )
- num_batch_buckets: int = field(
- default=0,
- metadata={"help": "number of buckets"},
- )
- precompute_mask_indices: bool = field(
- default=False,
- metadata={
- "help": "flag to compute mask indices in data preparation.",
- },
- )
-
- inferred_w2v_config: Optional[InferredW2vConfig] = field(
- default=None,
- metadata={
- "help": "wav2vec 2.0 masking arguments used to pre-compute masks (required for TPU)",
- },
- )
-
- tpu: bool = II("common.tpu")
- text_compression_level: ChoiceEnum([x.name for x in TextCompressionLevel]) = field(
- default="none",
- metadata={
- "help": "compression level for texts (e.g. audio filenames, "
- "target texts): none/low/high (default: none). "
- }
- )
-
-
-@register_task("audio_pretraining", dataclass=AudioPretrainingConfig)
-class AudioPretrainingTask(FairseqTask):
- """ """
-
- cfg: AudioPretrainingConfig
-
- @classmethod
- def setup_task(cls, cfg: AudioPretrainingConfig, **kwargs):
- """Setup the task (e.g., load dictionaries).
-
- Args:
- cfg (AudioPretrainingConfig): configuration of this task
- """
-
- return cls(cfg)
-
- def _get_mask_precompute_kwargs(self, cfg):
- if self.cfg.precompute_mask_indices or self.cfg.tpu:
- assert (
- cfg.inferred_w2v_config is not None
- ), "inferred_w2v_config must be set"
- return OmegaConf.to_container(
- cfg.inferred_w2v_config, resolve=True, enum_to_str=True
- )
- else:
- return {}
-
- def load_dataset(self, split: str, task_cfg: FairseqDataclass = None, **kwargs):
- data_path = self.cfg.data
- task_cfg = task_cfg or self.cfg
-
- # upgrade old task
- if isinstance(task_cfg, Namespace):
- if not hasattr(task_cfg, "autoregressive"):
- task_cfg.autoregressive = not task_cfg.criterion == "ctc"
-
- text_compression_level = getattr(
- TextCompressionLevel, str(self.cfg.text_compression_level)
- )
- if getattr(task_cfg, "binarized_dataset", False):
- self.datasets[split] = BinarizedAudioDataset(
- data_path,
- split=split,
- sample_rate=task_cfg.get("sample_rate", self.cfg.sample_rate),
- max_sample_size=self.cfg.max_sample_size,
- min_sample_size=self.cfg.min_sample_size,
- pad=task_cfg.labels is not None or task_cfg.enable_padding,
- normalize=task_cfg.normalize,
- num_buckets=self.cfg.num_batch_buckets or int(self.cfg.tpu),
- compute_mask_indices=(self.cfg.precompute_mask_indices or self.cfg.tpu),
- **self._get_mask_precompute_kwargs(task_cfg),
- )
- else:
- manifest_path = os.path.join(data_path, "{}.tsv".format(split))
-
- self.datasets[split] = FileAudioDataset(
- manifest_path=manifest_path,
- sample_rate=task_cfg.get("sample_rate", self.cfg.sample_rate),
- max_sample_size=self.cfg.max_sample_size,
- min_sample_size=self.cfg.min_sample_size,
- pad=task_cfg.labels is not None or task_cfg.enable_padding,
- normalize=task_cfg.normalize,
- num_buckets=self.cfg.num_batch_buckets or int(self.cfg.tpu),
- compute_mask_indices=(self.cfg.precompute_mask_indices or self.cfg.tpu),
- text_compression_level=text_compression_level,
- **self._get_mask_precompute_kwargs(task_cfg),
- )
-
- if self.cfg.tpu and task_cfg.inferred_w2v_config.mask_channel_prob == 0.0:
- logger.info(
- "Pretraining on TPUs may suffer convergence "
- "issues when training with `mask_channel_prob` value of "
- "0. You may want to set this to a low value close to 0."
- )
-
- @property
- def source_dictionary(self):
- return None
-
- @property
- def target_dictionary(self):
- return None
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return sys.maxsize, sys.maxsize
-
- def build_model(self, model_cfg: FairseqDataclass):
- model = super().build_model(model_cfg)
-
- actualized_cfg = getattr(model, "cfg", None)
- if actualized_cfg is not None:
- # if "w2v_args" in actualized_cfg:
- if hasattr(actualized_cfg, "w2v_args"):
- model_cfg.w2v_args = actualized_cfg.w2v_args
-
- return model
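A hedged sketch of wiring this task up outside of Hydra, using the config fields defined above; the data path is a placeholder and, in normal use, the config is assembled by `fairseq-hydra-train`:

```python
# Assumes a manifest directory containing train.tsv (as produced by wav2vec's manifest script).
from omegaconf import OmegaConf
from fairseq.tasks.audio_pretraining import (
    AudioPretrainingConfig,
    AudioPretrainingTask,
)

cfg = OmegaConf.structured(
    AudioPretrainingConfig(
        data="/data/librispeech/manifests",  # placeholder path
        sample_rate=16_000,
        normalize=True,
        max_sample_size=250_000,
        min_sample_size=32_000,
        tpu=False,  # set explicitly; the default interpolates ${common.tpu} under Hydra
    )
)

task = AudioPretrainingTask.setup_task(cfg)
task.load_dataset("train")  # builds a FileAudioDataset from <data>/train.tsv
print(len(task.datasets["train"]))
```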
diff --git a/spaces/Illumotion/Koboldcpp/otherarch/mpt_v3.cpp b/spaces/Illumotion/Koboldcpp/otherarch/mpt_v3.cpp
deleted file mode 100644
index 57ed90888fb5309b861c5ebe917c6bd1dfc667c3..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/otherarch/mpt_v3.cpp
+++ /dev/null
@@ -1,581 +0,0 @@
-#include "ggml.h"
-#include "otherarch.h"
-
-#include "utils.h"
-
-#include <cassert>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <fstream>
-#include <map>