diff --git a/spaces/0xSynapse/Image_captioner/README.md b/spaces/0xSynapse/Image_captioner/README.md
deleted file mode 100644
index 81b4424c7903de18e57ce1b99332aba6d79fcf79..0000000000000000000000000000000000000000
--- a/spaces/0xSynapse/Image_captioner/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Image Captioner
-emoji: ⚡
-colorFrom: indigo
-colorTo: green
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md
deleted file mode 100644
index e4ca48ad9fb0789951d65df7c439bcf6b91a2fb0..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download Bartender 2022 Full Crack for Free
-
Bartender 2022 is label design software that allows you to design and print labels, barcodes, RFID tags, cards, and more. It is widely used by businesses and industries to create professional and compliant labels for various purposes. However, Bartender 2022 is not cheap, and you might be tempted to look for a cracked version online.
But before you do that, you should know that downloading Bartender 2022 full crack is not only illegal, but also risky. You could face legal consequences, damage your computer, or expose your data to hackers and malware. In this article, we will explain why you should avoid downloading Bartender 2022 full crack and suggest some better alternatives.
-
Why You Should Not Download Bartender 2022 Full Crack
-
Downloading Bartender 2022 full crack is a bad idea for several reasons:
-
-
It is **illegal**. Bartender 2022 is a copyrighted software that belongs to Seagull Scientific, Inc. By downloading a cracked version, you are violating their intellectual property rights and breaking the law. You could face fines, lawsuits, or even jail time if you are caught.
-
It is **unsafe**. Cracked software often comes with viruses, malware, spyware, or ransomware that can harm your computer and compromise your security. You could lose your data, have your identity stolen, or pay a ransom to unlock your files. You could also spread the infection to other devices on your network.
-
It is **unreliable**. Cracked software often has bugs, errors, glitches, or missing features that can affect its performance and functionality. You could experience crashes, freezes, corrupted files, or incorrect outputs. You could also miss out on updates, patches, support, or new features that the official version offers.
-
It is **unethical**. By downloading Bartender 2022 full crack, you are depriving the developers of their rightful income and recognition. You are also hurting the software industry and the economy as a whole, and showing disrespect for the hard work and creativity that goes into creating software.
-
-
As you can see, downloading Bartender 2022 full crack is not worth the risk or the hassle. You are better off using a legitimate version of the software that is safe, legal, reliable, and ethical.
-
What Are Some Better Alternatives to Downloading Bartender 2022 Full Crack?
-
If you want to use Bartender 2022 without breaking the law or endangering your computer, here are some better alternatives:
-
-
Buy the official version. The best way to use Bartender 2022 is to buy the official version from the Seagull Scientific website or an authorized reseller. You can choose from different editions and pricing plans that suit your needs and budget. You will also get access to updates, support, documentation, and training.
-
Use the free trial. If you want to try Bartender 2022 before buying it, you can use the free trial version that is available on the Seagull Scientific website. The free trial lasts for 30 days and allows you to use all the features of Bartender 2022 without any limitations. You can also extend the trial period by contacting Seagull Scientific.
-
Use alternative software. If you don't want to buy or use Bartender 2022 at all, you can look for an alternative that offers similar or better features and functionality. There are many free or low-cost label design programs that you can download or use online. Some examples are Labeljoy, ZebraDesigner, NiceLabel, Label Factory Deluxe, etc.
-
-
By using these alternatives, you can enjoy the benefits of Bartender 2022 without risking your legal status or computer security.
-
Conclusion
-
Bartender 2022 is a powerful and versatile software that can help you create professional and compliant labels for various purposes. However, downloading Bartender 2022 full crack is not a smart or safe option. You could face legal troubles, damage your computer, or expose your data to hackers and malware.
-
Instead of downloading Bartender 2022 full crack, you should use a legitimate version of the software that is safe, legal, reliable, and ethical.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md
deleted file mode 100644
index 789df0006c9f0ee52515219e1324a2810fe21e9a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Acrobat: Commercial software; Reader: Freeware. Website. acrobat.adobe.com. Adobe Acrobat is a family of application software and Web services developed by Adobe Inc. to ... Acrobat XI Pro (for Windows and macOS); Acrobat XI Standard (for Windows only) ... "Download new and previous versions of Adobe Reader". 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md b/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md
deleted file mode 100644
index 314243b83afa0d2c7f065e34658fd0142b495a74..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-TOHU (2021) PC | RePack от FitGirl · www.trackeroc.... 2 | 0. 980 MB. 0 | 313. 0. 2021-02-02 12:19. www.trackeroc.... √· Keep Out [ENG + 3] (2021) (1.0.0.6). 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md b/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md
deleted file mode 100644
index 65c5fa3cdce6eb42acbce8a7e9683b8cb6c27fea..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
download bollywood full movie bollywood movies (2018) download free bollywood movies, download.. itubego youtube downloader 1.3.1 with [latest] crack. apps full version is a world famous website to download latest softwares free download for windows, mac os, android, pc,. itubego youtube downloader 4.1.1 + crack. flixicam netflix video downloader 1.1 + patch.1 with crack download [latest] save. idle shopping mall (mod apk) start with a little coffee shop and.
youtube downloader 4.9.5.2023 + crack. youtube downloader 4.2023 [crack + patch] is a powerful and safe video downloader for windows. you can download any video from youtube. it is not a.. itubego youtube downloader 4.1.5 + crack.5 with crack free download is a convenient downloader application that allows you to save videos and audio for free.5 with crack and serial free download .5 with crack free download .5 with crack free download itubego youtube downloader 4.
-
get itubego youtube downloader 4.1.1 + crack + serial number.. get itubego youtube downloader 4.1 + crack + serial number from given link. you can download itubego youtube downloader 4.1 + crack + serial number free without any charges. itubego youtube downloader 4.1 + crack + serial number is a.1 with crack + serial number free download.1 with crack + serial number is a handy application that can be used to download your favorite videos from youtube. the publisher of this app has not provided any details about this itubego youtube downloader 4.1 with crack + serial number free download at this.1 with crack free download.1 with crack is a convenient application that can be used to download your favorite videos from youtube.1 with crack is a handy application that can be used to download your favorite videos from youtube.
Dreamup 1 3 3 8 Exe Download: How to Flash Your Dreambox with Ease
-
If you are looking for a tool to flash your Dreambox hardware, you might want to try Dreamup 1 3 3 8 Exe Download. This is a free and easy-to-use program that allows you to load images into your Dreambox over a serial connection. In this article, we will show you how to use Dreamup 1 3 3 8 Exe Download and what its benefits are.
-
What is Dreamup 1 3 3 8 Exe Download?
-
Dreamup 1 3 3 8 Exe Download is the official loader from Dream Multimedia, the company that produces Dreambox devices. Dreambox is a series of Linux-powered satellite receivers that can be customized with various software and plugins. Dreamup 1 3 3 8 Exe Download allows you to flash your Dreambox with new firmware or images, which can enhance its performance and features.
Using Dreamup 1 3 3 8 Exe Download is very simple and takes only about 15 minutes to complete the flashing process. Here are the steps you need to follow:
-
-
Download Dreamup 1 3 3 8 Exe from a reliable source and install it on your computer.
-
Connect your Dreambox to your computer via a serial cable.
-
Run Dreamup and select your Dreambox model from the drop-down menu.
-
Click on Connect and wait for the program to detect your device.
-
Click on Flash and browse for the image file you want to load into your Dreambox.
-
Click on Open and wait for the program to transfer the image to your device.
-
Once the transfer is complete, the program calculates a CRC32 checksum on the set-top box, erases the flash memory, and writes the new image from memory (see the checksum sketch after this list).
-
Click on OK and disconnect your device.
-
Restart your Dreambox and enjoy the new image.
-
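A quick aside on that CRC32 step: CRC32 is simply an integrity checksum of the image data, used to confirm the file arrived intact before it is written to flash. Below is a minimal, hypothetical Python sketch (it is not part of Dreamup and does not talk to the box) showing how you could compute the CRC32 of an image file yourself and compare it with a checksum published by the image's author; the file name and expected value are placeholders.

```python
import zlib

def crc32_of_file(path: str, chunk_size: int = 1 << 20) -> int:
    """Stream the file in chunks and return its CRC32 as an unsigned 32-bit value."""
    crc = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

# Placeholder values: use the checksum published with the image you downloaded.
expected = 0x1A2B3C4D
actual = crc32_of_file("dreambox_image.nfi")
print(f"CRC32 = {actual:08X}", "OK" if actual == expected else "MISMATCH - do not flash")
```

If the values do not match, the download is incomplete or corrupted and should not be flashed.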
-
What are the Benefits of Dreamup 1 3 3 8 Exe Download?
-
Dreamup 1 3 3 8 Exe Download has several benefits for Dreambox users, such as:
-
-
It is free and easy to use.
-
It supports all models of Dreambox devices.
-
It allows you to flash your device with any image you want, whether it is official or unofficial.
-
It can fix some common problems with your device, such as boot loops or corrupted firmware.
-
It can improve the performance and functionality of your device by adding new features and plugins.
-
-
Conclusion
-
Dreamup 1 3 3 8 Exe Download is a handy tool for anyone who owns a Dreambox device and wants to flash it with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. If you are looking for a way to flash your Dreambox with ease, you should give Dreamup 1 3 3 8 Exe Download a try.
-
Where to Download Dreamup 1 3 3 8 Exe?
-
If you want to download Dreamup 1 3 3 8 Exe, you need to be careful about the source you choose. There are many websites that offer this program, but some of them might be unreliable or unsafe. You should always download Dreamup 1 3 3 8 Exe from a trusted and reputable source, such as the official website of Dream Multimedia or a well-known download portal such as Software Informer. This way, you can avoid downloading viruses, malware or corrupted files that might damage your device or compromise your privacy.
-
How to Update Dreamup 1 3 3 8 Exe?
-
Dreamup 1 3 3 8 Exe is not the latest version of the program. There are newer versions available that might have some bug fixes or improvements. If you want to update Dreamup 1 3 3 8 Exe, you can check the official website of Dream Multimedia or a download portal such as Software Informer for the latest version of Dreamup. You can also use the built-in update feature of the program, which will automatically check for updates and download them if available. To use this feature, you need to run Dreamup and click on Help > Check for Updates.
-
What are the Alternatives to Dreamup 1 3 3 8 Exe?
-
Dreamup 1 3 3 8 Exe is not the only tool that can flash your Dreambox device. There are some alternatives that you might want to try, such as:
-
-
Dreambox Control Center: This is a Windows-based application that allows you to manage your Dreambox device via network or serial. You can flash your device, backup and restore settings, edit channels and bouquets, upload plugins and more.
-
FlashWizard Pro: This is a multi-platform application that can flash your Dreambox device via network or USB. You can flash your device with multiple images, backup and restore settings, install addons and more.
-
DreamboxEdit: This is a Windows-based application that allows you to edit and create channel lists for your Dreambox device. You can sort channels, create bouquets, add logos and more.
-
-
Conclusion
-
Dreamup 1 3 3 8 Exe is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe and how to use it.
-
-
How to Troubleshoot Dreamup 1 3 3 8 Exe Download?
-
Sometimes, you might encounter some problems when using Dreamup 1 3 3 8 Exe Download. For example, you might get an error message, a connection failure, a corrupted image or a bricked device. In such cases, you need to troubleshoot Dreamup 1 3 3 8 Exe Download and find out the cause of the problem. Here are some common troubleshooting tips:
-
-
Make sure you have downloaded Dreamup 1 3 3 8 Exe from a reliable source and that the file is not damaged or infected.
-
Make sure you have installed the correct drivers for your Dreambox device and that they are up to date.
-
Make sure you have connected your Dreambox device to your computer properly and securely via a serial cable.
-
Make sure you have selected the right model of your Dreambox device from the drop-down menu in Dreamup.
-
Make sure you have chosen a compatible image file for your Dreambox device and that it is not corrupted or modified.
-
Make sure you have enough free space on your Dreambox device and on your computer for the flashing process.
-
Make sure you have disabled any antivirus, firewall or other security software that might interfere with the flashing process.
-
Make sure you have followed the instructions carefully and not interrupted the flashing process.
-
-
If none of these tips work, you might need to contact the support team of Dream Multimedia or a professional technician for further assistance.
-
What are the Reviews of Dreamup 1 3 3 8 Exe Download?
-
Dreamup 1 3 3 8 Exe Download has received many positive reviews from users who have used it to flash their Dreambox devices. Here are some of the reviews from different sources:
-
"Dreamup is a great tool for flashing your Dreambox. It is easy to use and works flawlessly. I have used it several times to update my device and never had any issues. Highly recommended." - User from Software Informer
-
"I have been using Dreamup for years and it never disappoints me. It is the best way to flash your Dreambox with any image you want. It is fast, reliable and safe. I love it." - User from SoundCloud
-
"Dreamup is a must-have for any Dreambox owner. It is the official loader from Dream Multimedia and it supports all models of Dreambox devices. It can fix any problem with your device and improve its performance and features. It is awesome." - User from Dreambox4U
-
Conclusion
-
Dreamup 1 3 3 8 Exe Download is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe Download and how to use it.
-
How to Choose the Best Image for Your Dreambox Device?
-
One of the advantages of using Dreamup 1 3 3 8 Exe Download is that you can flash your Dreambox device with any image you want. However, not all images are created equal. Some images might have more features, plugins, skins or compatibility than others. Therefore, you need to choose the best image for your Dreambox device according to your preferences and needs. Here are some tips on how to choose the best image for your Dreambox device:
-
-
Check the compatibility of the image with your Dreambox model. Some images might not work well or at all with certain models of Dreambox devices. You can check the compatibility of the image by reading its description, reviews or comments from other users.
-
Check the features and plugins of the image. Some images might have more features and plugins than others, such as EPG, PVR, IPTV, games, emulators, media players and more. You can check the features and plugins of the image by reading its description, reviews or comments from other users.
-
Check the skins and themes of the image. Some images might have more skins and themes than others, which can change the appearance and layout of your Dreambox device. You can check the skins and themes of the image by viewing its screenshots or videos.
-
Check the stability and performance of the image. Some images might be more stable and faster than others, which can affect the reliability and speed of your Dreambox device. You can check the stability and performance of the image by reading its description, reviews or comments from other users.
-
Check the updates and support of the image. Some images might be more updated and supported than others, which can affect the security and functionality of your Dreambox device. You can check the updates and support of the image by visiting its official website or forum.
-
-
By following these tips, you can choose the best image for your Dreambox device that suits your preferences and needs.
-
How to Back Up and Restore Your Dreambox Settings?
-
Before you use Dreamup 1 3 3 8 Exe Download to flash your Dreambox device with a new image, you might want to back up your current settings first. This way, you can restore them later if you are not satisfied with the new image or if something goes wrong during the flashing process. Here are the steps to back up and restore your Dreambox settings:
-
-
Connect your Dreambox device to your computer via network or USB.
-
Run a backup tool such as Dreambox Control Center or FlashWizard Pro.
-
Select your Dreambox model from the drop-down menu.
-
Select Backup from the menu bar.
-
Select a location on your computer where you want to save your backup file.
-
Click on Start Backup and wait for the process to complete.
-
To restore your settings, run the same backup tool and select Restore from the menu bar.
-
Select your backup file from your computer.
-
Click on Start Restore and wait for the process to complete.
-
-
By following these steps, you can back up and restore your Dreambox settings easily and safely; a scripted alternative is sketched below.
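The steps above rely on graphical tools. As a rough alternative, here is a hedged Python sketch of the same backup idea. It assumes your Dreambox image runs an FTP server and keeps its settings under /etc/enigma2, which is typical of Enigma2-based images; the host address, login, and directory names are placeholders that you would replace with your own values.

```python
import os
from ftplib import FTP, error_perm

# Placeholders: adjust to your own box, credentials, and image layout.
HOST = "192.168.1.50"
USER = "root"
PASSWORD = "your-password"
REMOTE_DIR = "/etc/enigma2"   # assumed settings location on Enigma2 images
LOCAL_DIR = "dreambox_backup"

os.makedirs(LOCAL_DIR, exist_ok=True)

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.cwd(REMOTE_DIR)
    for name in ftp.nlst():
        local_path = os.path.join(LOCAL_DIR, name)
        try:
            with open(local_path, "wb") as out:
                # RETR only works for regular files; directories raise error_perm.
                ftp.retrbinary(f"RETR {name}", out.write)
            print(f"Saved {name}")
        except error_perm:
            os.remove(local_path)  # drop the empty file created above
            print(f"Skipped {name} (not a regular file)")

print("Backup written to", LOCAL_DIR)
```

Restoring would be the mirror operation: uploading the saved files back to the same directory with FTP's STOR command (storbinary in ftplib) and then restarting the receiver.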
-
Conclusion
-
Dreamup 1 3 3 8 Exe Download is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. You should also choose the best image for your device, back up and restore your settings before flashing, and troubleshoot any problems that might occur. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe Download and how to use it.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md
deleted file mode 100644
index c42b4e9869d3305ccfa104f37cd20e4914bd3500..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
How to Download Hack Anger of Stick 5 and Enjoy Unlimited Fun
Are you a fan of stickman action games? Do you love fighting zombies and enemies with your stick friends? If yes, then you must have heard of Anger of Stick 5, one of the most popular stickman games on Android and iOS devices.
-
Anger of Stick 5 is a thrilling game that lets you control a stickman hero and his allies as they fight against a strange group of enemies that have captured innocent people and turned them into zombies. You can use various weapons, skills, helicopters, robots, and more to defeat your foes and save the city.
But what if you want to have more fun and excitement in this game? What if you want to unlock all the features, items, modes, and levels without spending any money or time? What if you want to become the ultimate stickman warrior and dominate every battle?
-
Well, there is a way to do that. You can download hack Anger of Stick 5 and enjoy unlimited fun in this game. In this article, we will tell you everything you need to know about hack Anger of Stick 5, including what it is, why you need it, how to download it, how to use it, and what are the risks involved. So, let's get started!
-
What is Anger of Stick 5?
-
Anger of Stick 5 is a stickman action game developed by J-PARK. It is available on both Android and iOS platforms. It has over 100 million downloads on Google Play Store and over 10 million downloads on App Store. It has a rating of 4.5 stars out of 5 on both platforms.
-
The game has two modes: single mode and zombie mode. In single mode, you can choose from six different stickman heroes, each with their own skills and abilities. You can also recruit up to three allies to help you in your missions. You can upgrade your weapons and skills as you progress in the game. You can also use helicopters, robots, and mechs to enhance your firepower and mobility.
-
In zombie mode, you can fight against endless waves of zombies and other enemies. You can use various weapons and items to survive as long as possible. You can also compete with other players on the leaderboard and see who can score the highest.
-
Anger of Stick 5 is a fun and addictive game that will keep you entertained for hours. However, it is not an easy game. You will face many challenges and difficulties as you play. You will need a lot of coins, gems, and energy to unlock all the features, items, modes, and levels in the game. You will also need a lot of skill and strategy to win every battle.
-
download anger of stick 5 mod apk unlimited money
-how to hack anger of stick 5 zombie with lucky patcher
-anger of stick 5 cheats codes for android
-download anger of stick 5 mod menu apk
-anger of stick 5 hack version download for pc
-download anger of stick 5 zombie mod apk latest version
-anger of stick 5 unlimited coins and gems hack
-how to get free diamonds in anger of stick 5
-download anger of stick 5 mod apk revdl
-anger of stick 5 hack online generator
-download anger of stick 5 mod apk happymod
-anger of stick 5 hack tool no survey no password
-download anger of stick 5 mod apk android 1
-how to unlock all characters in anger of stick 5 hack
-anger of stick 5 hack apk download uptodown
-download anger of stick 5 mod apk rexdl
-anger of stick 5 hack without human verification
-how to get unlimited health in anger of stick 5
-download anger of stick 5 mod apk an1.com[^1^]
-anger of stick 5 hack ios download
-download anger of stick 5 mod apk offline
-how to hack anger of stick 5 with game guardian
-anger of stick 5 cheat engine for windows
-download anger of stick 5 mod apk pure
-anger of stick 5 hack apk mediafıre link
-download anger of stick 5 mod apk obb
-how to hack anger of stick 5 with root
-anger of stick 5 cheat codes for ios
-download anger of stick 5 mod apk unlimited diamonds
-how to get free weapons in anger of stick 5
-
That's why you might want to download hack Anger of Stick 5 and enjoy unlimited fun in this game.
-
Why Do You Need Hack Anger of Stick 5?
-
Hack Anger of Stick 5 is a tool that can help you modify the game and get unlimited resources, such as coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more. With hack Anger of Stick 5, you can enjoy the following benefits:
-
-
You can unlock all the features, items, modes, and levels in the game without spending any money or time.
-
You can get unlimited coins and gems to buy anything you want in the game.
-
You can get unlimited energy to play as long as you want without waiting for it to recharge.
-
You can get unlimited weapons and skills to equip your stickman hero and allies with the best gear.
-
You can get unlimited helicopters, robots, and mechs to use in your missions and battles.
-
You can get unlimited health and damage to become invincible and defeat any enemy.
-
You can get unlimited fun and excitement in this game without any hassle or frustration.
-
-
On the other hand, if you play without hack Anger of Stick 5, you might face the following drawbacks:
-
-
You might have to spend a lot of money or time to unlock all the features, items, modes, and levels in the game.
-
You might run out of coins and gems to buy anything you want in the game.
-
You might run out of energy to play as long as you want and have to wait for it to recharge.
-
You might run out of weapons and skills to equip your stickman hero and allies with the best gear.
-
You might run out of helicopters, robots, and mechs to use in your missions and battles.
-
You might run out of health and damage to become invincible and defeat any enemy.
-
You might run out of fun and excitement in this game due to the hassle or frustration.
-
-
As you can see, hack Anger of Stick 5 can make a huge difference in your gaming experience. It can make the game more enjoyable and rewarding for you. It can also save you a lot of time and money that you would otherwise spend on the game.
-
How to Download Hack Anger of Stick 5?
-
If you are convinced that hack Anger of Stick 5 is what you need to have more fun in this game, then you might be wondering how to download it. Well, it's not that hard. You just need to follow these simple steps:
-
-
Find a reliable website that offers hack tools for Anger of Stick 5. You can search on Google or ask your friends for recommendations.
-
Choose the hack tool that suits your needs and preferences. There are different types of hack tools for Anger of Stick 5, such as mod apk files, online generators, cheat codes, etc. Each type has its own advantages and disadvantages. You should read the reviews and ratings of each hack tool before downloading it.
-
Download the hack tool from the website. Make sure that the website is safe and secure. Avoid downloading from suspicious or unknown sources that might contain viruses or malware.
-
Install the hack tool on your device. If you are using a mod apk file, you will need to enable unknown sources in your device settings before installing it. If you are using an online generator or cheat code, you will need to enter your username or email address associated with your game account before generating or activating it.
-
Launch the hack tool and enjoy unlimited fun in Anger of Stick 5!
How to Use Hack Anger of Stick 5?
-
Now that you have downloaded hack Anger of Stick 5, you might be wondering how to use it. Well, it's not that hard either. You just need to follow these simple tips and tricks:
-
-
If you are using a mod apk file, you will need to uninstall the original game from your device before installing the hacked version. This will prevent any conflicts or errors between the two versions.
-
If you are using an online generator or cheat code, you will need to connect your device to the internet before using it. This will ensure that the hack tool can access your game account and modify it accordingly.
-
If you are using a hack tool that requires verification or human confirmation, you will need to complete a short survey or offer before using it. This will prove that you are not a bot and prevent abuse of the hack tool.
-
Once you have launched the hack tool, you will see a user interface that will allow you to customize your game settings and preferences. You can choose how many coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more you want to have in your game.
-
After you have made your choices, you can click on the start or generate button and wait for a few seconds or minutes for the hack tool to work its magic. You will see a confirmation message or notification when the hack tool has successfully modified your game account.
-
Finally, you can open your game and enjoy unlimited fun in Anger of Stick 5!
-
-
That's it! You have successfully used hack Anger of Stick 5 and made the game more enjoyable and rewarding for yourself. You can now play as long as you want without any limitations or restrictions. You can now unlock all the features, items, modes, and levels in the game without any hassle or frustration. You can now become the ultimate stickman warrior and dominate every battle.
-
What are the Risks of Using Hack Anger of Stick 5?
-
However, before you get too excited and start using hack Anger of Stick 5, you should also be aware of the risks involved. Using hack tools for any game is not without consequences. You might face some potential dangers or problems if you use hack Anger of Stick 5. Here are some of them:
-
-
You might get banned from the game or lose your game account. The developers of Anger of Stick 5 might detect that you are using hack tools and take action against you. They might suspend or terminate your game account for violating their terms of service or policies. They might also ban your device from accessing their servers or services.
-
You might get infected with viruses or malware. The website or source that offers hack tools for Anger of Stick 5 might not be trustworthy or secure. They might contain harmful or malicious files that can damage your device or steal your personal information. They might also redirect you to phishing or scam sites that can trick you into giving up your money or data.
-
You might get scammed or cheated. The website or source that offers hack tools for Anger of Stick 5 might not be reliable or honest. They might not deliver what they promise or charge you hidden fees or subscriptions. They might also ask for your personal or financial information that they can use for fraudulent purposes.
-
-
As you can see, using hack Anger of Stick 5 is not without risks. You might end up losing more than what you gain if you use hack tools for this game. You might also ruin the fun and challenge of the game by making it too easy or unfair.
-
So, how can you avoid or minimize these risks? Here are some ways:
-
-
You should use hack tools for Anger of Stick 5 at your own risk and discretion. You should understand the consequences and responsibilities of using hack tools for this game. You should also respect the rights and rules of the developers and other players of this game.
-
You should use hack tools for Anger of Stick 5 sparingly and moderately. You should not abuse or overuse hack tools for this game. You should also not use hack tools for this game in competitive or multiplayer modes where they can affect other players negatively.
-
You should use hack tools for Anger of Stick 5 from reputable and verified sources only. You should do some research and check the reviews and ratings of each website or source that offers hack tools for this game. You should also scan and test each file or link before downloading or using it.
-
-
By following these tips, you can reduce the chances of getting into trouble or harm when using hack Anger of Stick 5. You can also enjoy the game more without compromising its quality or integrity.
-
Conclusion
-
In conclusion, hack Anger of Stick 5 is a tool that can help you modify the game and get unlimited resources, such as coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more. It can make the game more fun and exciting for you. It can also save you a lot of time and money that you would otherwise spend on the game.
-
However, hack Anger of Stick 5 is not without risks. You might get banned from the game or lose your game account. You might get infected with viruses or malware. You might get scammed or cheated. You might also ruin the fun and challenge of the game by making it too easy or unfair.
-
Therefore, you should use hack Anger of Stick 5 at your own risk and discretion. You should understand the consequences and responsibilities of using hack tools for this game. You should also respect the rights and rules of the developers and other players of this game.
-
You should also use hack Anger of Stick 5 sparingly and moderately. You should not abuse or overuse hack tools for this game. You should also not use hack tools for this game in competitive or multiplayer modes where they can affect other players negatively.
-
Finally, you should use hack Anger of Stick 5 from reputable and verified sources only. You should do some research and check the reviews and ratings of each website or source that offers hack tools for this game. You should also scan and test each file or link before downloading or using it.
-
By following these tips, you can reduce the chances of getting into trouble or harm when using hack Anger of Stick 5. You can also enjoy the game more without compromising its quality or integrity.
-
We hope that this article has helped you learn how to download hack Anger of Stick 5 and enjoy unlimited fun in this game. If you have any questions or comments, feel free to leave them below. We would love to hear from you!
-
FAQs
-
Here are some frequently asked questions about hack Anger of Stick 5:
-
Q: Is hack Anger of Stick 5 legal?
-
A: Hack Anger of Stick 5 is not legal. It is against the terms of service and policies of the developers of Anger of Stick 5. It is also considered as cheating or hacking by other players of this game. Therefore, using hack Anger of Stick 5 can result in legal actions or penalties from the developers or other players.
-
Q: Is hack Anger of Stick 5 safe?
-
A: Hack Anger of Stick 5 is not safe. It can expose your device or data to viruses or malware. It can also expose your personal or financial information to phishing or scam sites. It can also expose your game account to suspension or termination. Therefore, using hack Anger of Stick 5 can result in safety issues or problems for you.
-
Q: Is hack Anger of Stick 5 free?
-
A: Hack Anger of Stick 5 is not free. It can cost you money or time to download or use it. It can also cost you money or time to fix any issues or problems that it might cause for your device, data, or game account. Therefore, using hack Anger of Stick 5 can result in hidden fees or subscriptions for you.
-
Q: Is hack Anger of Stick 5 worth it?
-
A: Hack Anger of Stick 5 is not worth it. It can ruin the fun and challenge of the game by making it too easy or unfair. It can also ruin the quality and integrity of the game by modifying it without permission or authorization. It can also ruin your reputation and relationship with other players by cheating or hacking in this game. Therefore, using hack Anger of Stick 5 can result in negative impacts or outcomes for you.
-
Q: Is there an alternative to hack Anger of Stick 5?
-
A: Yes, there is an alternative to hack Anger of Stick 5. You can play the game without using any hack tools and enjoy it as it is meant to be played. You can earn coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more by playing the game fairly and honestly. You can also improve your skill and strategy by playing the game regularly and diligently. You can also interact with other players by playing the game cooperatively and competitively. Therefore, playing the game without using any hack tools can result in positive experiences or benefits for you.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md
deleted file mode 100644
index eb2eb5d7de008562d9915634ff7603e01c29fa96..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
Basketball Grand Slam APK: A Real-Time Competitive Basketball Game
-
Introduction
-
If you are a fan of basketball and want to experience the thrill of real-time competition on your mobile device, then you should check out Basketball Grand Slam APK. This is a game that lets you play with or against other players from around the world in various modes and events. You can also unlock and use legendary players with different skills and characteristics to form your own super lineup.
Some of the main features of Basketball Grand Slam APK are:
-
-
Real-time synchronization technology that ensures fairness and smoothness between players.
-
Optimized controls and responsive handling that let you move your player with ease and precision.
-
Various game modes that cater to different preferences and levels of players.
-
Legendary players that have versatile skills and unique styles.
-
Innovative hot three-point ball competition that tests your shooting accuracy and speed.
-
-
To download and install Basketball Grand Slam APK, you need to follow these steps:
-
-
Go to a trusted download page such as "Basketball Grand Slam APK (Android Game) - Free Download" or "Basketball Grand Slam for Android - Download the APK from Uptodown" and click on the download button.
-
Once the download is complete, open the file and follow the instructions to install the game on your device.
-
Launch the game and enjoy playing with other basketball fans.
-
-
Game Modes
-
3v3 Qualifying Mode
-
This is the mode where you can compete with other players in a 3v3 format. You can either join a random team or invite your friends to form your own team. The goal is to win as many matches as possible and climb up the ranking ladder. You can also earn rewards such as coins, gems, chests, and tickets by playing this mode.
-
The benefits of playing this mode are:
-
-
You can improve your skills and tactics by facing different opponents.
-
You can enjoy the cooperation and communication with your teammates.
-
You can challenge yourself and test your limits by competing with higher-ranked players.
-
-
Bullfighting Grand Prix Mode
-
This is the mode where you can participate in various events and tournaments that have different rules and rewards. You can choose from different difficulty levels and modes such as knockout, round robin, and ladder. You can also customize your own event and invite other players to join. The goal is to win as many matches as possible and earn trophies and prizes.
-
The challenges of playing this mode are:
-
-
You need to adapt to different rules and conditions that change every event.
-
You need to face stronger and more diverse opponents that have different strategies and skills.
-
You need to manage your stamina and resources wisely as you play multiple matches in a row.
-
-
Hot Three-Point Ball Competition
-
This is the mode where you can show off your shooting skills and compete with other players in a hot three-point ball competition. You can choose from different courts and backgrounds that have different effects on your shooting. You can also use different props and items that can enhance or hinder your performance. The goal is to score as many points as possible within the time limit and beat your opponents.
-
The skills required for playing this mode are:
-
-
You need to have a good sense of timing and rhythm to release the ball at the right moment.
-
You need to have a good aim and accuracy to hit the target and avoid the obstacles.
-
You need to have a good strategy and judgment to use the props and items effectively.
-
-
Legendary Players
-
How to unlock legendary players?
-
To unlock legendary players, you need to collect their cards and fragments. You can get them from various sources such as chests, events, rewards, and shops. You can also exchange them with other players or use gems to buy them. Once you have enough cards and fragments, you can activate and upgrade the legendary players in your lineup.
-
basketball grand slam game download
-basketball grand slam android app
-basketball grand slam free apk
-basketball grand slam latest version
-basketball grand slam wang lan
-basketball grand slam real-time competitive
-basketball grand slam legendary players
-basketball grand slam 3v3 qualifying mode
-basketball grand slam bullfighting grand prix mode
-basketball grand slam three-point ball competition
-basketball grand slam street basketball game
-basketball grand slam fan page address
-basketball grand slam customer service email
-basketball grand slam apkcombo games sports
-basketball grand slam uptodown android games sports
-basketball grand slam apk file size
-basketball grand slam apk install guide
-basketball grand slam apk update history
-basketball grand slam apk reviews and ratings
-basketball grand slam apk screenshots and videos
-basketball grand slam apk mod unlimited money
-basketball grand slam apk offline play
-basketball grand slam apk compatible devices
-basketball grand slam apk download link
-basketball grand slam apk mirror link
-basketball grand slam apk alternative apps
-basketball grand slam apk similar games
-basketball grand slam apk tips and tricks
-basketball grand slam apk cheats and hacks
-basketball grand slam apk gameplay features
-basketball grand slam apk system requirements
-basketball grand slam apk bugs and issues
-basketball grand slam apk feedback and suggestions
-basketball grand slam apk questions and answers
-basketball grand slam apk news and updates
-basketball grand slam apk release date and version number
-basketball grand slam apk developer information and contact details
-basketball grand slam apk license and terms of service
-basketball grand slam apk privacy policy and data usage
-basketball grand slam apk security and virus scan results
-
What are the different types of legendary players?
-
There are different types of legendary players that have different attributes and skills. They are divided into four categories: lone hero, speedster, rebound king, and assist master. Here are some examples of each category:
-
Lone Hero
-
This type of legendary player is good at scoring by themselves. They have high offensive stats and skills that can help them break through the defense and make difficult shots. They are also good at creating their own space and opportunities. However, they may not be very good at passing or cooperating with their teammates. Some examples of this type are Kobe Bryant, Michael Jordan, Allen Iverson, etc.
-
Speedster
-
This type of legendary player is good at running fast and changing directions. They have high speed and agility stats and skills that can help them outrun their opponents and make quick moves. They are also good at stealing the ball and making fast breaks. However, they may not be very good at shooting or defending against bigger players. Some examples of this type are Stephen Curry, Kyrie Irving, Derrick Rose, etc.
Rebound King
-
This type of legendary player is good at grabbing rebounds and controlling the boards. They have high strength and jumping stats and skills that can help them dominate the paint and secure the ball. They are also good at blocking shots and protecting the rim. However, they may not be very good at dribbling or shooting from long range. Some examples of this type are Shaquille O'Neal, Wilt Chamberlain, Dennis Rodman, etc.
-
Assist Master
-
This type of legendary player is good at passing and assisting their teammates. They have high vision and intelligence stats and skills that can help them find the open man and create chances. They are also good at controlling the tempo and orchestrating the offense. However, they may not be very good at scoring by themselves or defending against faster players. Some examples of this type are Magic Johnson, Steve Nash, John Stockton, etc.
-
Tips and Tricks
-
How to improve your controls and handling?
-
To improve your controls and handling, practice and familiarize yourself with the game's controls and mechanics. You can use the training mode to learn the basic moves and skills of each player. You can also adjust the sensitivity and feedback settings to suit your preference, and watch tutorials and guides online to pick up tips and tricks from other players.
-
How to use different skills and tactics?
-
To use different skills and tactics, you need to know the strengths and weaknesses of each player and team. You can check the stats and attributes of each player in your lineup and choose the ones that match your style and strategy. You can also use the skill buttons to activate different skills such as crossover, dunk, block, etc. You can also use the tactic buttons to switch between different tactics such as man-to-man, zone, pick-and-roll, etc.
-
How to cooperate with your teammates?
-
To cooperate with your teammates, you need to communicate and coordinate with them. You can use the chat function or voice chat function to talk to your teammates and share information and ideas. You can also use the gesture function or emoji function to express your emotions and reactions. You can also use the pass button or assist button to pass the ball or assist your teammates.
-
Conclusion
-
Basketball Grand Slam APK is a game that allows you to enjoy the excitement and fun of basketball on your mobile device. You can play with or against other players from around the world in various modes and events. You can also unlock and use legendary players with different skills and characteristics to form your own super lineup. If you are a basketball fan, you should not miss this game.
-
So what are you waiting for? Download Basketball Grand Slam APK now and start playing with other basketball fans. You will not regret it!
-
FAQs
-
Here are some of the frequently asked questions about Basketball Grand Slam APK:
-
-
Q1: What are the system requirements for Basketball Grand Slam APK?
-A1: The system requirements for Basketball Grand Slam APK are Android 4.4 or higher, 2 GB of RAM or more, 500 MB of free storage space or more, and a stable internet connection.
-
Q2: Is Basketball Grand Slam APK free to play?
-A2: Yes, Basketball Grand Slam APK is free to play. However, there are some optional in-app purchases that can enhance your gaming experience.
-
Q3: How to contact customer service for Basketball Grand Slam APK?
-A3: You can contact customer service for Basketball Grand Slam APK by sending an email to support@basketballgrandslam.com or by visiting their official website, www.basketballgrandslam.com.
-
Q4: What are some of the best basketball trivia and facts?
-A4: Here are some of the best basketball trivia and facts:
-
-
| Trivia | Fact |
| --- | --- |
| The inventor of basketball was James Naismith. | He created the game in 1891 as a physical education instructor at Springfield College in Massachusetts. |
| The first official basketball game was played on January 20, 1892. | The game was played between two teams of nine players each at Springfield College. The final score was 1-0. |
| The first NBA game was played on November 1, 1946. | The game was played between the New York Knicks and the Toronto Huskies at Maple Leaf Gardens in Toronto. The final score was 68-66 in favor of the Knicks. |
| The shortest NBA player ever was Muggsy Bogues. | He was only 5 feet 3 inches tall and played as a point guard for 14 seasons in the NBA. |
| The tallest NBA player ever was Manute Bol. | He was 7 feet 7 inches tall and played as a center for 10 seasons in the NBA. He also holds the record for the most blocks per game in a season with 5.0. |
-
-
Q5: What are some of the basic basketball rules and terms?
-A5: Here are some of the basic basketball rules and terms:
-
-
A basketball game is played between two teams of five players each on a rectangular court with a basket at each end.
-
The objective of the game is to score more points than the opposing team by shooting the ball through the basket.
-
The game is divided into four quarters of 12 minutes each (or 10 minutes in international games). There is also a halftime break of 15 minutes between the second and third quarters.
-
The game is started with a jump ball at the center circle, where one player from each team tries to tip the ball to their teammates.
-
The team that has possession of the ball is called the offense, and the team that tries to stop them from scoring is called the defense.
-
The offense can advance the ball by passing, dribbling, or shooting. The defense can try to steal the ball, block shots, or force turnovers.
-
The offense must shoot the ball within 24 seconds of gaining possession (both the NBA and FIBA use a 24-second clock), otherwise they lose the ball. This is called the shot clock.
-
The offense must also cross the midcourt line within 8 seconds of gaining possession, otherwise they lose the ball. This is called the backcourt violation.
-
The offense cannot stay in the restricted area near the basket (also known as the paint or the key) for more than 3 seconds, otherwise they lose the ball. This is called the three-second violation.
-
The offense cannot dribble the ball with two hands at the same time or stop dribbling and then start again (double dribble), and cannot take extra steps while holding the ball (traveling); either violation costs them the ball.
-
The defense cannot touch the ball when it is on its way down to the basket or when it is on or above the rim, otherwise the shot counts for the offense. This is called goaltending.
-
The defense cannot foul the offense by hitting, pushing, holding, or tripping them, otherwise they give free throws to the offense. A free throw is an unopposed shot from the foul line worth one point. The number of free throws depends on the type and severity of the foul.
-
A foul committed during a shot attempt is called a shooting foul. If the shot goes in, it counts and the shooter gets one free throw (a three-point play, or a four-point play on a made three-pointer). If the shot misses, the shooter gets two or three free throws depending on whether they were shooting from inside or outside the three-point line.
-
A foul committed when the offense is not in a shooting position is called a non-shooting foul. If the defense has committed fewer than five team fouls in the quarter, the offense gets the ball out of bounds; otherwise they get two free throws. This is called the bonus situation.
-
A technical foul is a non-contact infraction that can be called whether or not the ball is in play, for example for arguing with the referee, taunting the opponent, or fighting. The offense gets one free throw and the ball out of bounds.
-
A foul committed intentionally to stop the clock or prevent a scoring opportunity is called an intentional foul. The offense gets two free throws and the ball out of bounds.
-
A foul committed with excessive or unnecessary force is called a flagrant foul. The offense gets two free throws and the ball out of bounds. A flagrant foul can also result in an ejection or a suspension for the offender.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Hair Challenge on PC and Customize Your Hair Color and Style.md b/spaces/1phancelerku/anime-remove-background/Download Hair Challenge on PC and Customize Your Hair Color and Style.md
deleted file mode 100644
index ddb4c6b4302c05cfc9bebba818f1cf00614c04d1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Hair Challenge on PC and Customize Your Hair Color and Style.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Hair Challenge Game Download for PC
-
Do you love playing with your hair and creating different styles? Do you enjoy running games that test your reflexes and skills? If you answered yes to both questions, then you should definitely try hair challenge game, a fun and popular hair runner game that lets you grow and style your hair as you race.
-
Hair challenge game is available for both mobile devices and PCs, but if you want to have the best gaming experience, you should download it for your PC. Why? Because you will get better graphics, performance, and controls than on your phone or tablet. Plus, you will be able to enjoy the game on a bigger screen and immerse yourself in the colorful and exciting world of hair challenge.
So, how do you download hair challenge game for your PC? It's actually very easy and simple. Just follow these steps:
-
How to Download Hair Challenge Game for PC
-
Install a Games Launcher
-
A games launcher is a program that allows you to access, buy, download, install, and play games on your PC. There are many games launchers out there, but one of the most popular and reliable ones is the Epic Games Launcher, which is also the official distributor of hair challenge game.
-
To install the Epic Games Launcher, you need to visit its website and click on the "Get Epic Games" button at the top right corner of the page. This will download a setup file that you need to run on your PC. Follow the instructions on the screen to complete the installation process.
-
Create an Account
-
Once you have installed the Epic Games Launcher, you need to create an account or sign in with an existing one. You can create an Epic Games account using your email address or phone number, or you can use your Google, Facebook, or console accounts to sign up.
-
Creating an account is free and easy, and it will allow you to access all the features of the Epic Games Launcher, such as browsing the store, downloading games, managing your library, connecting with friends, etc.
-
Select and Purchase the Game
-
Now that you have an account and a launcher installed, you can start looking for the game you want to play. In this case, hair challenge game. To do that, you need to open the Epic Games Launcher and click on the "Store" tab at the top of the window. This will take you to the online store where you can browse and buy games.
-
To find hair challenge game, you can use the search bar at the top right corner of the store page, or you can scroll down and look for it in the "Free Games" section. Yes, you read that right. Hair challenge game is currently free to download and play on your PC, so you don't have to pay anything to enjoy it.
-
Once you find the game, click on it to go to its product page. There, you will see a brief description of the game, some screenshots and videos, and the system requirements. You will also see a big blue button that says "Get". Click on it to add the game to your library.
-
-
Install and Play the Game
-
After you have added the game to your library, you are ready to download and install it on your PC. To do that, you need to go to the "Library" tab at the top of the Epic Games Launcher window. There, you will see a list of all the games you own or have access to.
-
Find hair challenge game in your library and click on it. This will open a new window where you can see more details about the game, such as its size, version, and update status. You will also see a button that says "Install". Click on it to start downloading and installing the game on your PC.
-
The download and installation process may take some time depending on your internet speed and PC performance. You can check the progress of the download and installation in the same window, or in the "Downloads" section of the launcher. You can also pause or resume the download at any time.
-
Once the download and installation are complete, you can start playing hair challenge game on your PC. To do that, just click on the "Launch" button in the same window where you installed the game, or in your library. This will open the game and let you enjoy it.
-
Tips and Tricks for Playing Hair Challenge Game
-
Hair challenge game is a fun and easy game to play, but it can also be challenging and addictive. Here are some tips and tricks to help you get better at it and have more fun:
-
Avoid Obstacles
-
The main goal of hair challenge game is to grow your hair as long as possible while running through different levels. However, there are many obstacles that can cut or damage your hair along the way, such as scissors, blades, flames, lasers, etc. You need to avoid these obstacles by moving left or right, jumping over them, or ducking under them.
-
To move left or right, you can use the arrow keys on your keyboard, or drag your mouse left or right. To jump or duck, you can use the spacebar on your keyboard, or click with your mouse. Be careful not to hit any obstacles or walls, as this will reduce your hair length and score.
-
Collect Hair Extensions
-
Another way to grow your hair longer is to collect hair extensions that are scattered throughout the levels. These are weaves of different colors and lengths that will add more volume and beauty to your hair. You can collect them by running over them or jumping to reach them.
-
Some hair extensions are hidden or hard to reach, so you need to be observant and creative to find them. Some hair extensions also have special effects, such as rainbow colors, sparkles, or stars. Try to collect as many hair extensions as you can to make your hair longer and more fabulous.
-
Customize Your Character
-
Hair challenge game is not only about growing your hair, but also about styling it and expressing yourself. You can customize your character by unlocking and choosing different characters, hair dyes, and accessories from the hair shop.
-
To access the hair shop, you need to click on the shopping cart icon at the top right corner of the main menu screen. There, you will see a variety of options to change your appearance. You can unlock new options by spending coins that you earn by playing the game or watching ads.
-
You can change your character's skin tone, eye color, outfit, shoes, etc. You can also change your hair color by choosing from different shades or patterns. You can even mix and match different colors for different parts of your hair. You can also add accessories such as hats, glasses, earrings, etc.
-
You can preview how your character looks by clicking on the "Try On" button at the bottom of the hair shop screen. You can also see how your character looks in different levels by clicking on the "Change Level" button at the top of the screen. You can choose from various themes and backgrounds, such as city, beach, forest, etc.
-
Once you are happy with your character's appearance, you can click on the "Save" button to confirm your changes. You can also click on the "Random" button to generate a random look for your character. You can change your character's look anytime you want by visiting the hair shop again.
-
Conclusion
-
Hair challenge game is a fun and addictive game that lets you grow and style your hair as you run through different levels. It is a great way to relax and have fun, as well as to unleash your creativity and personality.
-
If you want to enjoy hair challenge game on your PC, you can easily download it for free from the Epic Games Launcher. All you need to do is install the launcher, create an account, find the game, and install it. Then, you can start playing and customizing your character.
-
Hair challenge game is a game that anyone can play and enjoy, regardless of age or gender. It is a game that will make you smile and laugh, as well as challenge and reward you. So, what are you waiting for? Download hair challenge game for your PC today and have a hair-raising adventure!
-
FAQs
-
Here are some frequently asked questions and answers about hair challenge game or the download process:
-
-
Q: How much space does hair challenge game take on my PC?
-
A: The game requires about 200 MB of free disk space.
-
Q: How do I update hair challenge game on my PC?
-
A: The game will update automatically when you launch it from the Epic Games Launcher. You can also check for updates manually by clicking on the "Update" button in the library section of the launcher.
-
Q: How do I uninstall hair challenge game from my PC?
-
A: To uninstall the game, you need to go to the library section of the Epic Games Launcher, find the game, and click on the "Uninstall" button. You can also uninstall the game from the control panel of your PC.
-
Q: Is hair challenge game safe to download and play on my PC?
-
A: Yes, hair challenge game is safe and secure to download and play on your PC. The game is verified by Epic Games and does not contain any viruses or malware.
-
Q: Can I play hair challenge game offline on my PC?
-
A: Yes, you can play hair challenge game offline on your PC. However, you will not be able to access some features that require an internet connection, such as watching ads or connecting with friends.
- r"/H\:(?P<h1>.+?)\_(?P<h2>.+?)"
- r"/I\:(?P<i1>.+?)\-(?P<i2>.+?)\@(?P<i3>.+?)\+(?P<i4>.+?)\&(?P<i5>.+?)\-(?P<i6>.+?)\|(?P<i7>.+?)\+(?P<i8>.+?)" # noqa
- r"/J\:(?P<j1>.+?)\_(?P<j2>.+?)"
- r"/K\:(?P<k1>.+?)\+(?P<k2>.+?)\-(?P<k3>.+?)$",
- label,
- ).groupdict()
- return cls(contexts=contexts)
-
- @property
- def label(self):
- """
- Equal to the label obtained from pyopenjtalk.extract_fullcontext
- Returns
- -------
- label : str
- Returns the label
- """
- return (
- "{p1}^{p2}-{p3}+{p4}={p5}"
- "/A:{a1}+{a2}+{a3}"
- "/B:{b1}-{b2}_{b3}"
- "/C:{c1}_{c2}+{c3}"
- "/D:{d1}+{d2}_{d3}"
- "/E:{e1}_{e2}!{e3}_{e4}-{e5}"
- "/F:{f1}_{f2}#{f3}_{f4}@{f5}_{f6}|{f7}_{f8}"
- "/G:{g1}_{g2}%{g3}_{g4}_{g5}"
- "/H:{h1}_{h2}"
- "/I:{i1}-{i2}@{i3}+{i4}&{i5}-{i6}|{i7}+{i8}"
- "/J:{j1}_{j2}"
- "/K:{k1}+{k2}-{k3}"
- ).format(**self.contexts)
-
- @property
- def phoneme(self):
- """
- Returns the element of this phoneme class that is needed for vocalization
- Returns
- -------
- phoneme : str
- The element needed for vocalization (the phoneme itself)
- """
- return self.contexts["p3"]
-
- def is_pause(self):
- """
- Returns whether the phoneme is a pause (silence, silent/pause)
- Returns
- -------
- is_pause : bool
- Whether the phoneme is a pause (silence, silent/pause): True if so, False otherwise
- """
- return self.contexts["f1"] == "xx"
-
- def __repr__(self):
- return f"<Phoneme phoneme='{self.phoneme}'>"
-
-
-@dataclass
-class Mora:
- """
- Mora class
- A mora consists of either one phoneme (a vowel, the sokuon "っ", the hatsuon "ん", etc.) or two phonemes (a consonant-vowel pair)
-
- Attributes
- ----------
- consonant : Optional[Phoneme]
- Consonant
- vowel : Phoneme
- Vowel
- """
-
- consonant: Optional[Phoneme]
- vowel: Phoneme
-
- def set_context(self, key: str, value: str):
- """
- Changes the value of the specified key in the contexts of the Phonemes contained in this Mora
- If a consonant is present, its context is changed in the same way as the vowel's
- Parameters
- ----------
- key : str
- Key of the context entry to change
- value : str
- New value for the context entry
- """
- self.vowel.contexts[key] = value
- if self.consonant is not None:
- self.consonant.contexts[key] = value
-
- @property
- def phonemes(self):
- """
- Returns the phonemes
- Returns
- -------
- phonemes : List[Phoneme]
- If there is only a vowel, a list containing just the vowel; if there is also a consonant, the consonant followed by the vowel
- """
- if self.consonant is not None:
- return [self.consonant, self.vowel]
- else:
- return [self.vowel]
-
- @property
- def labels(self):
- """
- Returns the labels
- Returns
- -------
- labels : List[str]
- All labels contained in this Mora
- """
- return [p.label for p in self.phonemes]
-
-
-@dataclass
-class AccentPhrase:
- """
- Accent phrase class
- Holds multiple Moras that share the same accent
- Attributes
- ----------
- moras : List[Mora]
- List of moras
- accent : int
- Accent position
- """
-
- moras: List[Mora]
- accent: int
- is_interrogative: bool
-
- @classmethod
- def from_phonemes(cls, phonemes: List[Phoneme]):
- """
- Creates an AccentPhrase from a list of Phonemes
- Parameters
- ----------
- phonemes : List[Phoneme]
- List of phonemes
-
- Returns
- -------
- accent_phrase : AccentPhrase
- The created AccentPhrase
- """
- moras: List[Mora] = []
-
- mora_phonemes: List[Phoneme] = []
- for phoneme, next_phoneme in zip(phonemes, phonemes[1:] + [None]):
- # workaround for Hihosiba/voicevox_engine#57
- # (py)openjtalk numbers moras within an accent phrase only up to 49
- # for the 49th mora, the next phoneme's mora number can no longer be used to identify a single mora
- if int(phoneme.contexts["a2"]) == 49:
- break
-
- mora_phonemes.append(phoneme)
-
- if (
- next_phoneme is None
- or phoneme.contexts["a2"] != next_phoneme.contexts["a2"]
- ):
- if len(mora_phonemes) == 1:
- consonant, vowel = None, mora_phonemes[0]
- elif len(mora_phonemes) == 2:
- consonant, vowel = mora_phonemes[0], mora_phonemes[1]
- else:
- raise ValueError(mora_phonemes)
- mora = Mora(consonant=consonant, vowel=vowel)
- moras.append(mora)
- mora_phonemes = []
-
- accent = int(moras[0].vowel.contexts["f2"])
- # workaround for Hihosiba/voicevox_engine#55
- # the value of key f2, which is used as the accent position, can exceed the number of moras in the accent phrase
- accent = accent if accent <= len(moras) else len(moras)
- is_interrogative = moras[-1].vowel.contexts["f3"] == "1"
- return cls(moras=moras, accent=accent, is_interrogative=is_interrogative)
-
- def set_context(self, key: str, value: str):
- """
- Changes the value of the specified key in the contexts of all Phonemes indirectly contained in this AccentPhrase
- Parameters
- ----------
- key : str
- Key of the context entry to change
- value : str
- New value for the context entry
- """
- for mora in self.moras:
- mora.set_context(key, value)
-
- @property
- def phonemes(self):
- """
- Returns the phonemes
- Returns
- -------
- phonemes : List[Phoneme]
- All Phonemes indirectly contained in this AccentPhrase
- """
- return list(chain.from_iterable(m.phonemes for m in self.moras))
-
- @property
- def labels(self):
- """
- Returns the labels
- Returns
- -------
- labels : List[str]
- All labels indirectly contained in this AccentPhrase
- """
- return [p.label for p in self.phonemes]
-
- def merge(self, accent_phrase: "AccentPhrase"):
- """
- Merges AccentPhrases
- (appends the moras of the AccentPhrase passed as an argument after the moras held by this instance)
- Parameters
- ----------
- accent_phrase : AccentPhrase
- The AccentPhrase to merge
-
- Returns
- -------
- accent_phrase : AccentPhrase
- The merged AccentPhrase
- """
- return AccentPhrase(
- moras=self.moras + accent_phrase.moras,
- accent=self.accent,
- is_interrogative=accent_phrase.is_interrogative,
- )
-
-
-@dataclass
-class BreathGroup:
- """
- Breath group class (a segment of the utterance)
- Holds multiple accent phrases with differing accents
- Attributes
- ----------
- accent_phrases : List[AccentPhrase]
- List of accent phrases
- """
-
- accent_phrases: List[AccentPhrase]
-
- @classmethod
- def from_phonemes(cls, phonemes: List[Phoneme]):
- """
- Creates a BreathGroup from a list of Phonemes
- Parameters
- ----------
- phonemes : List[Phoneme]
- List of phonemes
-
- Returns
- -------
- breath_group : BreathGroup
- The created BreathGroup
- """
- accent_phrases: List[AccentPhrase] = []
- accent_phonemes: List[Phoneme] = []
- for phoneme, next_phoneme in zip(phonemes, phonemes[1:] + [None]):
- accent_phonemes.append(phoneme)
-
- if (
- next_phoneme is None
- or phoneme.contexts["i3"] != next_phoneme.contexts["i3"]
- or phoneme.contexts["f5"] != next_phoneme.contexts["f5"]
- ):
- accent_phrase = AccentPhrase.from_phonemes(accent_phonemes)
- accent_phrases.append(accent_phrase)
- accent_phonemes = []
-
- return cls(accent_phrases=accent_phrases)
-
- def set_context(self, key: str, value: str):
- """
- Changes the value of the specified key in the contexts of all Phonemes indirectly contained in this BreathGroup
- Parameters
- ----------
- key : str
- Key of the context entry to change
- value : str
- New value for the context entry
- """
- for accent_phrase in self.accent_phrases:
- accent_phrase.set_context(key, value)
-
- @property
- def phonemes(self):
- """
- Returns the phonemes
- Returns
- -------
- phonemes : List[Phoneme]
- All Phonemes indirectly contained in this BreathGroup
- """
- return list(
- chain.from_iterable(
- accent_phrase.phonemes for accent_phrase in self.accent_phrases
- )
- )
-
- @property
- def labels(self):
- """
- Returns the labels
- Returns
- -------
- labels : List[str]
- All labels indirectly contained in this BreathGroup
- """
- return [p.label for p in self.phonemes]
-
-
-@dataclass
-class Utterance:
- """
- Utterance class
- Holds multiple breath groups and pauses (silences)
- Attributes
- ----------
- breath_groups : List[BreathGroup]
- List of breath groups
- pauses : List[Phoneme]
- List of pauses (silences)
- """
-
- breath_groups: List[BreathGroup]
- pauses: List[Phoneme]
-
- @classmethod
- def from_phonemes(cls, phonemes: List[Phoneme]):
- """
- Creates an Utterance from the complete list of Phonemes
- Parameters
- ----------
- phonemes : List[Phoneme]
- List of phonemes
-
- Returns
- -------
- utterance : Utterance
- The created Utterance
- """
- pauses: List[Phoneme] = []
-
- breath_groups: List[BreathGroup] = []
- group_phonemes: List[Phoneme] = []
- for phoneme in phonemes:
- if not phoneme.is_pause():
- group_phonemes.append(phoneme)
-
- else:
- pauses.append(phoneme)
-
- if len(group_phonemes) > 0:
- breath_group = BreathGroup.from_phonemes(group_phonemes)
- breath_groups.append(breath_group)
- group_phonemes = []
-
- return cls(breath_groups=breath_groups, pauses=pauses)
-
- def set_context(self, key: str, value: str):
- """
- Changes the value of the specified key in the contexts of all Phonemes indirectly contained in this Utterance
- Parameters
- ----------
- key : str
- Key of the context entry to change
- value : str
- New value for the context entry
- """
- for breath_group in self.breath_groups:
- breath_group.set_context(key, value)
-
- @property
- def phonemes(self):
- """
- Returns the phonemes
- Returns
- -------
- phonemes : List[Phoneme]
- All Phonemes directly or indirectly contained in this Utterance
- """
- accent_phrases = list(
- chain.from_iterable(
- breath_group.accent_phrases for breath_group in self.breath_groups
- )
- )
- for prev, cent, post in zip(
- [None] + accent_phrases[:-1],
- accent_phrases,
- accent_phrases[1:] + [None],
- ):
- mora_num = len(cent.moras)
- accent = cent.accent
-
- if prev is not None:
- prev.set_context("g1", str(mora_num))
- prev.set_context("g2", str(accent))
-
- if post is not None:
- post.set_context("e1", str(mora_num))
- post.set_context("e2", str(accent))
-
- cent.set_context("f1", str(mora_num))
- cent.set_context("f2", str(accent))
- for i_mora, mora in enumerate(cent.moras):
- mora.set_context("a1", str(i_mora - accent + 1))
- mora.set_context("a2", str(i_mora + 1))
- mora.set_context("a3", str(mora_num - i_mora))
-
- for prev, cent, post in zip(
- [None] + self.breath_groups[:-1],
- self.breath_groups,
- self.breath_groups[1:] + [None],
- ):
- accent_phrase_num = len(cent.accent_phrases)
-
- if prev is not None:
- prev.set_context("j1", str(accent_phrase_num))
-
- if post is not None:
- post.set_context("h1", str(accent_phrase_num))
-
- cent.set_context("i1", str(accent_phrase_num))
- cent.set_context(
- "i5", str(accent_phrases.index(cent.accent_phrases[0]) + 1)
- )
- cent.set_context(
- "i6",
- str(len(accent_phrases) - accent_phrases.index(cent.accent_phrases[0])),
- )
-
- self.set_context(
- "k2",
- str(
- sum(
- [
- len(breath_group.accent_phrases)
- for breath_group in self.breath_groups
- ]
- )
- ),
- )
-
- phonemes: List[Phoneme] = []
- for i in range(len(self.pauses)):
- if self.pauses[i] is not None:
- phonemes += [self.pauses[i]]
-
- if i < len(self.pauses) - 1:
- phonemes += self.breath_groups[i].phonemes
-
- return phonemes
-
- @property
- def labels(self):
- """
- Returns the labels
- Returns
- -------
- labels : List[str]
- All labels directly or indirectly contained in this Utterance
- """
- return [p.label for p in self.phonemes]
-
-
-def extract_full_context_label(text: str):
- labels = pyopenjtalk.extract_fullcontext(text)
- phonemes = [Phoneme.from_label(label=label) for label in labels]
- utterance = Utterance.from_phonemes(phonemes)
- return utterance
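A minimal usage sketch of the helpers above (not part of the original file): it assumes pyopenjtalk is installed and that the module is importable so `extract_full_context_label` can be called directly; the test sentence is arbitrary.

```python
# Hedged sketch: "こんにちは" is an arbitrary Japanese test sentence.
utterance = extract_full_context_label("こんにちは")

# Phonemes needed for synthesis (pauses included) and the reconstructed full-context labels
print([p.phoneme for p in utterance.phonemes])
print(utterance.labels[0])
```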
diff --git a/spaces/A00001/bingothoo/src/components/toaster.tsx b/spaces/A00001/bingothoo/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/A666sxr/Genshin_TTS/mel_processing.py b/spaces/A666sxr/Genshin_TTS/mel_processing.py
deleted file mode 100644
index 817f03756f64caf8cc54329a9325024c8fb9e0c3..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
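A short usage sketch (not part of the original file). The STFT/mel parameters are illustrative placeholders; it also assumes the functions above are importable and that the installed torch/librosa versions still accept `torch.stft` without `return_complex` and the positional `librosa.filters.mel` call used above.

```python
import torch

# Illustrative parameters only; real values come from the model's data config.
sr, n_fft, num_mels, hop_size, win_size = 22050, 1024, 80, 256, 1024

# One second of dummy audio in [-1, 1], shape (batch, samples)
y = torch.rand(1, sr) * 2 - 1

mel = mel_spectrogram_torch(
    y, n_fft, num_mels, sr, hop_size, win_size, fmin=0.0, fmax=None, center=False
)
print(mel.shape)  # (1, 80, n_frames)
```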
diff --git a/spaces/AHzizi/WaifuVoiceGen/text/cleaners.py b/spaces/AHzizi/WaifuVoiceGen/text/cleaners.py
deleted file mode 100644
index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000
--- a/spaces/AHzizi/WaifuVoiceGen/text/cleaners.py
+++ /dev/null
@@ -1,475 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-import re
-from unidecode import unidecode
-import pyopenjtalk
-from jamo import h2j, j2hcj
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba, cn2an
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# Regular expression matching whitespace:
-_whitespace_re = re.compile(r'\s+')
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def lowercase(text):
- return text.lower()
-
-
-def collapse_whitespace(text):
- return re.sub(_whitespace_re, ' ', text)
-
-
-def convert_to_ascii(text):
- return unidecode(text)
-
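For orientation, here is a minimal sketch of how the simple helpers above are typically chained; it mirrors the transliteration-style cleaner in the upstream keithito/tacotron code and is an assumption, since the cleaner functions themselves are not visible in this truncated excerpt.

```python
def transliteration_cleaners_sketch(text):
    # Pipeline for non-English text that can be romanized to ASCII
    text = convert_to_ascii(text)
    text = lowercase(text)
    text = collapse_whitespace(text)
    return text

print(transliteration_cleaners_sketch("Héllo   Wörld"))  # -> "hello world"
```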
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text!='':
- text+=' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil','pau']:
- text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q')
- else:
- continue
- n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']:
- a2_next=-1
- else:
- a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument(
- "--pycmd", type=str, default="python", help="Python command"
- )
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- )
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("16-series/10-series GPUs and the P40 are forced to single precision")
- self.is_half = False
- config_file_change_fp32()
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
- print("No supported NVIDIA GPU found, using MPS for inference")
- self.device = "mps"
- self.is_half = False
- config_file_change_fp32()
- else:
- print("No supported NVIDIA GPU found, using CPU for inference")
- self.device = "cpu"
- self.is_half = False
- config_file_change_fp32()
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # configuration for 6 GB of GPU memory
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # configuration for 5 GB of GPU memory
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
diff --git a/spaces/AIConsultant/MusicGen/tests/losses/test_losses.py b/spaces/AIConsultant/MusicGen/tests/losses/test_losses.py
deleted file mode 100644
index b6681e12c453dea5aeba738ab252d1923b7e0941..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/losses/test_losses.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-
-import torch
-
-from audiocraft.losses import (
- MelSpectrogramL1Loss,
- MultiScaleMelSpectrogramLoss,
- MRSTFTLoss,
- SISNR,
- STFTLoss,
-)
-
-
-def test_mel_l1_loss():
- N, C, T = 2, 2, random.randrange(1000, 100_000)
- t1 = torch.randn(N, C, T)
- t2 = torch.randn(N, C, T)
-
- mel_l1 = MelSpectrogramL1Loss(sample_rate=22_050)
- loss = mel_l1(t1, t2)
- loss_same = mel_l1(t1, t1)
-
- assert isinstance(loss, torch.Tensor)
- assert isinstance(loss_same, torch.Tensor)
- assert loss_same.item() == 0.0
-
-
-def test_msspec_loss():
- N, C, T = 2, 2, random.randrange(1000, 100_000)
- t1 = torch.randn(N, C, T)
- t2 = torch.randn(N, C, T)
-
- msspec = MultiScaleMelSpectrogramLoss(sample_rate=22_050)
- loss = msspec(t1, t2)
- loss_same = msspec(t1, t1)
-
- assert isinstance(loss, torch.Tensor)
- assert isinstance(loss_same, torch.Tensor)
- assert loss_same.item() == 0.0
-
-
-def test_mrstft_loss():
- N, C, T = 2, 2, random.randrange(1000, 100_000)
- t1 = torch.randn(N, C, T)
- t2 = torch.randn(N, C, T)
-
- mrstft = MRSTFTLoss()
- loss = mrstft(t1, t2)
-
- assert isinstance(loss, torch.Tensor)
-
-
-def test_sisnr_loss():
- N, C, T = 2, 2, random.randrange(1000, 100_000)
- t1 = torch.randn(N, C, T)
- t2 = torch.randn(N, C, T)
-
- sisnr = SISNR()
- loss = sisnr(t1, t2)
-
- assert isinstance(loss, torch.Tensor)
-
-
-def test_stft_loss():
- N, C, T = 2, 2, random.randrange(1000, 100_000)
- t1 = torch.randn(N, C, T)
- t2 = torch.randn(N, C, T)
-
- mrstft = STFTLoss()
- loss = mrstft(t1, t2)
-
- assert isinstance(loss, torch.Tensor)
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py
deleted file mode 100644
index 57ee8d302c77cb09bd73ef803ef9e715098feafc..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py
+++ /dev/null
@@ -1,377 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-
-from audioldm.latent_diffusion.util import (
- make_ddim_sampling_parameters,
- make_ddim_timesteps,
- noise_like,
- extract_into_tensor,
-)
-import gradio as gr
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(
- self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0.0, verbose=True
- ):
- self.ddim_timesteps = make_ddim_timesteps(
- ddim_discr_method=ddim_discretize,
- num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,
- verbose=verbose,
- )
- alphas_cumprod = self.model.alphas_cumprod
- assert (
- alphas_cumprod.shape[0] == self.ddpm_num_timesteps
- ), "alphas have to be defined for each timestep"
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer("betas", to_torch(self.model.betas))
- self.register_buffer("alphas_cumprod", to_torch(alphas_cumprod))
- self.register_buffer(
- "alphas_cumprod_prev", to_torch(self.model.alphas_cumprod_prev)
- )
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer(
- "sqrt_alphas_cumprod", to_torch(np.sqrt(alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_one_minus_alphas_cumprod",
- to_torch(np.sqrt(1.0 - alphas_cumprod.cpu())),
- )
- self.register_buffer(
- "log_one_minus_alphas_cumprod", to_torch(np.log(1.0 - alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_recip_alphas_cumprod", to_torch(np.sqrt(1.0 / alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_recipm1_alphas_cumprod",
- to_torch(np.sqrt(1.0 / alphas_cumprod.cpu() - 1)),
- )
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(
- alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,
- verbose=verbose,
- )
- self.register_buffer("ddim_sigmas", ddim_sigmas)
- self.register_buffer("ddim_alphas", ddim_alphas)
- self.register_buffer("ddim_alphas_prev", ddim_alphas_prev)
- self.register_buffer("ddim_sqrt_one_minus_alphas", np.sqrt(1.0 - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev)
- / (1 - self.alphas_cumprod)
- * (1 - self.alphas_cumprod / self.alphas_cumprod_prev)
- )
- self.register_buffer(
- "ddim_sigmas_for_original_num_steps", sigmas_for_original_sampling_steps
- )
-
- @torch.no_grad()
- def sample(
- self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.0,
- mask=None,
- x0=None,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs,
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(
- f"Warning: Got {cbs} conditionings but batch-size is {batch_size}"
- )
- else:
- if conditioning.shape[0] != batch_size:
- print(
- f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}"
- )
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- samples, intermediates = self.ddim_sampling(
- conditioning,
- size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask,
- x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(
- self,
- cond,
- shape,
- x_T=None,
- ddim_use_original_steps=False,
- callback=None,
- timesteps=None,
- quantize_denoised=False,
- mask=None,
- x0=None,
- img_callback=None,
- log_every_t=100,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- ):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = (
- self.ddpm_num_timesteps
- if ddim_use_original_steps
- else self.ddim_timesteps
- )
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = (
- int(
- min(timesteps / self.ddim_timesteps.shape[0], 1)
- * self.ddim_timesteps.shape[0]
- )
- - 1
- )
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {"x_inter": [img], "pred_x0": [img]}
- time_range = (
- reversed(range(0, timesteps))
- if ddim_use_original_steps
- else np.flip(timesteps)
- )
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- # print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- # iterator = gr.Progress().tqdm(time_range, desc="DDIM Sampler", total=total_steps)
- iterator = tqdm(time_range, desc="DDIM Sampler", total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(
- x0, ts
- ) # TODO deterministic forward pass?
- img = (
- img_orig * mask + (1.0 - mask) * img
- ) # In the first sampling step, img is pure gaussian noise
-
- outs = self.p_sample_ddim(
- img,
- cond,
- ts,
- index=index,
- use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised,
- temperature=temperature,
- noise_dropout=noise_dropout,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- img, pred_x0 = outs
- if callback:
- callback(i)
- if img_callback:
- img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates["x_inter"].append(img)
- intermediates["pred_x0"].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
-
- return (
- extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0
- + extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise
- )
-
- @torch.no_grad()
- def decode(
- self,
- x_latent,
- cond,
- t_start,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- use_original_steps=False,
- ):
-
- timesteps = (
- np.arange(self.ddpm_num_timesteps)
- if use_original_steps
- else self.ddim_timesteps
- )
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- # print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- # iterator = gr.Progress().tqdm(time_range, desc="Decoding image", total=total_steps)
- iterator = tqdm(time_range, desc="Decoding image", total=total_steps)
- x_dec = x_latent
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full(
- (x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long
- )
- x_dec, _ = self.p_sample_ddim(
- x_dec,
- cond,
- ts,
- index=index,
- use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return x_dec
-
- @torch.no_grad()
- def p_sample_ddim(
- self,
- x,
- c,
- t,
- index,
- repeat_noise=False,
- use_original_steps=False,
- quantize_denoised=False,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- ):
- b, *_, device = *x.shape, x.device
-
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.0:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- # When unconditional_guidance_scale == 1: only e_t
- # When unconditional_guidance_scale == 0: only unconditional
- # When unconditional_guidance_scale > 1: extrapolate beyond the conditional prediction, strengthening the conditioning
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(
- self.model, e_t, x, t, c, **corrector_kwargs
- )
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = (
- self.model.alphas_cumprod_prev
- if use_original_steps
- else self.ddim_alphas_prev
- )
- sqrt_one_minus_alphas = (
- self.model.sqrt_one_minus_alphas_cumprod
- if use_original_steps
- else self.ddim_sqrt_one_minus_alphas
- )
- sigmas = (
- self.model.ddim_sigmas_for_original_num_steps
- if use_original_steps
- else self.ddim_sigmas
- )
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full(
- (b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device
- )
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1.0 - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.0:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise # TODO
- return x_prev, pred_x0
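A hedged usage sketch (not part of the original file): `model`, `cond`, and `uncond` are placeholders for a trained latent-diffusion model and its conditioning tensors, and the latent shape and step count are illustrative, not values defined in this file.

```python
# Placeholders, not defined here: a trained LatentDiffusion-style `model`
# plus conditional/unconditional embeddings `cond` and `uncond`.
sampler = DDIMSampler(model)

samples, intermediates = sampler.sample(
    S=50,                               # number of DDIM steps
    batch_size=1,
    shape=(8, 128, 16),                 # (C, H, W) of the latent, model-specific
    conditioning=cond,
    eta=0.0,
    unconditional_guidance_scale=3.0,   # classifier-free guidance strength
    unconditional_conditioning=uncond,
    verbose=False,
)
print(samples.shape)                    # (1, 8, 128, 16)
```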
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/cwt.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/cwt.py
deleted file mode 100644
index 1a08461b9e422aac614438e6240b7355b8e4bb2c..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/cwt.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import librosa
-import numpy as np
-from pycwt import wavelet
-from scipy.interpolate import interp1d
-
-
-def load_wav(wav_file, sr):
- wav, _ = librosa.load(wav_file, sr=sr, mono=True)
- return wav
-
-
-def convert_continuos_f0(f0):
- '''CONVERT F0 TO CONTINUOUS F0
- Args:
- f0 (ndarray): original f0 sequence with the shape (T)
- Return:
- (ndarray): continuous f0 with the shape (T)
- '''
- # get uv information as binary
- f0 = np.copy(f0)
- uv = np.float32(f0 != 0)
-
- # get start and end of f0
- if (f0 == 0).all():
- print("| all of the f0 values are 0.")
- return uv, f0
- start_f0 = f0[f0 != 0][0]
- end_f0 = f0[f0 != 0][-1]
-
- # padding start and end of f0 sequence
- start_idx = np.where(f0 == start_f0)[0][0]
- end_idx = np.where(f0 == end_f0)[0][-1]
- f0[:start_idx] = start_f0
- f0[end_idx:] = end_f0
-
- # get non-zero frame index
- nz_frames = np.where(f0 != 0)[0]
-
- # perform linear interpolation
- f = interp1d(nz_frames, f0[nz_frames])
- cont_f0 = f(np.arange(0, f0.shape[0]))
-
- return uv, cont_f0
-
-
-def get_cont_lf0(f0, frame_period=5.0):
- uv, cont_f0_lpf = convert_continuos_f0(f0)
- # cont_f0_lpf = low_pass_filter(cont_f0_lpf, int(1.0 / (frame_period * 0.001)), cutoff=20)
- cont_lf0_lpf = np.log(cont_f0_lpf)
- return uv, cont_lf0_lpf
-
-
-def get_lf0_cwt(lf0):
- '''
- input:
- signal of shape (N)
- output:
- Wavelet_lf0 of shape (N, 10), scales of shape (10,)
- '''
- mother = wavelet.MexicanHat()
- dt = 0.005
- dj = 1
- s0 = dt * 2
- J = 9
-
- Wavelet_lf0, scales, _, _, _, _ = wavelet.cwt(np.squeeze(lf0), dt, dj, s0, J, mother)
- # Wavelet.shape => (J + 1, len(lf0))
- Wavelet_lf0 = np.real(Wavelet_lf0).T
- return Wavelet_lf0, scales
-
-
-def norm_scale(Wavelet_lf0):
- Wavelet_lf0_norm = np.zeros((Wavelet_lf0.shape[0], Wavelet_lf0.shape[1]))
- mean = Wavelet_lf0.mean(0)[None, :]
- std = Wavelet_lf0.std(0)[None, :]
- Wavelet_lf0_norm = (Wavelet_lf0 - mean) / std
- return Wavelet_lf0_norm, mean, std
-
-
-def normalize_cwt_lf0(f0, mean, std):
- uv, cont_lf0_lpf = get_cont_lf0(f0)
- cont_lf0_norm = (cont_lf0_lpf - mean) / std
- Wavelet_lf0, scales = get_lf0_cwt(cont_lf0_norm)
- Wavelet_lf0_norm, _, _ = norm_scale(Wavelet_lf0)
-
- return Wavelet_lf0_norm
-
-
-def get_lf0_cwt_norm(f0s, mean, std):
- uvs = list()
- cont_lf0_lpfs = list()
- cont_lf0_lpf_norms = list()
- Wavelet_lf0s = list()
- Wavelet_lf0s_norm = list()
- scaless = list()
-
- means = list()
- stds = list()
- for f0 in f0s:
- uv, cont_lf0_lpf = get_cont_lf0(f0)
- cont_lf0_lpf_norm = (cont_lf0_lpf - mean) / std
-
- Wavelet_lf0, scales = get_lf0_cwt(cont_lf0_lpf_norm) # [560,10]
- Wavelet_lf0_norm, mean_scale, std_scale = norm_scale(Wavelet_lf0) # [560,10],[1,10],[1,10]
-
- Wavelet_lf0s_norm.append(Wavelet_lf0_norm)
- uvs.append(uv)
- cont_lf0_lpfs.append(cont_lf0_lpf)
- cont_lf0_lpf_norms.append(cont_lf0_lpf_norm)
- Wavelet_lf0s.append(Wavelet_lf0)
- scaless.append(scales)
- means.append(mean_scale)
- stds.append(std_scale)
-
- return Wavelet_lf0s_norm, scaless, means, stds
-
-
-def inverse_cwt_torch(Wavelet_lf0, scales):
- import torch
- b = ((torch.arange(0, len(scales)).float().to(Wavelet_lf0.device)[None, None, :] + 1 + 2.5) ** (-2.5))
- lf0_rec = Wavelet_lf0 * b
- lf0_rec_sum = lf0_rec.sum(-1)
- lf0_rec_sum = (lf0_rec_sum - lf0_rec_sum.mean(-1, keepdim=True)) / lf0_rec_sum.std(-1, keepdim=True)
- return lf0_rec_sum
-
-
-def inverse_cwt(Wavelet_lf0, scales):
- b = ((np.arange(0, len(scales))[None, None, :] + 1 + 2.5) ** (-2.5))
- lf0_rec = Wavelet_lf0 * b
- lf0_rec_sum = lf0_rec.sum(-1)
- lf0_rec_sum = (lf0_rec_sum - lf0_rec_sum.mean(-1, keepdims=True)) / lf0_rec_sum.std(-1, keepdims=True)
- return lf0_rec_sum
-
-
-def cwt2f0(cwt_spec, mean, std, cwt_scales):
- assert len(mean.shape) == 1 and len(std.shape) == 1 and len(cwt_spec.shape) == 3
- import torch
- if isinstance(cwt_spec, torch.Tensor):
- f0 = inverse_cwt_torch(cwt_spec, cwt_scales)
- f0 = f0 * std[:, None] + mean[:, None]
- f0 = f0.exp() # [B, T]
- else:
- f0 = inverse_cwt(cwt_spec, cwt_scales)
- f0 = f0 * std[:, None] + mean[:, None]
- f0 = np.exp(f0) # [B, T]
- return f0
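A small end-to-end sketch (not part of the original file) of the f0 → CWT → f0 round trip using the helpers above. It assumes the functions are importable (with pycwt installed); the utterance-level log-f0 mean/std normalization is an assumption about how the surrounding pipeline feeds `cwt2f0`.

```python
import numpy as np

# Hypothetical f0 contour in Hz with unvoiced frames set to 0
f0 = np.zeros(300)
f0[30:270] = 220.0 + 20.0 * np.sin(np.linspace(0.0, 6.28, 240))

uv, cont_lf0 = get_cont_lf0(f0)                    # continuous log-f0
logf0_mean, logf0_std = cont_lf0.mean(), cont_lf0.std()
cont_lf0_norm = (cont_lf0 - logf0_mean) / logf0_std

cwt_spec, scales = get_lf0_cwt(cont_lf0_norm)      # (300, 10), (10,)

# Reconstruct f0 from a batched CWT spectrogram of shape (B, T, 10)
f0_rec = cwt2f0(cwt_spec[None],
                np.array([logf0_mean]),            # per-utterance log-f0 mean
                np.array([logf0_std]),             # per-utterance log-f0 std
                scales)
print(f0_rec.shape)                                # (1, 300)
```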
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/runs/binarize.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/runs/binarize.py
deleted file mode 100644
index 81cbf21dd50ece1302a9c6a052fc0443b6b5c621..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/runs/binarize.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import utils.commons.single_thread_env # NOQA
-from text_to_speech.utils.commons.hparams import hparams, set_hparams
-import importlib
-
-
-def binarize():
- binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- print("| Binarizer: ", binarizer_cls)
- binarizer_cls().process()
-
-
-if __name__ == '__main__':
- set_hparams()
- binarize()
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/hparams.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/hparams.py
deleted file mode 100644
index 70abf2513ad50352b29086a9d455f6fb1bec33fc..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/hparams.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import argparse
-import os
-import yaml
-
-from text_to_speech.utils.os_utils import remove_file
-
-global_print_hparams = True
-hparams = {}
-
-
-class Args:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- self.__setattr__(k, v)
-
-
-def override_config(old_config: dict, new_config: dict):
- for k, v in new_config.items():
- if isinstance(v, dict) and k in old_config:
- override_config(old_config[k], new_config[k])
- else:
- old_config[k] = v
-
-
-def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True):
- if config == '' and exp_name == '':
- parser = argparse.ArgumentParser(description='')
- parser.add_argument('--config', type=str, default='',
- help='location of the data corpus')
- parser.add_argument('--exp_name', type=str, default='', help='exp_name')
- parser.add_argument('-hp', '--hparams', type=str, default='',
- help='location of the data corpus')
- parser.add_argument('--infer', action='store_true', help='infer')
- parser.add_argument('--validate', action='store_true', help='validate')
- parser.add_argument('--reset', action='store_true', help='reset hparams')
- parser.add_argument('--remove', action='store_true', help='remove old ckpt')
- parser.add_argument('--debug', action='store_true', help='debug')
- args, unknown = parser.parse_known_args()
- print("| Unknown hparams: ", unknown)
- else:
- args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
- infer=False, validate=False, reset=False, debug=False, remove=False)
- global hparams
- assert args.config != '' or args.exp_name != ''
- if args.config != '':
- assert os.path.exists(args.config)
-
- config_chains = []
- loaded_config = set()
-
- def load_config(config_fn):
- # deep first inheritance and avoid the second visit of one node
- if not os.path.exists(config_fn):
- return {}
- with open(config_fn) as f:
- hparams_ = yaml.safe_load(f)
- loaded_config.add(config_fn)
- if 'base_config' in hparams_:
- ret_hparams = {}
- if not isinstance(hparams_['base_config'], list):
- hparams_['base_config'] = [hparams_['base_config']]
- for c in hparams_['base_config']:
- if c.startswith('.'):
- c = f'{os.path.dirname(config_fn)}/{c}'
- c = os.path.normpath(c)
- if c not in loaded_config:
- override_config(ret_hparams, load_config(c))
- override_config(ret_hparams, hparams_)
- else:
- ret_hparams = hparams_
- config_chains.append(config_fn)
- return ret_hparams
-
- saved_hparams = {}
- args_work_dir = ''
- if args.exp_name != '':
- args_work_dir = f'checkpoints/{args.exp_name}'
- ckpt_config_path = f'{args_work_dir}/config.yaml'
- if os.path.exists(ckpt_config_path):
- with open(ckpt_config_path) as f:
- saved_hparams_ = yaml.safe_load(f)
- if saved_hparams_ is not None:
- saved_hparams.update(saved_hparams_)
- hparams_ = {}
- if args.config != '':
- hparams_.update(load_config(args.config))
- if not args.reset:
- hparams_.update(saved_hparams)
- hparams_['work_dir'] = args_work_dir
-
- # Support config overriding in command line. Support list type config overriding.
- # Examples: --hparams="a=1,b.c=2,d=[1 1 1]"
- if args.hparams != "":
- for new_hparam in args.hparams.split(","):
- k, v = new_hparam.split("=")
- v = v.strip("\'\" ")
- config_node = hparams_
- for k_ in k.split(".")[:-1]:
- config_node = config_node[k_]
- k = k.split(".")[-1]
- if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]:
- if type(config_node[k]) == list:
- v = v.replace(" ", ",")
- config_node[k] = eval(v)
- else:
- config_node[k] = type(config_node[k])(v)
- if args_work_dir != '' and args.remove:
- answer = input("REMOVE old checkpoint? Y/N [Default: N]: ")
- if answer.lower() == "y":
- remove_file(args_work_dir)
- if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
- os.makedirs(hparams_['work_dir'], exist_ok=True)
- with open(ckpt_config_path, 'w') as f:
- yaml.safe_dump(hparams_, f)
-
- hparams_['infer'] = args.infer
- hparams_['debug'] = args.debug
- hparams_['validate'] = args.validate
- hparams_['exp_name'] = args.exp_name
- global global_print_hparams
- if global_hparams:
- hparams.clear()
- hparams.update(hparams_)
- if print_hparams and global_print_hparams and global_hparams:
- print('| Hparams chains: ', config_chains)
- print('| Hparams: ')
- for i, (k, v) in enumerate(sorted(hparams_.items())):
- print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
- print("")
- global_print_hparams = False
- return hparams_
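A hedged usage sketch (not part of the original file): the YAML path is a placeholder, and any key overridden via `hparams_str` must already exist in the loaded config, because the override logic above casts the new value to the type of the existing entry.

```python
# Placeholder config path; set_hparams asserts that the file exists.
hp = set_hparams(
    config='egs/example_config.yaml',
    exp_name='',                              # empty: no checkpoints/<exp_name> dir is touched
    hparams_str='max_epochs=100,lr=0.0005',   # same syntax as the --hparams CLI flag
    print_hparams=False,
    global_hparams=False,                     # return a dict without mutating the global hparams
)
print(hp.get('lr'), hp.get('work_dir'))
```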
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/AISuperheroes/09SL-AI-Image-Music-Video-AIUIUX/Article.md b/spaces/AISuperheroes/09SL-AI-Image-Music-Video-AIUIUX/Article.md
deleted file mode 100644
index c7f042e4c9c0f401731f009842a325e2d1386bf5..0000000000000000000000000000000000000000
--- a/spaces/AISuperheroes/09SL-AI-Image-Music-Video-AIUIUX/Article.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-# Image Generation for Art, Marketing, Ideation, Design, and Use in Business
-
A number of AI pipeline strategies have emerged on the open market that allow you to generate images from a combination of image prompts and word prompts. This brief analysis gives an idea of the prompting capabilities, as well as the image-rendering techniques these strategies use to turn a human description of a scene, in images and text, into art.
-
First, a top-five list of state-of-the-art generators, both free and paid, is worth considering.
-
1) Midjourney - a Discord-based chatbot AI that accepts /imagine prompts and can generate multiple images at a time. It is best at parallel creation and high accuracy, including photoreal results.
2) Artbreeder - a multi-capability tool that now features a Collager to help start an image composition. By far the most innovative approach, it does a great job of combining the right partial elements in a scene.
3) Dreamstudio - a Hugging Face-derived art program in beta that uses Stable Diffusion to create highly accurate art and images.
4) Nightcafe - a credit-based AI creation app that can generate video "dives" into an AI art piece, producing some of the best video experiences.
5) RunwayML - a quintessential tool for processing morphing audio and video tracks that rivals most high-end video editing tools.
-
-These five tools make up some of the best cloud-based AI pipeline programs and allow anyone to easily begin building a portfolio of art.
-
-The prompting capabilities often involve having a set of text-based prompts to get started. Most tools also feature a starter image, which could be an example of what you would like to create.
-
-URL Links:
-1) Collager: https://www.artbreeder.com/beta/collage
-2) NightCafe: https://creator.nightcafe.studio/explore
-3) Midjourney: https://www.midjourney.com/app/users/779773261440614430/
-4) Dreamstudio: https://beta.dreamstudio.ai/dream
-5) RunwayML: https://app.runwayml.com/
-
-## Getting Started and Organizing Your AI Pipeline and Process
-
-Any great strategy has a number of steps that combine all the capabilities at your disposal. It is useful to note how you can easily fit these together into a process that works for you.
-
-The techniques worth noting are listed below. Considering how you will use them will make your pipeline easier and more automated, allowing you to spend the majority of your time curating what you have made and ideating what you want to create next.
-
-1) Source materials: Since prompting requires text, and text examples can quickly help you compose good input, it is worth considering and documenting some effective prompts. Nightcafe, with its integration into email, sends you a copy of your creation plus the prompting text, so one option is to use your email account to keep a record of which prompts work for which outputs.
-2) Source materials: Discord, since it is a public chat format, allows you to easily see what others are using for prompts in bulk. There are a number of chat channels designed for people new to the platform, and you can often copy and paste when you see very effective prompts with the material you are looking for.
-3) Source materials: Collager is unique in its ability to add additive parts and then dial in the percentage of AI you would like applied. This allows you to add a few image elements which help start out your generation.
-4) Source materials: Since images and prompts are going to be your mainstay inputs, it is worth considering an open standard for storing and retrieving them from anywhere. Github is a good place, since markdown can hold text in table or list format and can reference uploaded images. This is also a good form for portability, since you can later fork and download your repository with a few clicks from anywhere.
-5) Source materials: Google Drive is integrated into the Artbreeder Collager workflow, which allows you to easily expand your work and even compose albums of the pieces you like into Google Photos albums. The portfolios you save on different sites offer different degrees of ease when aggregating your collections. Collager, for instance, allows right-click save for instant saving of your creation. Dreamstudio features a history. Midjourney features a profile site for you to store and review creations, even triggering Upscales, which are important to use to get the highest-resolution output for your creations.
-
-## Social Media integration
-
-Depending on your target "safe for work" exports of your work, it is sometimes important to know which social media outlets you can integrate. Cloud-based interactions are the key to successful audiences if you want to scale and share your process with others.
-
-The key social media outlets supported by these tools are listed below as a sorted link list, starting with public open-source options first:
-
-1) Github - Github is open at most companies and allows creation of a free space to share your content.
-2) LinkedIn - LinkedIn is acceptable use at nearly every company.
-3) Twitter - Twitter is supported as a social media outlet at most companies, yet it can also be used with security restrictions which might limit posting but allow read access.
-4) Facebook - Meta's Facebook is a good outlet since it allows creation of large folios of your images along with stories. This venue, however, is locked down at many organizations.
-5) Instagram - Instagram is supported as an output channel by many tools, yet it has decreased in popularity due to the high frequency of ads and pay-for-likes models. While it can still be one of the best places for domain-specific arrangements of images, it is likely locked down in most secure organizations.
-6) Youtube - For video uploads with automated captioning and long-term storage of short- and long-form video, this is essential for any creation you compose as video. It is also useful to review and compose playlists of videos here that speed up your learning. Spend some time at Youtube university and keep a record of your keyword searches there, along with your playlists, to accelerate learning.
-7) Gmail - With the ability to move email in and out, it is useful to create and wrap up details within email. Most email policies come with a content limitation (for example, no files larger than 25MB). For this reason, get used to creating project wrap-up archives with WinZip or other compression software. With the convenience of keyword searching you can usually use this as a base.
-8) Last, a worthy mention is Huggingface.com. Like Github, as you become more sophisticated in your public open-source capabilities, HuggingFace lets you wrap things up using one of three software development kits: Gradio, Streamlit, and HTML5, each with unique AI and UI integration components and features. If you want to create your own AI pipelines, it also has the open-source code and models ready to go to help you on your journey (a minimal Gradio sketch follows this list).
-
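
As a minimal illustration of the Gradio SDK mentioned in point 8 above, here is a sketch of a one-function Space. It is a hypothetical example: the `caption_image` function and its output are placeholders, not code from any specific Space.

```python
import gradio as gr

def caption_image(image):
    # Placeholder: a real Space would run an AI pipeline (captioning, diffusion, etc.) here.
    return "A generated caption would appear here."

demo = gr.Interface(fn=caption_image, inputs=gr.Image(type="pil"), outputs="text")

if __name__ == "__main__":
    demo.launch()
```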
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/vq.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/vq.py
deleted file mode 100644
index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/vq.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-
-import torch
-
-from .base import BaseQuantizer, QuantizedResult
-from .core_vq import ResidualVectorQuantization
-
-
-class ResidualVectorQuantizer(BaseQuantizer):
- """Residual Vector Quantizer.
-
- Args:
- dimension (int): Dimension of the codebooks.
- n_q (int): Number of residual vector quantizers used.
- q_dropout (bool): Random quantizer drop out at train time.
- bins (int): Codebook size.
- decay (float): Decay for exponential moving average over the codebooks.
- kmeans_init (bool): Whether to use kmeans to initialize the codebooks.
- kmeans_iters (int): Number of iterations used for kmeans initialization.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
-            a randomly selected vector from the current batch.
- orthogonal_reg_weight (float): Orthogonal regularization weights.
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes.
-        orthogonal_reg_max_codes (optional int): Maximum number of codes to consider
-            for orthogonal regularization.
- """
- def __init__(
- self,
- dimension: int = 256,
- n_q: int = 8,
- q_dropout: bool = False,
- bins: int = 1024,
- decay: float = 0.99,
- kmeans_init: bool = True,
- kmeans_iters: int = 10,
- threshold_ema_dead_code: int = 2,
- orthogonal_reg_weight: float = 0.0,
- orthogonal_reg_active_codes_only: bool = False,
- orthogonal_reg_max_codes: tp.Optional[int] = None,
- ):
- super().__init__()
- self.max_n_q = n_q
- self.n_q = n_q
- self.q_dropout = q_dropout
- self.dimension = dimension
- self.bins = bins
- self.decay = decay
- self.kmeans_init = kmeans_init
- self.kmeans_iters = kmeans_iters
- self.threshold_ema_dead_code = threshold_ema_dead_code
- self.orthogonal_reg_weight = orthogonal_reg_weight
- self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only
- self.orthogonal_reg_max_codes = orthogonal_reg_max_codes
- self.vq = ResidualVectorQuantization(
- dim=self.dimension,
- codebook_size=self.bins,
- num_quantizers=self.n_q,
- decay=self.decay,
- kmeans_init=self.kmeans_init,
- kmeans_iters=self.kmeans_iters,
- threshold_ema_dead_code=self.threshold_ema_dead_code,
- orthogonal_reg_weight=self.orthogonal_reg_weight,
- orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only,
- orthogonal_reg_max_codes=self.orthogonal_reg_max_codes,
- channels_last=False
- )
-
- def forward(self, x: torch.Tensor, frame_rate: int):
- n_q = self.n_q
- if self.training and self.q_dropout:
- n_q = int(torch.randint(1, self.n_q + 1, (1,)).item())
- bw_per_q = math.log2(self.bins) * frame_rate / 1000
- quantized, codes, commit_loss = self.vq(x, n_q=n_q)
- codes = codes.transpose(0, 1)
- # codes is [B, K, T], with T frames, K nb of codebooks.
- bw = torch.tensor(n_q * bw_per_q).to(x)
- return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss))
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
- """Encode a given input tensor with the specified frame rate at the given bandwidth.
- The RVQ encode method sets the appropriate number of quantizer to use
- and returns indices for each quantizer.
- """
- n_q = self.n_q
- codes = self.vq.encode(x, n_q=n_q)
- codes = codes.transpose(0, 1)
- # codes is [B, K, T], with T frames, K nb of codebooks.
- return codes
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation.
- """
- # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T].
- codes = codes.transpose(0, 1)
- quantized = self.vq.decode(codes)
- return quantized
-
- @property
- def total_codebooks(self):
- return self.max_n_q
-
- @property
- def num_codebooks(self):
- return self.n_q
-
- def set_num_codebooks(self, n: int):
- assert n > 0 and n <= self.max_n_q
- self.n_q = n
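
To make the interface above concrete, here is a short usage sketch. It is an editorial addition, not part of the original file; the import path and tensor sizes are assumptions based on the class definition above.

```python
import torch
from audiocraft.quantization.vq import ResidualVectorQuantizer  # assumed import path

rvq = ResidualVectorQuantizer(dimension=256, n_q=8, bins=1024)
x = torch.randn(2, 256, 50)      # [batch, dimension, frames]

result = rvq(x, frame_rate=50)   # QuantizedResult with quantized signal, codes, bandwidth, penalty
codes = rvq.encode(x)            # integer codes of shape [B, K, T]
recon = rvq.decode(codes)        # reconstruction of shape [B, dimension, T]
```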
diff --git a/spaces/Acapellas/Extract_Vocals_Instrumentals/README.md b/spaces/Acapellas/Extract_Vocals_Instrumentals/README.md
deleted file mode 100644
index 4ed1cf8e9dad3869067d904679b233f368d02924..0000000000000000000000000000000000000000
--- a/spaces/Acapellas/Extract_Vocals_Instrumentals/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Extract Acapellas & Instrumentals
-emoji: null
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: False
-duplicated_from: Thafx/Demucs_v4_2s_HT
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Liaobots.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Liaobots.py
deleted file mode 100644
index 2ab96ce349f641d3e4afaf862169f27d749ca62b..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Liaobots.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from __future__ import annotations
-
-import uuid
-
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-models = {
- "gpt-4": {
- "id": "gpt-4",
- "name": "GPT-4",
- "maxLength": 24000,
- "tokenLimit": 8000,
- },
- "gpt-3.5-turbo": {
- "id": "gpt-3.5-turbo",
- "name": "GPT-3.5",
- "maxLength": 12000,
- "tokenLimit": 4000,
- },
- "gpt-3.5-turbo-16k": {
- "id": "gpt-3.5-turbo-16k",
- "name": "GPT-3.5-16k",
- "maxLength": 48000,
- "tokenLimit": 16000,
- },
-}
-
-class Liaobots(AsyncGeneratorProvider):
- url = "https://liaobots.site"
- working = True
- supports_gpt_35_turbo = True
- supports_gpt_4 = True
- _auth_code = None
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- auth: str = None,
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
- model = model if model in models else "gpt-3.5-turbo"
- headers = {
- "authority": "liaobots.com",
- "content-type": "application/json",
- "origin": cls.url,
- "referer": cls.url + "/",
- "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
- }
- async with ClientSession(
- headers=headers
- ) as session:
- cls._auth_code = auth if isinstance(auth, str) else cls._auth_code
- if not cls._auth_code:
- async with session.post(
- "https://liaobots.work/recaptcha/api/login",
- proxy=proxy,
- data={"token": "abcdefghijklmnopqrst"},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- async with session.post(
- "https://liaobots.work/api/user",
- proxy=proxy,
- json={"authcode": ""},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- cls._auth_code = (await response.json(content_type=None))["authCode"]
- data = {
- "conversationId": str(uuid.uuid4()),
- "model": models[model],
- "messages": messages,
- "key": "",
- "prompt": "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully.",
- }
- async with session.post(
- "https://liaobots.work/api/chat",
- proxy=proxy,
- json=data,
- headers={"x-auth-code": cls._auth_code},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- async for stream in response.content.iter_any():
- if stream:
- yield stream.decode()
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("proxy", "str"),
- ("auth", "str"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/V50.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/V50.py
deleted file mode 100644
index 9a8b032c3949d493de81ff49b2e24ab33a2004f4..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/V50.py
+++ /dev/null
@@ -1,67 +0,0 @@
-from __future__ import annotations
-
-import uuid
-
-import requests
-
-from ...typing import Any, CreateResult
-from ..base_provider import BaseProvider
-
-
-class V50(BaseProvider):
- url = 'https://p5.v50.ltd'
- supports_gpt_35_turbo = True
- supports_stream = False
- needs_auth = False
- working = False
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool, **kwargs: Any) -> CreateResult:
-
- conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
- conversation += "\nassistant: "
-
- payload = {
- "prompt" : conversation,
- "options" : {},
- "systemMessage" : ".",
- "temperature" : kwargs.get("temperature", 0.4),
- "top_p" : kwargs.get("top_p", 0.4),
- "model" : model,
- "user" : str(uuid.uuid4())
- }
-
- headers = {
- 'authority' : 'p5.v50.ltd',
- 'accept' : 'application/json, text/plain, */*',
- 'accept-language' : 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7',
- 'content-type' : 'application/json',
- 'origin' : 'https://p5.v50.ltd',
- 'referer' : 'https://p5.v50.ltd/',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest' : 'empty',
- 'sec-fetch-mode' : 'cors',
- 'sec-fetch-site' : 'same-origin',
- 'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36'
- }
- response = requests.post("https://p5.v50.ltd/api/chat-process",
- json=payload, headers=headers, proxies=kwargs['proxy'] if 'proxy' in kwargs else {})
-
- if "https://fk1.v50.ltd" not in response.text:
- yield response.text
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ("top_p", "int"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/__init__.py
deleted file mode 100644
index 1c6ea9f0eaae3902de0a308e58239a1b1e86ceb8..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from agentverse.registry import Registry
-order_registry = Registry(name="OrderRegistry")
-
-from .base import BaseOrder
-from .sequential import SequentialOrder
-from .random import RandomOrder
-from .concurrent import ConcurrentOrder
-from .classroom import ClassroomOrder
-from .prisoner import PrisonerOrder
-from .sde_team import SdeTeamOrder
-from .sde_team_given_tests import SdeTeamGivenTestsOrder
diff --git a/spaces/AlexZou/Deploy_Restoration/utils/utils_image.py b/spaces/AlexZou/Deploy_Restoration/utils/utils_image.py
deleted file mode 100644
index 6f0361e8a0b57d8935b75cd61aa1ded4848e1594..0000000000000000000000000000000000000000
--- a/spaces/AlexZou/Deploy_Restoration/utils/utils_image.py
+++ /dev/null
@@ -1,778 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-# import torchvision.transforms as transforms
-import matplotlib.pyplot as plt
-
-'''
-modified by Kai Zhang (github: https://github.com/cszn)
-03/03/2019
-https://github.com/twhui/SRGAN-pyTorch
-https://github.com/xinntao/BasicSR
-'''
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-'''
-# =======================================
-# get image paths of files
-# =======================================
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# =======================================
-# makedir
-# =======================================
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# =======================================
-# read image from path
-# Note: opencv is fast
-# but read BGR numpy image
-# =======================================
-'''
-
-
-# ----------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# ----------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-# ----------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# ----------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-'''
-# =======================================
-# numpy(single) <---> numpy(uint)
-# numpy(single) <---> tensor
-# numpy(uint) <---> tensor
-# =======================================
-'''
-
-
-# --------------------------------
-# numpy(single) <---> numpy(uint)
-# --------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def uint2single1(img):
-
- return np.float32(np.squeeze(img)/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
-    return np.uint16((img.clip(0, 1)*65535.).round())
-
-
-# --------------------------------
-# numpy(uint) <---> tensor
-# uint (HxWxn_channels (RGB) or G)
-# --------------------------------
-
-
-# convert uint (HxWxn_channels) to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint (HxWxn_channels) to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------
-# numpy(single) <---> tensor
-# single (HxWxn_channels (RGB) or G)
-# --------------------------------
-
-
-# convert single (HxWxn_channels) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert single (HxWxn_channels) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-def tensor2single3(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
- # Important. Unlike matlab, numpy.uint8() WILL NOT round by default.
- return img_np.astype(out_type)
-
-
-'''
-# =======================================
-# image processing process on numpy image
-# augment(img_list, hflip=True, rot=True):
-# =======================================
-'''
-
-
-def augment_img(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_img_tensor(img, mode=0):
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
-
-'''
-# =======================================
-# image processing process on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# modcrop(img_in, scale):
-# =======================================
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
-
-'''
-# =======================================
-# metric, PSNR and SSIM
-# =======================================
-'''
-
-
-# ----------
-# PSNR
-# ----------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
-# ----------
-# SSIM
-# ----------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
-                ssims.append(ssim(img1[..., i], img2[..., i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-'''
-# =======================================
-# pytorch version of matlab imresize
-# =======================================
-'''
-
-
-# matlab 'imresize' function, now only support 'bicubic'
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
- # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------
-# imresize for tensor image
-# --------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------
-# imresize for numpy image
-# --------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- img = imread_uint('test.bmp',3)
diff --git a/spaces/AllAideas/SegmentacionVideo/utils/constants.py b/spaces/AllAideas/SegmentacionVideo/utils/constants.py
deleted file mode 100644
index a7f7e2a32bef3b18bfab03aa22c7331ef206d818..0000000000000000000000000000000000000000
--- a/spaces/AllAideas/SegmentacionVideo/utils/constants.py
+++ /dev/null
@@ -1,4 +0,0 @@
-MAX_SEQ_LENGTH = 20
-NUM_FEATURES = 1024
-IMG_SIZE = 128
-CLASS_VOCAB = ['CricketShot', 'PlayingCello', 'Punch', 'ShavingBeard', 'TennisSwing']
\ No newline at end of file
diff --git a/spaces/Amrrs/openai-whisper-live-transcribe/README.md b/spaces/Amrrs/openai-whisper-live-transcribe/README.md
deleted file mode 100644
index 402a0522d7bbbaad3d63ed1d4c0a07c40a923fd1..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/openai-whisper-live-transcribe/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Openai Whisper Live Transcribe
-emoji: 🎙
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/cityscapes_detection.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/cityscapes_detection.py
deleted file mode 100644
index 156aca02588a96a4e279de2e647864b0739e476d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/cityscapes_detection.py
+++ /dev/null
@@ -1,55 +0,0 @@
-dataset_type = 'CityscapesDataset'
-data_root = 'data/cityscapes/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize', img_scale=[(2048, 800), (2048, 1024)], keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 1024),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=1,
- workers_per_gpu=2,
- train=dict(
- type='RepeatDataset',
- times=8,
- dataset=dict(
- type=dataset_type,
- ann_file=data_root +
- 'annotations/instancesonly_filtered_gtFine_train.json',
- img_prefix=data_root + 'leftImg8bit/train/',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- ann_file=data_root +
- 'annotations/instancesonly_filtered_gtFine_val.json',
- img_prefix=data_root + 'leftImg8bit/val/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root +
- 'annotations/instancesonly_filtered_gtFine_test.json',
- img_prefix=data_root + 'leftImg8bit/test/',
- pipeline=test_pipeline))
-evaluation = dict(interval=1, metric='bbox')
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py
deleted file mode 100644
index 66666517ad6c7a8427d59cb3efaf33712ef7ed83..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './cascade_rcnn_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index bbcd29ccea8dcf9f67f1cd198dacd5dab380b265..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/ccnet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py
deleted file mode 100644
index bf39d2f12b719b1c91e38bef71f0f5232543b0dc..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnet101_v1c',
- backbone=dict(
- depth=101,
- dilations=(1, 1, 1, 2),
- strides=(1, 2, 2, 1),
- multi_grid=(1, 2, 4)),
- decode_head=dict(
- dilations=(1, 6, 12, 18),
- sampler=dict(type='OHEMPixelSampler', min_kept=100000)))
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/model-card.md b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/model-card.md
deleted file mode 100644
index 2d22e25bea89fdbccdaa2809fbeb83e0a7cfaa07..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/model-card.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# Model Card: CLIP
-
-Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
-
-## Model Details
-
-The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
-
-### Model Date
-
-January 2021
-
-### Model Type
-
-The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
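
For intuition, the sketch below shows the symmetric contrastive objective described above, in the spirit of the pseudocode in the CLIP paper. It is illustrative only; the tensor names and fixed temperature are assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # image_features, text_features: [N, D] embeddings for N paired (image, text) examples.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    logits = image_features @ text_features.t() / temperature       # [N, N] cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)    # matching pairs on the diagonal
    loss_images = F.cross_entropy(logits, targets)      # image -> text direction
    loss_texts = F.cross_entropy(logits.t(), targets)   # text -> image direction
    return (loss_images + loss_texts) / 2
```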
-
-### Model Versions
-
-Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.
-
-As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models.
-
-Please see the paper linked below for further details about their specification.
-
-### Documents
-
-- [Blog Post](https://openai.com/blog/clip/)
-- [CLIP Paper](https://arxiv.org/abs/2103.00020)
-
-
-
-## Model Use
-
-### Intended Use
-
-The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
-
-#### Primary intended uses
-
-The primary intended users of these models are AI researchers.
-
-We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
-
-### Out-of-Scope Use Cases
-
-**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
-
-Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
-
-Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
-
-
-
-## Data
-
-The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
-
-### Data Mission Statement
-
-Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
-
-
-
-## Performance and Limitations
-
-### Performance
-
-We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
-
-- Food101
-- CIFAR10
-- CIFAR100
-- Birdsnap
-- SUN397
-- Stanford Cars
-- FGVC Aircraft
-- VOC2007
-- DTD
-- Oxford-IIIT Pet dataset
-- Caltech101
-- Flowers102
-- MNIST
-- SVHN
-- IIIT5K
-- Hateful Memes
-- SST-2
-- UCF101
-- Kinetics700
-- Country211
-- CLEVR Counting
-- KITTI Distance
-- STL-10
-- RareAct
-- Flickr30
-- MSCOCO
-- ImageNet
-- ImageNet-A
-- ImageNet-R
-- ImageNet Sketch
-- ObjectNet (ImageNet Overlap)
-- Youtube-BB
-- ImageNet-Vid
-
-## Limitations
-
-CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
-
-### Bias and Fairness
-
-We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
-
-We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
-
-
-
-## Feedback
-
-### Where to send questions or comments about the model
-
-Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/evaluation.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/evaluation.py
deleted file mode 100644
index 2500225bbbd87b797cbee732feff2249b6af3db4..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/evaluation.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import os
-import glob
-import shutil
-import lpips
-import numpy as np
-import argparse
-from PIL import Image
-from skimage.metrics import structural_similarity as ssim
-from skimage.metrics import peak_signal_noise_ratio as psnr
-from dataloader.image_folder import make_dataset
-from util import util
-import torch
-
-parser = argparse.ArgumentParser(description='Image quality evaluations on the dataset')
-parser.add_argument('--gt_path', type=str, default='../results/', help='path to original gt data')
-parser.add_argument('--g_path', type=str, default='../results.', help='path to the generated data')
-parser.add_argument('--save_path', type=str, default=None, help='path to save the best results')
-parser.add_argument('--center', action='store_true', help='only calculate the center masked regions for the image quality')
-parser.add_argument('--num_test', type=int, default=0, help='how many examples to load for testing')
-
-args = parser.parse_args()
-lpips_alex = lpips.LPIPS(net='alex')
-
-
-def calculate_score(img_gt, img_test):
- """
- function to calculate the image quality score
- :param img_gt: original image
- :param img_test: generated image
-    :return: l1 loss, ssim, psnr, lpips
- """
-
- l1loss = np.mean(np.abs(img_gt-img_test))
-
- psnr_score = psnr(img_gt, img_test, data_range=1)
-
- ssim_score = ssim(img_gt, img_test, multichannel=True, data_range=1, win_size=11)
-
- lpips_dis = lpips_alex(torch.from_numpy(img_gt).permute(2, 0, 1), torch.from_numpy(img_test).permute(2, 0, 1), normalize=True)
-
- return l1loss, ssim_score, psnr_score, lpips_dis.data.numpy().item()
-
-
-if __name__ == '__main__':
- gt_paths, gt_size = make_dataset(args.gt_path)
- g_paths, g_size = make_dataset(args.g_path)
-
- l1losses = []
- ssims = []
- psnrs = []
- lpipses = []
-
- size = args.num_test if args.num_test > 0 else gt_size
-
- for i in range(size):
- gt_img = Image.open(gt_paths[i]).resize([256, 256]).convert('RGB')
- gt_numpy = np.array(gt_img).astype(np.float32) / 255.0
- if args.center:
- gt_numpy = gt_numpy[64:192, 64:192, :]
-
- l1loss_sample = 1000
- ssim_sample = 0
- psnr_sample = 0
- lpips_sample = 1000
-
- name = gt_paths[i].split('/')[-1].split(".")[0] + "*"
- g_paths = sorted(glob.glob(os.path.join(args.g_path, name)))
- num_files = len(g_paths)
-
- for j in range(num_files):
- index = j
- try:
- g_img = Image.open(g_paths[j]).resize([256, 256]).convert('RGB')
- g_numpy = np.array(g_img).astype(np.float32) / 255.0
- if args.center:
- g_numpy = g_numpy[64:192, 64:192, :]
- l1loss, ssim_score, psnr_score, lpips_score = calculate_score(gt_numpy, g_numpy)
- if l1loss - ssim_score - psnr_score + lpips_score < l1loss_sample - ssim_sample - psnr_sample + lpips_sample:
- l1loss_sample, ssim_sample, psnr_sample, lpips_sample = l1loss, ssim_score, psnr_score, lpips_score
- best_index = index
- except Exception: # skip generated images that fail to load or score
- print(g_paths[index])
-
- if l1loss_sample != 1000 and ssim_sample !=0 and psnr_sample != 0:
- print(g_paths[best_index])
- print(l1loss_sample, ssim_sample, psnr_sample, lpips_sample)
- l1losses.append(l1loss_sample)
- ssims.append(ssim_sample)
- psnrs.append(psnr_sample)
- lpipses.append(lpips_sample)
-
- if args.save_path is not None:
- util.mkdir(args.save_path)
- shutil.copy(g_paths[best_index], args.save_path)
-
- print('{:>10},{:>10},{:>10},{:>10}'.format('l1loss', 'SSIM', 'PSNR', 'LPIPS'))
- print('{:10.4f},{:10.4f},{:10.4f},{:10.4f}'.format(np.mean(l1losses), np.mean(ssims), np.mean(psnrs), np.mean(lpipses)))
- print('{:10.4f},{:10.4f},{:10.4f},{:10.4f}'.format(np.var(l1losses), np.var(ssims), np.var(psnrs), np.var(lpipses)))
\ No newline at end of file
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/stylegan_ops/fused_bias_act.cpp b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/stylegan_ops/fused_bias_act.cpp
deleted file mode 100644
index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/stylegan_ops/fused_bias_act.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale) {
- CHECK_CUDA(input);
- CHECK_CUDA(bias);
-
- return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/start_streamlit.sh b/spaces/AnthonyTruchetPoC/persistent-docker/start_streamlit.sh
deleted file mode 100644
index c7eb13e590210716ae04d63ba139aa9f66d12eab..0000000000000000000000000000000000000000
--- a/spaces/AnthonyTruchetPoC/persistent-docker/start_streamlit.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/bash
-set -e
-
-mkdir -p "$APP_DATA" "$HF_HOME"
-
-streamlit run \
- --server.enableXsrfProtection=true \
- --server.fileWatcherType=auto \
- $APP_CODE/apps/streamlit_demo.py
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/legacy/resolver.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/legacy/resolver.py
deleted file mode 100644
index b17b7e4530b185a4011f4dc3211ddedd6d6587aa..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/legacy/resolver.py
+++ /dev/null
@@ -1,600 +0,0 @@
-"""Dependency Resolution
-
-The dependency resolution in pip is performed as follows:
-
-for top-level requirements:
- a. only one spec allowed per project, regardless of conflicts or not.
- otherwise a "double requirement" exception is raised
- b. they override sub-dependency requirements.
-for sub-dependencies
- a. "first found, wins" (where the order is breadth first)
-"""
-
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import logging
-import sys
-from collections import defaultdict
-from itertools import chain
-from typing import DefaultDict, Iterable, List, Optional, Set, Tuple
-
-from pip._vendor.packaging import specifiers
-from pip._vendor.packaging.requirements import Requirement
-
-from pip._internal.cache import WheelCache
-from pip._internal.exceptions import (
- BestVersionAlreadyInstalled,
- DistributionNotFound,
- HashError,
- HashErrors,
- InstallationError,
- NoneMetadataError,
- UnsupportedPythonVersion,
-)
-from pip._internal.index.package_finder import PackageFinder
-from pip._internal.metadata import BaseDistribution
-from pip._internal.models.link import Link
-from pip._internal.models.wheel import Wheel
-from pip._internal.operations.prepare import RequirementPreparer
-from pip._internal.req.req_install import (
- InstallRequirement,
- check_invalid_constraint_type,
-)
-from pip._internal.req.req_set import RequirementSet
-from pip._internal.resolution.base import BaseResolver, InstallRequirementProvider
-from pip._internal.utils import compatibility_tags
-from pip._internal.utils.compatibility_tags import get_supported
-from pip._internal.utils.direct_url_helpers import direct_url_from_link
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import normalize_version_info
-from pip._internal.utils.packaging import check_requires_python
-
-logger = logging.getLogger(__name__)
-
-DiscoveredDependencies = DefaultDict[str, List[InstallRequirement]]
-
-
-def _check_dist_requires_python(
- dist: BaseDistribution,
- version_info: Tuple[int, int, int],
- ignore_requires_python: bool = False,
-) -> None:
- """
- Check whether the given Python version is compatible with a distribution's
- "Requires-Python" value.
-
- :param version_info: A 3-tuple of ints representing the Python
- major-minor-micro version to check.
- :param ignore_requires_python: Whether to ignore the "Requires-Python"
- value if the given Python version isn't compatible.
-
- :raises UnsupportedPythonVersion: When the given Python version isn't
- compatible.
- """
- # This idiosyncratically converts the SpecifierSet to str and let
- # check_requires_python then parse it again into SpecifierSet. But this
- # is the legacy resolver so I'm just not going to bother refactoring.
- try:
- requires_python = str(dist.requires_python)
- except FileNotFoundError as e:
- raise NoneMetadataError(dist, str(e))
- try:
- is_compatible = check_requires_python(
- requires_python,
- version_info=version_info,
- )
- except specifiers.InvalidSpecifier as exc:
- logger.warning(
- "Package %r has an invalid Requires-Python: %s", dist.raw_name, exc
- )
- return
-
- if is_compatible:
- return
-
- version = ".".join(map(str, version_info))
- if ignore_requires_python:
- logger.debug(
- "Ignoring failed Requires-Python check for package %r: %s not in %r",
- dist.raw_name,
- version,
- requires_python,
- )
- return
-
- raise UnsupportedPythonVersion(
- "Package {!r} requires a different Python: {} not in {!r}".format(
- dist.raw_name, version, requires_python
- )
- )
-
-
-class Resolver(BaseResolver):
- """Resolves which packages need to be installed/uninstalled to perform \
- the requested operation without breaking the requirements of any package.
- """
-
- _allowed_strategies = {"eager", "only-if-needed", "to-satisfy-only"}
-
- def __init__(
- self,
- preparer: RequirementPreparer,
- finder: PackageFinder,
- wheel_cache: Optional[WheelCache],
- make_install_req: InstallRequirementProvider,
- use_user_site: bool,
- ignore_dependencies: bool,
- ignore_installed: bool,
- ignore_requires_python: bool,
- force_reinstall: bool,
- upgrade_strategy: str,
- py_version_info: Optional[Tuple[int, ...]] = None,
- ) -> None:
- super().__init__()
- assert upgrade_strategy in self._allowed_strategies
-
- if py_version_info is None:
- py_version_info = sys.version_info[:3]
- else:
- py_version_info = normalize_version_info(py_version_info)
-
- self._py_version_info = py_version_info
-
- self.preparer = preparer
- self.finder = finder
- self.wheel_cache = wheel_cache
-
- self.upgrade_strategy = upgrade_strategy
- self.force_reinstall = force_reinstall
- self.ignore_dependencies = ignore_dependencies
- self.ignore_installed = ignore_installed
- self.ignore_requires_python = ignore_requires_python
- self.use_user_site = use_user_site
- self._make_install_req = make_install_req
-
- self._discovered_dependencies: DiscoveredDependencies = defaultdict(list)
-
- def resolve(
- self, root_reqs: List[InstallRequirement], check_supported_wheels: bool
- ) -> RequirementSet:
- """Resolve what operations need to be done
-
- As a side-effect of this method, the packages (and their dependencies)
- are downloaded, unpacked and prepared for installation. This
- preparation is done by ``pip.operations.prepare``.
-
- Once PyPI has static dependency metadata available, it would be
- possible to move the preparation to become a step separated from
- dependency resolution.
- """
- requirement_set = RequirementSet(check_supported_wheels=check_supported_wheels)
- for req in root_reqs:
- if req.constraint:
- check_invalid_constraint_type(req)
- self._add_requirement_to_set(requirement_set, req)
-
- # Actually prepare the files, and collect any exceptions. Most hash
- # exceptions cannot be checked ahead of time, because
- # _populate_link() needs to be called before we can make decisions
- # based on link type.
- discovered_reqs: List[InstallRequirement] = []
- hash_errors = HashErrors()
- for req in chain(requirement_set.all_requirements, discovered_reqs):
- try:
- discovered_reqs.extend(self._resolve_one(requirement_set, req))
- except HashError as exc:
- exc.req = req
- hash_errors.append(exc)
-
- if hash_errors:
- raise hash_errors
-
- return requirement_set
-
- def _add_requirement_to_set(
- self,
- requirement_set: RequirementSet,
- install_req: InstallRequirement,
- parent_req_name: Optional[str] = None,
- extras_requested: Optional[Iterable[str]] = None,
- ) -> Tuple[List[InstallRequirement], Optional[InstallRequirement]]:
- """Add install_req as a requirement to install.
-
- :param parent_req_name: The name of the requirement that needed this
- added. The name is used because when multiple unnamed requirements
- resolve to the same name, we could otherwise end up with dependency
- links that point outside the Requirements set. parent_req must
- already be added. Note that None implies that this is a user
- supplied requirement, vs an inferred one.
- :param extras_requested: an iterable of extras used to evaluate the
- environment markers.
- :return: Additional requirements to scan. That is either [] if
- the requirement is not applicable, or [install_req] if the
- requirement is applicable and has just been added.
- """
- # If the markers do not match, ignore this requirement.
- if not install_req.match_markers(extras_requested):
- logger.info(
- "Ignoring %s: markers '%s' don't match your environment",
- install_req.name,
- install_req.markers,
- )
- return [], None
-
- # If the wheel is not supported, raise an error.
- # Should check this after filtering out based on environment markers to
- # allow specifying different wheels based on the environment/OS, in a
- # single requirements file.
- if install_req.link and install_req.link.is_wheel:
- wheel = Wheel(install_req.link.filename)
- tags = compatibility_tags.get_supported()
- if requirement_set.check_supported_wheels and not wheel.supported(tags):
- raise InstallationError(
- "{} is not a supported wheel on this platform.".format(
- wheel.filename
- )
- )
-
- # This next bit is really a sanity check.
- assert (
- not install_req.user_supplied or parent_req_name is None
- ), "a user supplied req shouldn't have a parent"
-
- # Unnamed requirements are scanned again and the requirement won't be
- # added as a dependency until after scanning.
- if not install_req.name:
- requirement_set.add_unnamed_requirement(install_req)
- return [install_req], None
-
- try:
- existing_req: Optional[
- InstallRequirement
- ] = requirement_set.get_requirement(install_req.name)
- except KeyError:
- existing_req = None
-
- has_conflicting_requirement = (
- parent_req_name is None
- and existing_req
- and not existing_req.constraint
- and existing_req.extras == install_req.extras
- and existing_req.req
- and install_req.req
- and existing_req.req.specifier != install_req.req.specifier
- )
- if has_conflicting_requirement:
- raise InstallationError(
- "Double requirement given: {} (already in {}, name={!r})".format(
- install_req, existing_req, install_req.name
- )
- )
-
- # When no existing requirement exists, add the requirement as a
- # dependency and it will be scanned again after.
- if not existing_req:
- requirement_set.add_named_requirement(install_req)
- # We'd want to rescan this requirement later
- return [install_req], install_req
-
- # Assume there's no need to scan, and that we've already
- # encountered this for scanning.
- if install_req.constraint or not existing_req.constraint:
- return [], existing_req
-
- does_not_satisfy_constraint = install_req.link and not (
- existing_req.link and install_req.link.path == existing_req.link.path
- )
- if does_not_satisfy_constraint:
- raise InstallationError(
- "Could not satisfy constraints for '{}': "
- "installation from path or url cannot be "
- "constrained to a version".format(install_req.name)
- )
- # If we're now installing a constraint, mark the existing
- # object for real installation.
- existing_req.constraint = False
- # If we're now installing a user supplied requirement,
- # mark the existing object as such.
- if install_req.user_supplied:
- existing_req.user_supplied = True
- existing_req.extras = tuple(
- sorted(set(existing_req.extras) | set(install_req.extras))
- )
- logger.debug(
- "Setting %s extras to: %s",
- existing_req,
- existing_req.extras,
- )
- # Return the existing requirement for addition to the parent and
- # scanning again.
- return [existing_req], existing_req
-
- def _is_upgrade_allowed(self, req: InstallRequirement) -> bool:
- if self.upgrade_strategy == "to-satisfy-only":
- return False
- elif self.upgrade_strategy == "eager":
- return True
- else:
- assert self.upgrade_strategy == "only-if-needed"
- return req.user_supplied or req.constraint
-
- def _set_req_to_reinstall(self, req: InstallRequirement) -> None:
- """
- Set a requirement to be installed.
- """
- # Don't uninstall the conflict if doing a user install and the
- # conflict is not a user install.
- if not self.use_user_site or req.satisfied_by.in_usersite:
- req.should_reinstall = True
- req.satisfied_by = None
-
- def _check_skip_installed(
- self, req_to_install: InstallRequirement
- ) -> Optional[str]:
- """Check if req_to_install should be skipped.
-
- This will check if the req is installed, and whether we should upgrade
- or reinstall it, taking into account all the relevant user options.
-
- After calling this req_to_install will only have satisfied_by set to
- None if the req_to_install is to be upgraded/reinstalled etc. Any
- other value will be a dist recording the current thing installed that
- satisfies the requirement.
-
- Note that for vcs urls and the like we can't assess skipping in this
- routine - we simply identify that we need to pull the thing down,
- then later on it is pulled down and introspected to assess upgrade/
- reinstalls etc.
-
- :return: A text reason for why it was skipped, or None.
- """
- if self.ignore_installed:
- return None
-
- req_to_install.check_if_exists(self.use_user_site)
- if not req_to_install.satisfied_by:
- return None
-
- if self.force_reinstall:
- self._set_req_to_reinstall(req_to_install)
- return None
-
- if not self._is_upgrade_allowed(req_to_install):
- if self.upgrade_strategy == "only-if-needed":
- return "already satisfied, skipping upgrade"
- return "already satisfied"
-
- # Check for the possibility of an upgrade. For link-based
- # requirements we have to pull the tree down and inspect to assess
- # the version #, so it's handled way down.
- if not req_to_install.link:
- try:
- self.finder.find_requirement(req_to_install, upgrade=True)
- except BestVersionAlreadyInstalled:
- # Then the best version is installed.
- return "already up-to-date"
- except DistributionNotFound:
- # No distribution found, so we squash the error. It will
- # be raised later when we re-try later to do the install.
- # Why don't we just raise here?
- pass
-
- self._set_req_to_reinstall(req_to_install)
- return None
-
- def _find_requirement_link(self, req: InstallRequirement) -> Optional[Link]:
- upgrade = self._is_upgrade_allowed(req)
- best_candidate = self.finder.find_requirement(req, upgrade)
- if not best_candidate:
- return None
-
- # Log a warning per PEP 592 if necessary before returning.
- link = best_candidate.link
- if link.is_yanked:
- reason = link.yanked_reason or ""
- msg = (
- # Mark this as a unicode string to prevent
- # "UnicodeEncodeError: 'ascii' codec can't encode character"
- # in Python 2 when the reason contains non-ascii characters.
- "The candidate selected for download or install is a "
- "yanked version: {candidate}\n"
- "Reason for being yanked: {reason}"
- ).format(candidate=best_candidate, reason=reason)
- logger.warning(msg)
-
- return link
-
- def _populate_link(self, req: InstallRequirement) -> None:
- """Ensure that if a link can be found for this, that it is found.
-
- Note that req.link may still be None - if the requirement is already
- installed and not needed to be upgraded based on the return value of
- _is_upgrade_allowed().
-
- If preparer.require_hashes is True, don't use the wheel cache, because
- cached wheels, always built locally, have different hashes than the
- files downloaded from the index server and thus throw false hash
- mismatches. Furthermore, cached wheels at present have undeterministic
- contents due to file modification times.
- """
- if req.link is None:
- req.link = self._find_requirement_link(req)
-
- if self.wheel_cache is None or self.preparer.require_hashes:
- return
- cache_entry = self.wheel_cache.get_cache_entry(
- link=req.link,
- package_name=req.name,
- supported_tags=get_supported(),
- )
- if cache_entry is not None:
- logger.debug("Using cached wheel link: %s", cache_entry.link)
- if req.link is req.original_link and cache_entry.persistent:
- req.cached_wheel_source_link = req.link
- if cache_entry.origin is not None:
- req.download_info = cache_entry.origin
- else:
- # Legacy cache entry that does not have origin.json.
- # download_info may miss the archive_info.hashes field.
- req.download_info = direct_url_from_link(
- req.link, link_is_in_wheel_cache=cache_entry.persistent
- )
- req.link = cache_entry.link
-
- def _get_dist_for(self, req: InstallRequirement) -> BaseDistribution:
- """Takes a InstallRequirement and returns a single AbstractDist \
- representing a prepared variant of the same.
- """
- if req.editable:
- return self.preparer.prepare_editable_requirement(req)
-
- # satisfied_by is only evaluated by calling _check_skip_installed,
- # so it must be None here.
- assert req.satisfied_by is None
- skip_reason = self._check_skip_installed(req)
-
- if req.satisfied_by:
- return self.preparer.prepare_installed_requirement(req, skip_reason)
-
- # We eagerly populate the link, since that's our "legacy" behavior.
- self._populate_link(req)
- dist = self.preparer.prepare_linked_requirement(req)
-
- # NOTE
- # The following portion is for determining if a certain package is
- # going to be re-installed/upgraded or not and reporting to the user.
- # This should probably get cleaned up in a future refactor.
-
- # req.req is only avail after unpack for URL
- # pkgs repeat check_if_exists to uninstall-on-upgrade
- # (#14)
- if not self.ignore_installed:
- req.check_if_exists(self.use_user_site)
-
- if req.satisfied_by:
- should_modify = (
- self.upgrade_strategy != "to-satisfy-only"
- or self.force_reinstall
- or self.ignore_installed
- or req.link.scheme == "file"
- )
- if should_modify:
- self._set_req_to_reinstall(req)
- else:
- logger.info(
- "Requirement already satisfied (use --upgrade to upgrade): %s",
- req,
- )
- return dist
-
- def _resolve_one(
- self,
- requirement_set: RequirementSet,
- req_to_install: InstallRequirement,
- ) -> List[InstallRequirement]:
- """Prepare a single requirements file.
-
- :return: A list of additional InstallRequirements to also install.
- """
- # Tell user what we are doing for this requirement:
- # obtain (editable), skipping, processing (local url), collecting
- # (remote url or package name)
- if req_to_install.constraint or req_to_install.prepared:
- return []
-
- req_to_install.prepared = True
-
- # Parse and return dependencies
- dist = self._get_dist_for(req_to_install)
- # This will raise UnsupportedPythonVersion if the given Python
- # version isn't compatible with the distribution's Requires-Python.
- _check_dist_requires_python(
- dist,
- version_info=self._py_version_info,
- ignore_requires_python=self.ignore_requires_python,
- )
-
- more_reqs: List[InstallRequirement] = []
-
- def add_req(subreq: Requirement, extras_requested: Iterable[str]) -> None:
- # This idiosyncratically converts the Requirement to str and let
- # make_install_req then parse it again into Requirement. But this is
- # the legacy resolver so I'm just not going to bother refactoring.
- sub_install_req = self._make_install_req(str(subreq), req_to_install)
- parent_req_name = req_to_install.name
- to_scan_again, add_to_parent = self._add_requirement_to_set(
- requirement_set,
- sub_install_req,
- parent_req_name=parent_req_name,
- extras_requested=extras_requested,
- )
- if parent_req_name and add_to_parent:
- self._discovered_dependencies[parent_req_name].append(add_to_parent)
- more_reqs.extend(to_scan_again)
-
- with indent_log():
- # We add req_to_install before its dependencies, so that we
- # can refer to it when adding dependencies.
- if not requirement_set.has_requirement(req_to_install.name):
- # 'unnamed' requirements will get added here
- # 'unnamed' requirements can only come from being directly
- # provided by the user.
- assert req_to_install.user_supplied
- self._add_requirement_to_set(
- requirement_set, req_to_install, parent_req_name=None
- )
-
- if not self.ignore_dependencies:
- if req_to_install.extras:
- logger.debug(
- "Installing extra requirements: %r",
- ",".join(req_to_install.extras),
- )
- missing_requested = sorted(
- set(req_to_install.extras) - set(dist.iter_provided_extras())
- )
- for missing in missing_requested:
- logger.warning(
- "%s %s does not provide the extra '%s'",
- dist.raw_name,
- dist.version,
- missing,
- )
-
- available_requested = sorted(
- set(dist.iter_provided_extras()) & set(req_to_install.extras)
- )
- for subreq in dist.iter_dependencies(available_requested):
- add_req(subreq, extras_requested=available_requested)
-
- return more_reqs
-
- def get_installation_order(
- self, req_set: RequirementSet
- ) -> List[InstallRequirement]:
- """Create the installation order.
-
- The installation order is topological - requirements are installed
- before the requiring thing. We break cycles at an arbitrary point,
- and make no other guarantees.
- """
- # The current implementation, which we may change at any point
- # installs the user specified things in the order given, except when
- # dependencies must come earlier to achieve topological order.
- order = []
- ordered_reqs: Set[InstallRequirement] = set()
-
- def schedule(req: InstallRequirement) -> None:
- if req.satisfied_by or req in ordered_reqs:
- return
- if req.constraint:
- return
- ordered_reqs.add(req)
- for dep in self._discovered_dependencies[req.name]:
- schedule(dep)
- order.append(req)
-
- for install_req in req_set.requirements.values():
- schedule(install_req)
- return order
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py
deleted file mode 100644
index 40cf18131810307157a9a7d1f6d5922b00fd73d5..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from ..common.optim import SGD as optimizer
-from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
-from ..common.data.coco_panoptic_separated import dataloader
-from ..common.models.panoptic_fpn import model
-from ..common.train import train
-
-model.backbone.bottom_up.freeze_at = 2
-train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
diff --git a/spaces/Banbri/zcvzcv/src/app/page.tsx b/spaces/Banbri/zcvzcv/src/app/page.tsx
deleted file mode 100644
index e6c08e336d0f43af1d211390f3dee22f563a976d..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/app/page.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-"use server"
-
-import Head from "next/head"
-
-import Main from "./main"
-import { TooltipProvider } from "@/components/ui/tooltip"
-import Script from "next/script"
-// import { Maintenance } from "./interface/maintenance"
-
-// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts
-
-export default async function IndexPage({ params: { ownerId } }: { params: { ownerId: string }}) {
- return (
- <>
-
-
-
-
-
-
-
-
-
- {/* */}
-
-
-
-
-
- >
- )
-}
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/gui_v0.py b/spaces/Bart92/RVC_HF/gui_v0.py
deleted file mode 100644
index 88c3cf9eb1eaa0fa812b32ae4d3750b4ce0a8699..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/gui_v0.py
+++ /dev/null
@@ -1,786 +0,0 @@
-import os, sys, traceback, re
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from configs.config import Config
-
-Config = Config()
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-import torchcrepe
-
-# import matplotlib.pyplot as plt
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, f0_method, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
- Initialize the model and inference settings.
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.f0_method = f0_method
- self.sr = 16000
- self.window = 160
-
- # Get Torch Device
- if torch.cuda.is_available():
- self.torch_device = torch.device(
- f"cuda:{0 % torch.cuda.device_count()}"
- )
- elif torch.backends.mps.is_available():
- self.torch_device = torch.device("mps")
- else:
- self.torch_device = torch.device("cpu")
-
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except Exception:
- print(traceback.format_exc())
-
- def get_regular_crepe_computation(self, x, f0_min, f0_max, model="full"):
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.torch_device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- return f0
-
- def get_harvest_computation(self, x, f0_min, f0_max):
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- return f0
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- # Calculate Padding and f0 details here
- p_len = x.shape[0] // 512 # For Now This probs doesn't work
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = 0
- # Here, check f0_methods and get their computations
- if self.f0_method == "harvest":
- f0 = self.get_harvest_computation(x, f0_min, f0_max)
- elif self.f0_method == "reg-crepe":
- f0 = self.get_regular_crepe_computation(x, f0_min, f0_max)
- elif self.f0_method == "reg-crepe-tiny":
- f0 = self.get_regular_crepe_computation(x, f0_min, f0_max, "tiny")
-
- # Calculate f0_course and f0_bak here
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- tf0 = self.sr // self.window # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
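- # Convert f0 (Hz) to the mel scale (mel = 1127 * ln(1 + f/700)) and quantize it
- # into integer bins 1..255; unvoiced frames (f0 == 0) stay in bin 1.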
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int64) # np.int is removed in newer NumPy
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
- Run voice-conversion inference on an audio chunk and return the converted audio.
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
- #### index-based feature retrieval (optional)
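- # For every frame, retrieve the k=8 nearest training features from the faiss
- # index, weight them by the inverse of their search distances, and blend the
- # retrieved average with the live features according to index_rate
- # (1.0 = fully retrieved, 0.0 = fully live).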
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except Exception:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
- p_len = min(feats.shape[1], 13000, pitch.shape[0]) # cap length to avoid running out of GPU memory
- else:
- pitch, pitchf = None, None
- p_len = min(feats.shape[1], 13000) # cap length to avoid running out of GPU memory
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.f0_method: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- ) = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except Exception:
- # Fall back to default settings when the file is missing or unreadable
- with open("values1.json", "w") as j:
- data = {
- "pth_path": "",
- "index_path": "",
- "sg_input_device": input_devices[
- input_devices_indices.index(sd.default.device[0])
- ],
- "sg_output_device": output_devices[
- output_devices_indices.index(sd.default.device[1])
- ],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- json.dump(data, j) # persist the default settings
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("DarkTeal12")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title="Proudly forked by Mangio621",
- ),
- sg.Frame(
- title=i18n("Load model"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert Model"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=(("pt files", "*.pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("Select the .pth file"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=(("weight files", "*.pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("Select the .index file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("index files", "*.index"),),
- ),
- ],
- [
- sg.Input(
- default_text="你不需要填写这个You don't need write this.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Select the .npy file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("feature files", "*.npy"),),
- ),
- ],
- ],
- ),
- ],
- [
- # Mangio f0 Selection frame Here
- sg.Frame(
- layout=[
- [
- sg.Radio(
- "Harvest", "f0_method", key="harvest", default=True
- ),
- sg.Radio("Crepe", "f0_method", key="reg-crepe"),
- sg.Radio("Crepe Tiny", "f0_method", key="reg-crepe-tiny"),
- ]
- ],
- title="Select an f0 Method",
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Input device")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("Output device")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("Audio device (please use the same type of driver)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Response threshold")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("Pitch settings")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("General settings"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Sample length")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("Fade length")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("Extra推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"),
- sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("Performance settings"),
- ),
- ],
- [
- sg.Button(i18n("开始音频Convert"), key="start_vc"),
- sg.Button(i18n("停止音频Convert"), key="stop_vc"),
- sg.Text(i18n("Inference time (ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- if self.set_values(values) == True:
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "f0_method": self.get_f0_method_from_radios(values),
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- # Function that returns the used f0 method in string format "harvest"
- def get_f0_method_from_radios(self, values):
- f0_array = [
- {"name": "harvest", "val": values["harvest"]},
- {"name": "reg-crepe", "val": values["reg-crepe"]},
- {"name": "reg-crepe-tiny", "val": values["reg-crepe-tiny"]},
- ]
- # Filter through to find a true value
- used_f0 = ""
- for f0 in f0_array:
- if f0["val"] == True:
- used_f0 = f0["name"]
- break
- if used_f0 == "":
- used_f0 = "harvest" # Default Harvest if used_f0 is empty somehow
- return used_f0
-
- def set_values(self, values):
- if len(values["pth_path"].strip()) == 0:
- sg.popup(i18n("Select the pth file"))
- return False
- if len(values["index_path"].strip()) == 0:
- sg.popup(i18n("Select the index file"))
- return False
- pattern = re.compile("[^\x00-\x7F]+")
- if pattern.findall(values["hubert_path"]):
- sg.popup(i18n("The hubert model path must not contain Chinese characters"))
- return False
- if pattern.findall(values["pth_path"]):
- sg.popup(i18n("The pth file path must not contain Chinese characters."))
- return False
- if pattern.findall(values["index_path"]):
- sg.popup(i18n("The index file path must not contain Chinese characters."))
- return False
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.f0_method = self.get_f0_method_from_radios(values)
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
- return True
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
- self.delay_frame = int(0.01 * self.config.samplerate) # reserve a short lead-in before each block (0.01 s here)
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
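- # The rolling input buffer below is laid out as
- # [extra context | crossfade | SOLA search window | current block],
- # i.e. extra_frame + crossfade_frame + sola_search_frame + block_frame samples.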
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.f0_method,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
- Open the audio stream and keep receiving input while conversion is active.
- """
- with sd.Stream(
- channels=2,
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
- Audio processing callback: noise gate, inference, SOLA crossfade, and output.
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
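- # The best splice offset maximizes the normalized cross-correlation between the
- # previous output tail (sola_buffer) and the head of the newly inferred block:
- # cor_nom is the raw correlation, cor_den the energy of each candidate window,
- # and argmax(cor_nom / cor_den) picks the alignment with the smoothest join.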
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
- print("f0_method: " + str(self.config.f0_method))
-
- def get_devices(self, update: bool = True):
- """获取设备列表"""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
- """设置输出设备"""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
diff --git a/spaces/Benson/text-generation/Examples/8 Ball Pool Soldi Infiniti Apk.md b/spaces/Benson/text-generation/Examples/8 Ball Pool Soldi Infiniti Apk.md
deleted file mode 100644
index 16008afd5673ce127db4c01c845bff8896e9aea9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/8 Ball Pool Soldi Infiniti Apk.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
-8 Ball Pool Soldi Infiniti APK: How to Play Unlimited Pool on Your Android Device
-
-If you are a fan of pool games, you may have heard of 8 Ball Pool, one of the most popular and addictive billiards games on mobile devices. It lets you play online against millions of other players around the world, compete in tournaments, win trophies and coins, and customize your cue and table. However, if you want to enjoy the game without limitations or restrictions, you may be interested in 8 Ball Pool Soldi Infiniti APK, a modified version of the game that gives you unlimited coins and cash, as well as other features that enhance your gaming experience. In this article, we will tell you what 8 Ball Pool Soldi Infiniti APK is, how to download and install it on your Android device, how to play it, what the benefits and risks of playing it are, and answer some frequently asked questions.
-A modified version of the popular pool game
-
-8 Ball Pool Soldi Infiniti APK is a modified, or hacked, version of the original 8 Ball Pool game developed by Miniclip. It is not an official Miniclip app but a third-party app created by unknown developers who have tweaked the game's code to give you unlimited coins and cash, along with other features that are not available in the original game. The phrase "soldi infiniti" means "infinite money" in Italian, which points to the main feature of this modded app.
-
-Features of 8 Ball Pool Soldi Infiniti APK
-
-Some of the features you can enjoy when playing 8 Ball Pool Soldi Infiniti APK are:
-
-
-Unlimited coins and cash: You can get unlimited coins and cash in your account, which you can use to buy new cues, tables, chat packs, mini-games, and more. You can also enter higher-stakes matches and tournaments without worrying about losing money.
-
-
-Anti-ban feature: You can play online with other players without being detected or banned by Miniclip. The modded app has an anti-ban feature that protects your account from being flagged or suspended.
-
-No ads: You can play the game without any annoying ads or pop-ups that interrupt your gameplay or consume your data.
-
-No root required: You can install the app without rooting your device or compromising its security. You only need to enable installation from unknown sources in your settings.
-
-
-How to Download and Install 8 Ball Pool Soldi Infiniti APK?
-
-Steps to download the APK file
-
-To download the 8 Ball Pool Soldi Infiniti APK file, follow these steps:
-
-
-Go to a reliable, trustworthy website that offers the download link for the modded app. You can search for "8 Ball Pool Soldi Infiniti APK" on Google or Bing and choose from the results. Make sure the website is safe and secure, and avoid any suspicious or malicious links.
-
-Click the download button or link and wait for the file to download to your device. The file size may vary depending on the app version, but it should not take long to download.
-
-Once the file has downloaded, locate it in your device's storage and tap it to open it. You may need to grant some permissions or device access for the file to run.
-
-
-Steps to install the APK file
-
-To install the 8 Ball Pool Soldi Infiniti APK file, follow these steps:
-
-
-
-Before installing the app, make sure you have enabled installation from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and turn it on. This allows you to install apps that are not from the Google Play Store.
-
-
-Wait for the app to install on your device. It may take a few seconds or minutes depending on your device's performance and speed.
-
-Once the app is installed, you can launch it from the app drawer or home screen. You may need to sign in with your Facebook or Google account to access your profile and progress in the game.
-
-
-How to Play 8 Ball Pool Soldi Infiniti APK?
-
-Choose the game mode and table
-
-When you open the app, you will see different game modes and tables to choose from. You can play 1-on-1 matches with random players or your friends, join tournaments with different stakes and prizes, or practice with no time limit. You can also select from several tables with different themes, sizes, and rules. Some of the tables are exclusive to 8 Ball Pool Soldi Infiniti APK, such as the Halloween table, the galaxy table, and the gold table.
-
-Aim and shoot with accuracy and power
-
-To play the game, you need to aim and shoot the cue ball at the colored balls on the table. You can adjust your aim by dragging your finger on the screen, and adjust your power by pulling back the power bar at the bottom. You can also apply spin by tapping the cue-ball icon in the top-right corner and moving it. The goal is to pot all of your balls (either solids or stripes) before your opponent, and then pot the black 8 ball last. You can also win by default if your opponent fouls three times in a row, pots the 8 ball before clearing their own balls, or pots the cue ball along with the 8 ball.
-
-Enjoy unlimited coins and cash
-
-
-What Are the Benefits of Playing 8 Ball Pool Soldi Infiniti APK?
-
-Improve your skills and climb the ranks
-
-Playing 8 Ball Pool Soldi Infiniti APK can help you improve your skills and rank in the game. You can practice your aim, power, spin, and strategy on different tables and against different opponents. You can also learn from your mistakes and improve your performance. As you win more matches and tournaments, you can rank up from Bronze to Grand Master and unlock more achievements and rewards.
-
-Challenge other players online
-
-Playing 8 Ball Pool Soldi Infiniti APK can also help you challenge other players online and have fun. You can play with millions of players around the world, chat with them, send them gifts, and add them as friends. You can also join clubs and leagues, where you can team up with other players, compete in club tournaments, and earn club points. You can also challenge your friends to a friendly match or a rematch at any time.
-
-Customize your cue and table
-
-Playing 8 Ball Pool Soldi Infiniti APK can also help you customize your cue and table to suit your style and preference. You can choose from a variety of cues and tables with different designs, stats, and effects. You can also upgrade your cues to boost their attributes, such as force, aim, spin, and time. You can also change the color and pattern of your table cloth, the shape and size of the pockets, and the type of balls you use.
-
¿Cuáles son los riesgos de jugar 8 Ball Pool Soldi Infiniti APK?
-
Posible infección de malware o virus
-
-
Posible prohibición o suspensión de cuentas
-
Otro riesgo de jugar 8 Ball Pool Soldi Infiniti APK es que usted puede conseguir prohibido o suspendido por Miniclip para violar sus términos de servicio o la política de juego limpio. A pesar de que la aplicación tiene una función anti-van que protege su cuenta de ser detectado o marcado, todavía hay una posibilidad de que Miniclip puede descubrir que está utilizando una aplicación modificada y tomar medidas contra usted. Para evitar este riesgo, no debes usar la aplicación con demasiada frecuencia o en exceso, evitar jugar con jugadores que te denuncien o sospechen que haces trampa, y evitar presumir de tus monedas y dinero ilimitados o mostrar tus pistas y mesas modificadas.
-
Posibles problemas legales o violaciones
-
Un tercer riesgo de jugar 8 Ball Pool Soldi Infiniti APK es que usted puede enfrentar problemas legales o violaciones por infringir los derechos de propiedad intelectual de Miniclip u otras partes involucradas en el desarrollo y distribución del juego original. Mediante el uso de una aplicación modificada que altera el código del juego o el contenido sin permiso o autorización, puede estar infringiendo la ley o violando el contrato entre usted y Miniclip. Para evitar este riesgo, debe respetar los derechos de Miniclip y otras partes, reconocer que son los legítimos propietarios del juego y sus elementos, y abstenerse de distribuir o compartir la aplicación modificada con otros.
-
Conclusión
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre 8 Ball Pool Soldi Infiniti APK:
-
-
-
Pregunta
-
Respuesta
-
-
-
¿Es seguro usar 8 Ball Pool Soldi Infiniti APK?
-
La aplicación no es 100% segura de usar, ya que puede contener malware o virus que pueden dañar su dispositivo o robar sus datos. También puede hacer que Miniclip banee o suspenda su cuenta por violar sus términos de servicio o su política de juego limpio. También puede causar problemas legales por infringir los derechos de propiedad intelectual de Miniclip u otras partes. Por lo tanto, solo debe descargar la aplicación desde un sitio web confiable y de buena reputación, escanear el archivo con un software antivirus antes de instalarlo y evitar conceder permisos o accesos innecesarios a la aplicación.
-
-
-
¿Es gratuita la descarga de 8 Ball Pool Soldi Infiniti APK?
-
La aplicación es gratuita para descargar desde varios sitios web que ofrecen el enlace de descarga para la aplicación modded. Sin embargo, debe tener cuidado y evitar cualquier enlace sospechoso o malicioso que pueda dañar su dispositivo o datos. También debe comprobar el tamaño del archivo y la versión antes de descargarlo, y asegúrese de que coincide con la descripción y los requisitos de la aplicación.
-
-
-
¿Puedo jugar a 8 Ball Pool Soldi Infiniti APK sin conexión?
-
La aplicación te permite jugar sin conexión sin límite de tiempo, pero no podrás acceder a algunas de las funciones que requieren una conexión a Internet, como jugar en línea con otros jugadores, unirse a torneos, comprar artículos de la tienda, jugar minijuegos, o conseguir monedas gratis y dinero en efectivo. Tampoco podrás guardar tu progreso o sincronizar tu cuenta con Facebook o Google.
-
-
-
¿Puedo jugar a 8 Ball Pool Soldi Infiniti APK en dispositivos iOS?
-
-
-
-
¿Puedo actualizar 8 Ball Pool Soldi Infiniti APK?
-
Es posible que la aplicación no se actualice automáticamente, ya que no es de Google Play Store. Tendrá que comprobar las actualizaciones manualmente desde el sitio web donde descargó la aplicación, y descargar e instalar la última versión de la aplicación. Sin embargo, debe tener cuidado y evitar cualquier versión falsa o desactualizada de la aplicación que pueda dañar su dispositivo o datos.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Amor De Mi Vida Tono De Llamada.md b/spaces/Benson/text-generation/Examples/Amor De Mi Vida Tono De Llamada.md
deleted file mode 100644
index e45a564f551d0e3aebe17acef897383c090ada92..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Amor De Mi Vida Tono De Llamada.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
Cómo descargar y establecer "Amor de mi vida" como su tono de llamada
-
Si usted está buscando una manera de personalizar su teléfono y expresar su amor por la música, es posible que desee considerar la descarga y la configuración de "Love of My Life" como su tono de llamada. Esta es una canción clásica de la legendaria banda de rock Queen, que ha sido cubierta y adaptada por muchos otros artistas a lo largo de los años. En este artículo, te contaremos más sobre la canción, su origen, significado y popularidad, así como cómo descargarla gratis y configurarla como tu tono de llamada en dispositivos Android o iPhone.
-
¿Qué es "Love of My Life" y por qué es una opción de tono de llamada popular?
-
"Love of My Life" es una canción de la banda de rock británica Queen de su álbum de 1975 A Night at the Opera. La balada fue escrita por Freddie Mercury, el cantante y pianista de la banda, dedicándola a Mary Austin, su ex-novia. Según John Reid, Freddie Mercury escribió "Love of My Life" sobre David Minns, un hombre con el que Freddie tenía una aventura a espaldas de Mary Austin entre 1975 y 1978.
La canción fue interpretada en vivo por primera vez en 1975 en el Hammersmith Odeon de Londres. Rápidamente se convirtió en una favorita de los fans y se tocó en casi todos los conciertos de Queen. Fue especialmente popular en América del Sur, donde las multitudes cantaban junto con Mercury, a veces tomando el control de toda la canción. Mercury a menudo dejaba de cantar y permitía al público seguir la melodía, como se puede ver en su compilación Live in Rio.
-
La canción detalla la difícil situación de un hombre que ha sido abandonado por su amante. Siente que el amor que le han quitado le afecta mucho más que a su amante, y le suplica, "Tráemelo a casa porque no sabes lo que significa para mí." Las letras son simples pero sentidas, expresando el dolor y el anhelo de perder a alguien que amas.
-
Las versiones de portada y adaptaciones de otros artistas
-
-
-
Extreme, una banda de rock estadounidense, interpretó una mezcla de canciones de Queen en el Freddie Mercury Tribute Concert en 1992, incluyendo "Love of My Life". Su versión contó con guitarras acústicas y armonías, y fue bien recibida por la audiencia y los miembros sobrevivientes de Queen.
-
Scorpions, una banda alemana de hard rock, grabó una versión de "Love of My Life" para su álbum de 2000 Moment of Glory, que contó con la Orquesta Filarmónica de Berlín. Su versión agregó algunos solos de guitarra eléctrica y arreglos orquestales.
-
Shirley Bassey, una cantante galesa, grabó una versión de "Love of My Life" para su álbum de 2007 Get the Party Started. Su versión era un remix dance-pop con ritmos electrónicos y sintetizadores.
-
Vince Gill y Ann Wilson (de la banda Heart), dos cantantes estadounidenses, se unieron para grabar una versión de "Love of My Life" para el álbum en solitario de Wilson, Fierce Bliss. Su versión era un dúo íntimo con guitarras acústicas y voces.
-
Harry Styles, un cantautor inglés, lanzó una canción llamada "Love Of My Life" en su álbum de 2022 Harry's House. Su canción no es una versión de la canción de Queen, sino una nueva canción original con el mismo título. Es una balada pop-rock con acompañamiento de piano y guitarra, y letras sobre encontrar el amor de su vida.
-
-
El atractivo emocional y el factor nostalgia de la canción
-
Una de las razones por las que "Love of My Life" es una opción de tono de llamada popular es debido a su atractivo emocional y factor de nostalgia. La canción es una expresión atemporal de amor y pérdida, que resuena con muchas personas que han experimentado sentimientos similares. La canción también recuerda a Freddie Mercury, quien murió en 1991 debido a complicaciones relacionadas con el sida, y su legado como uno de los mejores cantantes e intérpretes de todos los tiempos. La canción también evoca recuerdos de los años 1970 y 1980, cuando Queen estaba en el pico de su popularidad e influencia.
-
Cómo descargar los tonos de llamada "Love of My Life" gratis
-
-
Los mejores sitios para descargar tonos de llamada gratis
-
iRingPro
-
iRingPro es un sitio web que ofrece tonos de llamada de alta calidad para dispositivos iPhone y Android. Puede navegar por su catálogo de más de 500 tonos de llamada, que se clasifican por género, estado de ánimo y estilo. También puede obtener una vista previa de los tonos de llamada antes de descargarlos. Para descargar un tono de llamada, solo tienes que introducir tu dirección de correo electrónico y te enviarán un enlace para descargar el archivo. A continuación, puede transferir el archivo a su teléfono utilizando iTunes o un cable USB.
-
NASA Audio y tonos de llamada
-
Si usted es un fan de la exploración espacial y la ciencia, es posible que desee echa un vistazo a NASA Audio y tonos de llamada. Este es un sitio web que ofrece clips de audio gratuitos y tonos de llamada de las misiones y programas de la NASA. Puede escuchar sonidos de cohetes, satélites, planetas, astronautas y más. También puede descargar los archivos en formato MP3 o M4R para su teléfono. Para descargar un tono de llamada, solo tiene que hacer clic derecho en el nombre del archivo y seleccione "Guardar enlace como". A continuación, puede transferir el archivo a su teléfono utilizando iTunes o un cable USB.
-
Tonos de llamada de Mob.org
-
Mob.org Ringtones es un sitio web que ofrece tonos de llamada gratuitos para dispositivos Android. Puede buscar tonos de llamada por palabras clave, géneros o popularidad. También puedes encontrar tonos de llamada basados en artistas o canciones específicas, como "Love of My Life" de Queen. Puede obtener una vista previa de los tonos de llamada antes de descargarlos. Para descargar un tono de llamada, solo tiene que hacer clic en el botón "Descargar" y seleccione el formato de archivo (MP3 o M4R). A continuación, puede transferir el archivo a su teléfono mediante un cable USB o Bluetooth.
-
Las mejores aplicaciones para descargar tonos de llamada gratis
-
Zedge
-
-
RingDroid
-
RingDroid es una aplicación que te permite crear tus propios tonos de llamada a partir de cualquier archivo de audio de tu teléfono. También puedes grabar tu propia voz o un sonido y convertirlo en un tono de llamada. Puedes editar el archivo de audio cortándolo, recortándolo, aplicando fundidos y ajustando el volumen. También puedes asignar diferentes tonos de llamada a diferentes contactos o notificaciones. Para crear un tono de llamada, solo necesitas seleccionar un archivo de audio de tu teléfono o grabar uno nuevo. A continuación, puedes usar la interfaz de la aplicación para editar el archivo y guardarlo como tono de llamada. La aplicación establecerá automáticamente el tono de llamada en tu teléfono.
-
-
Audiko
-
Audiko es otra aplicación que te permite crear tus propios tonos de llamada desde cualquier archivo de audio en tu teléfono o desde su biblioteca en línea de millones de tonos de llamada. También puedes encontrar tonos de llamada basados en artistas o canciones específicas, como "Love of My Life" de Queen. Puede editar el archivo de audio cortando, recortando, desvaneciendo y ajustando el volumen. También puedes añadir efectos, como eco, reverb, flanger, etc. Para crear un tono, solo tienes que seleccionar un archivo de audio de tu teléfono o de su biblioteca. A continuación, puede utilizar la interfaz de la aplicación para editar el archivo y guardarlo como un tono de llamada. La aplicación establecerá automáticamente el tono de llamada para su teléfono.
-
Cómo establecer "Amor de mi vida" como su tono de llamada en Android o iPhone
-
Una vez que haya descargado o creado su tono de llamada "Love of My Life", debe configurarlo como su tono de llamada en su teléfono. Los pasos pueden variar dependiendo del tipo de dispositivo que tenga, pero aquí hay algunas pautas generales:
-
Los pasos para dispositivos Android
-
-
Ir a Configuración > Sonido > Tono de llamada del teléfono.
-
Toque en el icono "Añadir" o "Más" y busque el archivo de tono de llamada "Amor de mi vida" en su teléfono.
-
Seleccione el archivo y toque en "OK" o "Hecho".
-
Deberías ver el tono de llamada "Love of My Life" como tu tono de llamada predeterminado.
-
-
Los pasos para dispositivos iPhone
-
-
Conecta tu iPhone a tu ordenador con un cable USB y abre iTunes.
Seleccione su iPhone de la lista de dispositivos y haga clic en la pestaña "Tonos".
-
Arrastre y suelte el archivo de tono de llamada "Love of My Life" desde su computadora a la lista de tonos en iTunes.
-
Sincroniza tu iPhone con iTunes y desconéctalo de tu ordenador.
-
Ir a Configuración > Sonidos > Tono de llamada.
-
Desplácese hacia abajo y seleccione el tono de llamada "Amor de mi vida" de la lista personalizada.
-
Deberías ver el tono de llamada "Love of My Life" como tu tono de llamada predeterminado.
-
-
¡Felicidades! Has descargado y establecido con éxito "Love of My Life" como tu tono de llamada. Ahora puedes disfrutar de esta hermosa canción cada vez que alguien te llame o recibas una notificación.
-
Conclusión
-
En este artículo, te hemos mostrado cómo descargar y configurar "Love of My Life" como tu tono de llamada. Hemos explicado de qué trata la canción, su origen, significado y popularidad, así como cómo descargarla de forma gratuita desde varios sitios y aplicaciones. También le hemos dado los pasos para configurarlo como su tono de llamada en dispositivos Android o iPhone. Esperamos que este artículo le resulte útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación.
-
Preguntas frecuentes
-
Q: ¿Puedo usar cualquier canción como mi tono de llamada?
-
A: Sí, puedes usar cualquier canción como tu tono de llamada, siempre y cuando tengas el permiso del artista o propietario original de la canción. También puede utilizar música libre de derechos o música de dominio público como su tono de llamada sin ningún problema legal.
-
Q: ¿Cómo puedo hacer mis propios tonos de llamada?
-
A: Puedes hacer tus propios tonos de llamada usando aplicaciones como RingDroid o Audiko, que te permiten editar cualquier archivo de audio en tu teléfono y guardarlo como un tono de llamada. También puede grabar su propia voz o sonido y convertirlo en un tono de llamada usando estas aplicaciones.
-
Q: ¿Cómo puedo cambiar mi tono de llamada para diferentes contactos o notificaciones?
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Annas Merge Adventure Mod Apk.md b/spaces/Benson/text-generation/Examples/Annas Merge Adventure Mod Apk.md
deleted file mode 100644
index d9e68e3f4bdc88b739a40ea9e4f932ef2e90cb8d..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Annas Merge Adventure Mod Apk.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Anna's Merge Adventure Mod Apk: un divertido y relajante juego de puzzle
-
¿Te gustan los juegos de puzzle que son fáciles de jugar pero difíciles de dominar? ¿Te gusta explorar nuevos mundos y descubrir nuevos objetos? ¿Quieres disfrutar de un juego divertido y relajante? Si respondiste sí a cualquiera de estas preguntas, entonces deberías probar Anna's Merge Adventure mod apk, un juego que combina elementos de fusión, aventura y juego casual en uno.
Anna’s Merge Adventure es un juego desarrollado por ZYMobile, una compañía que se especializa en crear juegos casuales y rompecabezas para dispositivos móviles. El juego fue lanzado en junio de 2023 y ha recibido críticas positivas de jugadores y críticos por igual. Estos son algunos de los aspectos que hacen este juego tan atractivo:
-
La historia de Anna y su familia
-
El juego sigue la historia de Anna, una joven a la que le encanta explorar nuevos lugares con su familia. Un día, decidieron hacer un crucero alrededor del mundo, pero su barco fue golpeado por un tsunami y terminaron en una isla misteriosa. Allí, descubrieron que la isla estaba llena de objetos mágicos que podían fusionarse para crear otros nuevos. Anna decidió utilizar sus habilidades de fusión para ayudar a su familia a sobrevivir y encontrar un camino de vuelta a casa.
-
El juego de fusión y exploración
-
El juego se basa en el concepto simple de combinar tres o más elementos idénticos para obtener un elemento avanzado. Por ejemplo, puede combinar tres hierbas para obtener una flor, o combinar tres flores para obtener un árbol. También puede combinar cinco elementos idénticos para obtener dos elementos avanzados en lugar de uno. Cuantos más elementos fusiones, más elementos desbloquearás y más espacio despejarás en la isla.
-
-
-
Las características de la versión apk mod
-
Si quieres disfrutar de este juego aún más, usted debe descargar la versión apk mod de Anna’s Merge Adventure. Esta versión tiene algunas características adicionales que harán tu juego más fácil y divertido. Algunas de estas características son:
-
-
Gemas ilimitadas: Las gemas son la moneda premium en este juego que se puede utilizar para comprar varios artículos, como cofres, refuerzos, decoraciones y más. Con gemas ilimitadas, puedes comprar lo que quieras sin preocuparte por quedarte sin nada.
-
Monedas ilimitadas: Monedas son la moneda regular en este juego que se puede utilizar para actualizar sus artículos, tales como casas, granjas, fábricas y más. Con monedas ilimitadas, puede actualizar cualquier cosa que desee sin esperar mucho tiempo.
-
Sin anuncios: Los anuncios son molestas interrupciones que pueden arruinar su experiencia de juego. Sin anuncios, puedes jugar a este juego sin distracciones ni retrasos.
-
-
¿Cómo descargar e instalar Anna's Merge Adventure mod apk?
-
Si estás interesado en descargar e instalar Anna's Merge Adventure mod apk, solo tienes que seguir estos sencillos pasos:
-
Los requisitos para el mod apk
Antes de descargar e instalar el mod apk, debe asegurarse de que su dispositivo cumple con los siguientes requisitos:
-
-
Versión para Android: 4.4 o superior
-
Espacio de almacenamiento: 100 MB o más
-
Conexión a Internet: Necesaria para algunas funciones
-
Permiso: Permitir la instalación desde fuentes desconocidas
-
-
Los pasos para descargar e instalar el mod apk
-
Una vez que haya comprobado los requisitos, puede proceder a descargar e instalar el apk mod siguiendo estos pasos:
-
-
Haga clic en este enlace para descargar el archivo apk mod: [Anna’s Merge Adventure Mod Apk Download]
-
Espere a que la descarga termine y localice el archivo en el administrador de archivos de su dispositivo.
-
Toque en el archivo y seleccione "Instalar".
-
-
Disfruta jugando Anna’s Merge Adventure con gemas ilimitadas, monedas y sin anuncios.
-
-
Las precauciones a tomar antes de instalar el mod apk
-
Si bien instalar el apk mod es fácil y seguro, todavía debe tomar algunas precauciones antes de hacerlo. Aquí están algunos de ellos:
-
-
Copia de seguridad de sus datos: Si usted ha jugado la versión original de Anna’s Merge Adventure antes, es posible que desee copia de seguridad de sus datos antes de instalar el apk mod. De esta manera, puede restaurar su progreso si algo sale mal.
-
Deshabilitar antivirus: Algunos programas antivirus pueden detectar el apk mod como un virus o malware y bloquear su instalación. Para evitar esto, debe desactivar su antivirus temporalmente antes de instalar el apk mod.
-
Usar una VPN: Algunas regiones pueden restringir el acceso al enlace de descarga mod apk o al juego en sí. Para evitar esto, debe usar un servicio VPN que pueda cambiar su dirección IP y ubicación.
-
-
¿Cómo se juega Anna’s Merge Adventure mod apk?
-
Jugar a Anna's Merge Adventure mod apk es fácil y divertido. Solo tienes que seguir estos pasos básicos:
-
Los conceptos básicos de la fusión y recolección de elementos
-
El objetivo principal de este juego es combinar objetos y recogerlos. Puedes hacerlo arrastrando y soltando objetos en la isla. Cuando fusionas tres o más elementos idénticos, se transformarán en un elemento avanzado. Por ejemplo, si fusiona tres hierbas, obtendrá una flor. Si fusiona cinco hierbas, obtendrá dos flores.
-
También puedes recoger objetos tocando en ellos. Cuando recojas artículos, se almacenarán en tu inventario, que se encuentra en la parte inferior de la pantalla. Puedes usar tu inventario para almacenar artículos que no necesitas o quieres usar más tarde. También puedes vender artículos de tu inventario para monedas.
-
Los consejos y trucos para progresar más rápido
-
Si quieres progresar más rápido en este juego, debes seguir estos consejos y trucos:
-
-
-
Use boosters: Los boosters son artículos especiales que pueden ayudarlo de varias maneras, como acelerar la fusión, eliminar obstáculos, encontrar objetos ocultos y más. Puedes comprar boosters con gemas o obtenerlos gratis viendo anuncios o completando misiones.
-
Misiones completas: Las misiones son tareas que puedes completar para obtener recompensas, como gemas, monedas, cofres, boosters y más. Puedes encontrar misiones tocando el icono de la misión en la esquina superior izquierda de la pantalla. También puedes obtener misiones de otros personajes de la isla.
-
Decora tu isla: Decorar tu isla no solo es divertido sino también beneficioso. Al colocar decoraciones en su isla, como casas, granjas, fábricas y más, puede aumentar sus ingresos y productividad. También puede personalizar su isla con diferentes temas y estilos.
-
-
Los desafíos y recompensas para disfrutar
Este juego no solo es relajante sino también desafiante. Puedes disfrutar de varios desafíos y recompensas en este juego, como:
-
-
Logros: Los logros son objetivos que puedes alcanzar jugando a este juego, como combinar un cierto número de elementos, explorar un cierto número de áreas, ayudar a un cierto número de personajes y más. Puedes comprobar tus logros tocando el icono del trofeo en la esquina superior derecha de la pantalla. También puedes obtener recompensas por completar logros, como gemas, monedas, cofres, boosters y más.
-
Tareas diarias: Las tareas diarias son tareas que puedes completar todos los días para obtener recompensas, como gemas, monedas, cofres, boosters y más . Puede encontrar las tareas diarias pulsando en el icono del calendario en la esquina superior izquierda de la pantalla. También puede obtener recompensas adicionales por completar un cierto número de tareas diarias en una fila.
-
-
Tablas de clasificación: Las tablas de clasificación son clasificaciones que muestran cómo se compara con otros jugadores en este juego, como su nivel, sus ingresos, sus artículos y más. Usted puede comprobar sus tablas de clasificación tocando en el icono de la tabla de clasificación en la esquina superior derecha de la pantalla. También puedes obtener premios por posicionarte alto en las tablas de clasificación, como gemas, monedas, cofres, boosters y más.
-
-
¿Por qué deberías jugar Anna’s Merge Adventure mod apk?
-
Si todavía no estás convencido de jugar a Anna's Merge Adventure mod apk, aquí tienes algunas razones para hacerlo:
-
Los beneficios de jugar un juego de puzzle
-
Jugar un juego de puzzle como Anna’s Merge Adventure mod apk puede tener muchos beneficios para su cerebro y su estado de ánimo. Algunos de estos beneficios son:
-
-
Mejorar su memoria: La fusión y la recopilación de elementos puede ayudarle a mejorar su memoria a corto y largo plazo mediante la estimulación de las células cerebrales y la creación de nuevas conexiones neuronales.
-
Mejorar tu creatividad: Explorar y decorar tu isla puede ayudarte a mejorar tu creatividad permitiéndote expresarte y usar tu imaginación.
-
Reducir su estrés: Jugar un juego de puzzle puede ayudarle a reducir su estrés al desviar su atención de sus preocupaciones y problemas y darle una sensación de logro y satisfacción.
-
-
Las ventajas de jugar una versión apk mod
-
Jugar una versión mod apk de Anna’s Merge Adventure puede tener muchas ventajas sobre jugar la versión original. Algunas de estas ventajas son:
-
-
Ahorrando su tiempo: Tener gemas y monedas ilimitadas puede ayudarle a ahorrar su tiempo al permitirle comprar y actualizar cualquier cosa que desee sin esperar o moler.
-
Aumentar su diversión: Tener gemas y monedas ilimitadas puede ayudarle a aumentar su diversión al permitirle probar diferentes artículos, potenciadores, decoraciones y más sin limitaciones.
-
-
-
Los testimonios de otros jugadores
-
Si quieres saber lo que otros jugadores piensan acerca de Anna Merge Adventure mod apk, aquí están algunos de sus testimonios:
-
-
-
Nombre
-
Valoración
-
Comentario
-
-
-
Alice
-
5 estrellas
-
Este juego es tan adictivo y relajante. Me encanta combinar elementos y descubrir nuevos. Los gráficos son lindos y coloridos. El apk mod es impresionante. Puedo comprar cualquier cosa que quiera con gemas y monedas ilimitadas. También me gusta que no haya anuncios. Este es el mejor juego de puzzle de la historia.
-
-
-
Bob
-
4 estrellas
-
Este juego es divertido y desafiante. Me gusta explorar la isla y ayudar a los personajes. El juego es suave y fácil de entender. El apk mod es genial. Puedo actualizar mis artículos más rápido con monedas ilimitadas. También aprecio que no hay anuncios. Este es un buen juego de puzzle.
-
-
-
Carol
-
3 estrellas
Este juego está bien pero no es increíble. Me gusta combinar elementos pero se vuelve aburrido después de un tiempo. Los gráficos son decentes, pero no impresionante. El apk mod es agradable, pero no es necesario. Realmente no necesito gemas o monedas ilimitadas. No me importan los anuncios tampoco. Este es un juego de puzzle promedio.
-
-
-
Conclusión
-
En conclusión, Anna’s Merge Adventure mod apk es un divertido y relajante juego de puzzle que combina la fusión, aventura, y elementos casuales en uno. Puedes combinar objetos y recogerlos, explorar la isla y encontrar nuevas áreas, interactuar con diferentes personajes y ayudarlos con sus misiones, decorar tu isla con varios temas y estilos, participar en eventos y desafíos, y disfrutar de gemas ilimitadas, monedas y sin anuncios. Si usted está buscando un juego que puede estimular su cerebro, mejorar su creatividad, reducir el estrés, ahorrar tiempo, aumentar su diversión, y eliminar su molestia, entonces usted debe descargar e instalar Anna’s Merge Adventure mod apk hoy.
-
Preguntas frecuentes
-
- ¿Cuál es la diferencia entre Anna’s Merge Adventure y Anna’s Merge Adventure mod apk?
-
Anna’s Merge Adventure es la versión original del juego que puedes descargar desde la Google Play Store o la App Store. Anna’s Merge Adventure mod apk es la versión modificada del juego que se puede descargar desde este enlace: [Anna’s Merge Adventure Mod Apk Download]. La versión apk mod tiene gemas ilimitadas, monedas, y no hay anuncios, mientras que la versión original no.
-
¿Es seguro usar Anna's Merge Adventure mod apk?
-
Sí, Anna's Merge Adventure mod apk es seguro de usar siempre que se descargue de una fuente de confianza y se sigan las instrucciones cuidadosamente. Sin embargo, conviene tomar algunas precauciones antes de instalar el mod apk, como hacer una copia de seguridad de tus datos, desactivar temporalmente el antivirus y usar una VPN.
-
¿Cómo puedo actualizar Anna's Merge Adventure mod apk?
-
Para actualizar Anna's Merge Adventure mod apk, es necesario descargar la última versión del archivo mod apk desde este enlace: [Anna's Merge Adventure Mod Apk Download]. A continuación, debes desinstalar la versión anterior del mod apk e instalar la nueva. También puedes buscar actualizaciones tocando el icono de configuración en la esquina superior derecha de la pantalla y seleccionando "Buscar actualizaciones".
-
¿Cómo puedo contactar con el desarrollador de Anna’s Merge Adventure mod apk?
-
Si tiene alguna pregunta, sugerencia, o retroalimentación acerca de Anna Merge Adventure mod apk, puede ponerse en contacto con el desarrollador enviando un correo electrónico a esta dirección: [annasmergeadventuremodapk@gmail.com]. También puede visitar su sitio web: [annasmergeadventuremodapk.com] o seguirlos en las redes sociales: [Facebook], [Twitter], [Instagram].
-
¿Cómo puedo apoyar al desarrollador de Anna’s Merge Adventure mod apk?
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Bike Race.md b/spaces/Benson/text-generation/Examples/Bike Race.md
deleted file mode 100644
index 5e4acfe456e8eeee5ec04fbdf1e78bf74b1d5606..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bike Race.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
Carreras de bicicletas: un deporte divertido y saludable para todos
-
Las carreras de bicicletas son una actividad física competitiva que implica montar bicicletas a altas velocidades en diferentes terrenos y cursos. Es uno de los deportes más populares y diversos del mundo, con millones de participantes y aficionados. Las carreras de bicicletas pueden ser disfrutadas por personas de todas las edades, géneros y habilidades, ya que hay muchas categorías y niveles de dificultad para satisfacer las preferencias y objetivos de todos.
En este artículo, exploraremos los diferentes tipos de carreras de bicicletas, los beneficios de las carreras de bicicletas para su salud y bienestar, y algunos consejos sobre cómo comenzar o mejorar su rendimiento en este emocionante deporte.
-
Tipos de carreras de bicicletas
-
Hay muchos tipos de carreras de bicicletas, cada una con sus propias reglas, formatos, equipos y habilidades. Estos son algunos de los más comunes:
-
Carreras de bicicletas de carretera
-
Las carreras de bicicleta de carretera son el tipo más popular y prestigioso de carreras de bicicleta. Se trata de competir en carreteras o caminos pavimentados con bicicletas de carretera que tienen neumáticos delgados, manillares curvos y cuadros ligeros. Las carreras de bicicleta de carretera pueden ser eventos de un día o eventos de varias etapas que duran varios días o semanas. Algunos ejemplos de famosas carreras de bicicleta de carretera son el Tour de Francia, el Giro de Italia y la París-Roubaix.
-
Carreras de bicicleta de montaña
-
Las carreras de bicicleta de montaña son un tipo de carreras de bicicleta que se lleva a cabo fuera de la carretera en senderos de tierra, caminos de grava u otros terrenos naturales. Las bicicletas de montaña tienen neumáticos más anchos, manillar plano o vertical, y sistemas de suspensión para absorber choques y golpes. Las carreras de bicicleta de montaña pueden ser de fondo (XC), descenso (DH), enduro (EN), o de cuatro cruces (4X), dependiendo del diseño del campo y el estilo de conducción.
-
Carreras de bicicleta de pista
-
-
Carreras de bicicletas BMX
-
Las carreras de bicicletas BMX son un tipo de carreras de bicicletas que involucran montar en una pista de tierra con saltos, bermas y obstáculos. Las bicicletas BMX tienen ruedas pequeñas, cuadros bajos y marchas individuales. Las carreras de BMX suelen ser cortas y rápidas, con ocho ciclistas compitiendo en cada calor. El ciclismo BMX es un deporte olímpico desde 2008.
-
Carreras de ciclocross
-
Ciclocross es un tipo de bicicleta de carreras que combina ciclismo de carretera y ciclismo de montaña. Se trata de montar en un campo de terreno mixto que incluye pavimento, hierba, barro, arena y obstáculos como escaleras o barreras. Las bicicletas de ciclocross tienen manillares, neumáticos y frenos de disco. Las carreras de ciclocross se suelen realizar en otoño o invierno y duran aproximadamente una hora.
-
-
Beneficios de las carreras de bicicletas
-
Las carreras de bicicletas no solo son divertidas, sino también beneficiosas para su salud física y mental. Estos son algunos de los beneficios de las carreras de bicicletas:
-
Beneficios físicos
-
-
Las carreras de bicicletas mejoran tu condición cardiovascular al fortalecer tu corazón y pulmones.
-
Las carreras de bicicletas queman calorías y grasa al aumentar tu tasa metabólica.
-
Las carreras de bicicletas desarrollan masa muscular y ósea al estimular tu sistema esquelético.
-
Las carreras de bicicletas mejoran el equilibrio y la coordinación al desafiar el sistema nervioso.
-
Las carreras de bicicletas reducen el riesgo de enfermedades crónicas como la diabetes, la hipertensión y el cáncer al regular sus niveles de azúcar en la sangre y colesterol.
-
-
-
Beneficios mentales
-
Las carreras de bicicletas aumentan tu estado de ánimo y autoestima al liberar endorfinas y serotonina.
-
Las carreras de bicicletas alivian el estrés y la ansiedad al reducir los niveles de cortisol y adrenalina.
-
Las carreras de bicicletas mejoran la función cognitiva y la memoria al aumentar el flujo sanguíneo y el oxígeno al cerebro.
-
-
Las carreras de bicicletas mejoran tu resistencia mental y disciplina al enseñarte cómo lidiar con el fracaso y la adversidad.
-
-
Beneficios sociales
-
-
Las carreras de bicicletas fomentan la interacción social y la comunicación al permitirle conocer y vincularse con otros corredores.
-
Las carreras de bicicletas desarrollan sus habilidades de trabajo en equipo y liderazgo al requerir que coopere y se coordine con sus compañeros de equipo o oponentes.
-
Las carreras de bicicletas promueven tu espíritu deportivo y el juego limpio alentándote a respetar las reglas y a los demás participantes.
-
Las carreras de bicicletas expanden tu conciencia cultural y diversidad al exponerte a diferentes personas y lugares.
-
Las carreras de bicicletas apoyan a su comunidad y causas participando en eventos de caridad o como voluntarios para organizaciones relacionadas con la bicicleta.
-
-
Beneficios ambientales
-
-
Las carreras de bicicletas reducen su huella de carbono y la contaminación mediante el uso de una fuente de energía limpia y renovable.
-
Las carreras de bicicletas conservan sus recursos naturales y la vida silvestre al minimizar su impacto en el medio ambiente.
-
Las carreras de bicicletas mejoran su conciencia y responsabilidad ambiental al educarlo sobre los problemas y las soluciones relacionados con las carreras de bicicletas.
-
Las carreras de bicicletas inspiran su activismo y defensa del medio ambiente al motivarlo a apoyar o unirse a iniciativas o movimientos favorables a las bicicletas.
-
Las carreras de bicicletas mejoran su apreciación y disfrute del medio ambiente al permitirle experimentar la belleza y diversidad de la naturaleza.
-
-
Consejos para carreras de bicicletas
-
Si estás interesado en las carreras de bicicletas, aquí tienes algunos consejos sobre cómo empezar o mejorar tu rendimiento:
-
Entrenamiento
-
-
Establezca objetivos realistas y específicos para usted basado en su nivel de condición física actual, experiencia e intereses.
-
Sigue un plan de entrenamiento estructurado y progresivo que incluye una variedad de entrenamientos, como intervalos, tempo, resistencia, recuperación, etc.
-
-
Realice un seguimiento de su progreso y ajuste su plan de capacitación en consecuencia utilizando un registro de capacitación, un diario o una plataforma en línea.
-
Busca orientación profesional o únete a un club o grupo si necesitas más apoyo, comentarios o motivación.
-
-
Nutrición
-
-
Consuma una dieta equilibrada y nutritiva que proporcione suficientes calorías, carbohidratos, proteínas, grasas, vitaminas, minerales y líquidos para sus necesidades de energía y recuperación.
-
Consuma una comida previa a la carrera que sea alta en carbohidratos, moderada en proteínas, baja en grasas y fácil de digerir al menos 2-3 horas antes de la carrera.
-
Beba mucha agua o bebidas deportivas antes, durante y después de la carrera para mantenerse hidratado y reponer sus electrolitos.
-
Refrigerio en barras energéticas, geles, masticables o frutas durante la carrera para mantener sus niveles de azúcar en la sangre y prevenir la fatiga.
-
Tener una comida post-carrera que es alta en proteínas, moderada en carbohidratos, baja en grasa, y rica en antioxidantes dentro de los 30 minutos después de la carrera para reparar los músculos y reducir la inflamación.
-
-
Equipo
-
-
Elija una bicicleta que se adapte a su tipo de carreras de bicicletas, se adapte a su tamaño y forma corporal, y cumple con su presupuesto y preferencias.
-
Mantenga su bicicleta regularmente comprobando los neumáticos, frenos, engranajes, cadenas y otros componentes para detectar cualquier desgaste o daño.
-
Use ropa adecuada que sea cómoda, transpirable y resistente a la intemperie. No olvide usar un casco, guantes y gafas de sol para protegerse.
-
Utilice accesorios que pueden mejorar su rendimiento o seguridad, como zapatos, pedales, tacos, silla de montar, manillar, computadora, luces, etc.
-
Empaque un kit de reparación que incluye un tubo de repuesto, una bomba, un kit de parches, una herramienta múltiple y algo de efectivo o tarjeta de crédito en caso de emergencia.
-
-
Seguridad
-
-
Calentar correctamente antes de la carrera haciendo algunos ejercicios ligeros de cardio y estiramiento para preparar los músculos y las articulaciones.
-
-
Evite el entrenamiento excesivo o insuficiente escuchando su cuerpo y descansando cuando sea necesario.
-
Prevenga lesiones o enfermedades usando equipo adecuado, siguiendo las reglas, manteniéndose alerta y buscando atención médica si es necesario.
-
Respete el medio ambiente siguiendo los principios de no dejar rastro, como la eliminación de sus residuos correctamente, minimizando su impacto y dejando lo que encuentra.
-
-
Técnica
-
-
Mejore su eficiencia de pedaleo mediante el uso de una cadencia suave y consistente, aplicando una presión uniforme en ambos pedales y cambiando de marcha adecuadamente.
-
Mejore su eficiencia de frenado utilizando ambos frenos simultáneamente, aplicando presión gradual y controlada, y evitando patinar o bloquear sus ruedas.
-
Mejore su eficiencia en las curvas inclinando su bicicleta y cuerpo en la curva, mirando hacia adelante a donde desea ir y saliendo de la curva con velocidad y equilibrio.
-
Mejore su eficiencia de escalada cambiando a una velocidad más baja, de pie o sentado dependiendo del gradiente, y manteniendo un ritmo constante y la respiración.
-
Mejore su eficiencia descendente cambiando a una velocidad más alta, bajando su centro de gravedad y usando sus frenos con moderación y sin problemas.
-
-
Conclusión
-
Las carreras de bicicletas son un deporte divertido y saludable que puede ofrecerle muchos beneficios para su bienestar físico, mental, social y ambiental. También puede desafiarle a mejorar sus habilidades y rendimiento en varios tipos de carreras de bicicletas, como carretera, montaña, pista, BMX o ciclocross. Si usted es un principiante o un experto, las carreras de bicicletas pueden ser una experiencia gratificante y agradable para usted.
-
Si estás interesado en las carreras de bicicleta, esperamos que este artículo te haya dado información útil y consejos sobre cómo empezar o mejorar tu rendimiento. Recuerda entrenar inteligentemente, comer saludablemente, equiparte apropiadamente, mantenerte seguro y divertirte. ¡Feliz carrera de bicicletas!
-
-
Preguntas frecuentes
-
¿Cuáles son las mejores bicicletas para las carreras de bicicletas?
-
Las mejores bicicletas para las carreras de bicicletas dependen del tipo de carreras de bicicletas que quieras hacer. Para las carreras de bicicleta de carretera, necesita una bicicleta de carretera que sea ligera, aerodinámica y rápida. Para las carreras de bicicleta de montaña, necesita una bicicleta de montaña que sea resistente, estable y versátil. Para las carreras de bicicleta de pista, necesita una bicicleta de pista que sea simple, rígida y ágil. Para las carreras de BMX, necesitas una bicicleta BMX pequeña, duradera y maniobrable. Para las carreras de ciclocross, necesitas una bicicleta ciclocross que sea similar a una bicicleta de carretera pero con neumáticos más anchos, marchas más bajas y mejores frenos.
-
¿Cómo entreno para las carreras de bicicletas?
-
Para entrenar en las carreras de bicicletas, necesitas seguir un plan de entrenamiento estructurado y progresivo que incluya una variedad de entrenamientos, como intervalos, tempo, resistencia, recuperación, etc. También necesitas controlar la intensidad de tu entrenamiento, duración, frecuencia y recuperación utilizando un monitor de frecuencia cardíaca, un medidor de potencia, un dispositivo GPS o una aplicación. También debe realizar un seguimiento de su progreso y ajustar su plan de capacitación en consecuencia utilizando un registro de capacitación, un diario o una plataforma en línea. También necesitas buscar orientación profesional o unirte a un club o grupo si necesitas más apoyo, comentarios o motivación.
-
¿Qué debo comer antes, durante y después de una carrera de bicicletas?
-
Antes de una carrera en bicicleta, debe comer una comida previa a la carrera que sea alta en carbohidratos, moderada en proteínas, baja en grasas y fácil de digerir al menos 2-3 horas antes de la carrera. Durante una carrera en bicicleta, debes comer barritas energéticas, geles, masticables o frutas para mantener tus niveles de azúcar en la sangre y prevenir la fatiga. También debe beber mucha agua o bebidas deportivas para mantenerse hidratado y reponer sus electrolitos. Después de una carrera en bicicleta, debe tener una comida post-carrera que es alta en proteínas, moderada en carbohidratos, baja en grasa y rica en antioxidantes dentro de los 30 minutos después de la carrera para reparar sus músculos y reducir la inflamación.
-
-
Para prevenir lesiones o enfermedades de las carreras de bicicletas, es necesario calentar correctamente antes de la carrera haciendo algunos ejercicios de cardio y estiramiento ligeros para preparar los músculos y las articulaciones. También necesitas refrescarte adecuadamente después de la carrera haciendo ejercicios de cardio y estiramiento suaves para relajar tus músculos y articulaciones. También es necesario evitar el entrenamiento excesivo o insuficiente al escuchar a su cuerpo y descansar cuando sea necesario. También necesita usar el equipo adecuado, seguir las reglas, mantenerse alerta y buscar atención médica si es necesario.
-
¿Cómo respeto el medio ambiente cuando corro en bicicleta?
-
Para respetar el medio ambiente en las carreras de bicicletas, es necesario seguir los principios de no dejar rastro, tales como eliminar sus residuos correctamente, minimizar su impacto y dejar lo que encuentre. También necesita conservar los recursos naturales y la vida silvestre evitando áreas sensibles, permaneciendo en senderos designados y no perturbando ni dañando plantas o animales. También necesita mejorar su conciencia y responsabilidad ambiental educándose a sí mismo y a otros sobre los problemas y soluciones relacionados con las carreras de bicicletas. También necesita apoyar a su comunidad y sus causas participando en eventos benéficos o como voluntario en organizaciones relacionadas con la bicicleta. También necesita inspirar su activismo y defensa del medio ambiente apoyando o uniéndose a iniciativas o movimientos amigables con la bicicleta. Por último, necesita mejorar su apreciación y disfrute del entorno experimentando la belleza y diversidad de la naturaleza.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md b/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md
deleted file mode 100644
index 0c89bd2c68bcd99ecb75fe717a9c8bb649dfc66b..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Descargar Clash Royale Lite APK: cómo jugar el popular juego de estrategia en dispositivos de gama baja
-
Clash Royale es uno de los juegos móviles más populares del mundo, con millones de jugadores disfrutando de su juego rápido y adictivo. Sin embargo, no todo el mundo tiene un dispositivo de alta gama que puede ejecutar el juego sin problemas y sin retraso. Si usted es una de esas personas que aman Clash Royale pero tienen un dispositivo de gama baja, no se preocupe. Hay una solución para usted: Clash Royale Lite.
-
Clash Royale Lite es una versión modificada de Clash Royale que está diseñada para funcionar en dispositivos de gama baja con menos RAM y espacio de almacenamiento. Tiene todas las características y la diversión del juego original, pero con gráficos reducidos y tamaño de archivo. En este artículo, te mostraremos cómo descargar e instalar Clash Royale Lite en tu dispositivo Android, cómo jugar y disfrutar del juego, y cómo evitar posibles riesgos o problemas. ¡Vamos a empezar!
Una breve introducción a Clash Royale y sus características
-
Clash Royale es un juego de estrategia en tiempo real desarrollado por Supercell, los creadores de Clash of Clans, Brawl Stars, Hay Day y más. Fue lanzado en 2016 y desde entonces se ha convertido en uno de los juegos móviles más exitosos de la historia. En Clash Royale, coleccionas y mejoras cartas con personajes, hechizos y edificios del universo Clash. Utiliza estas cartas para construir tu mazo de batalla y luchar contra otros jugadores en línea en duelos de ritmo rápido. El objetivo es destruir las torres de tu oponente mientras defiendes las tuyas. También puedes unirte o crear clanes, participar en torneos, eventos, desafíos y más.
-
La diferencia entre Clash Royale y Clash Royale Lite
-
Clash Royale Lite es una versión modificada de Clash Royale que está optimizada para dispositivos de gama baja. Tiene la misma jugabilidad y características que el juego original, pero con algunas diferencias:
-
-
Los gráficos son de menor calidad y menos detallados.
-
-
El tiempo de carga es más rápido.
-
El rendimiento es más suave y estable.
-
El consumo de batería es menor.
-
-
Estas diferencias hacen Clash Royale Lite más accesible y agradable para los jugadores que tienen dispositivos de gama baja o conexión a Internet limitada.
-
Los beneficios de jugar Clash Royale Lite
-
Jugar a Clash Royale Lite tiene varios beneficios para los jugadores que aman el juego pero tienen dispositivos de gama baja. Algunos de estos beneficios son:
-
-
Puede jugar el juego sin retraso o se bloquea.
-
Puede ahorrar espacio de almacenamiento en su dispositivo.
-
Puede guardar datos en su dispositivo.
-
Puedes jugar el juego incluso con una conexión a Internet débil o inestable.
-
Puedes disfrutar de las mismas características y contenido que el juego original.
-
-
Jugar a Clash Royale Lite no significa que te estés perdiendo nada. Todavía puedes divertirte y competir con otros jugadores de todo el mundo.
-
-
Cómo descargar e instalar Clash Royale Lite en su dispositivo Android
-
Los requisitos y la compatibilidad de Clash Royale Lite
-
Clash Royale Lite es compatible con la mayoría de los dispositivos Android que tienen al menos 1 GB de RAM y Android 4.4 o superior. Sin embargo, algunos dispositivos pueden no ser capaces de ejecutar el juego correctamente debido a limitaciones de hardware o problemas de software. Para comprobar si su dispositivo es compatible, puede visitar el sitio web oficial de Clash Royale Lite y ver la lista de dispositivos compatibles. También puede ponerse en contacto con los desarrolladores si tiene alguna pregunta o problema con respecto a la compatibilidad de su dispositivo.
-
Los pasos para descargar e instalar Clash Royale Lite desde una fuente de confianza
-
Clash Royale Lite no está disponible en Google Play Store, por lo que tendrá que descargar e instalar desde una fuente de confianza. Estos son los pasos para hacerlo:
-
-
-
Antes de descargar el archivo APK, asegúrese de haber habilitado la opción de instalar aplicaciones de fuentes desconocidas en su dispositivo. Para hacerlo, vaya a Configuración > Seguridad > Fuentes desconocidas y active la opción.
-
Una vez que haya descargado el archivo APK, localizarlo en su dispositivo y toque en él para iniciar el proceso de instalación. Siga las instrucciones de la pantalla y espere a que termine la instalación.
-
Después de que la instalación se haya completado, puede iniciar Clash Royale Lite desde el cajón de la aplicación o la pantalla de inicio y disfrutar del juego.
-
-
Los consejos para evitar malware y virus al descargar archivos APK
-
Descargar archivos APK de fuentes desconocidas puede ser arriesgado, ya que pueden contener malware o virus que pueden dañar su dispositivo o robar su información personal. Para evitar esto, debes seguir estos consejos:
-
-
Solo descargar archivos APK de fuentes confiables y verificadas, como el sitio web oficial de Clash Royale Lite u otros sitios web de renombre.
-
Escanear el archivo APK con una aplicación antivirus antes de instalarlo en su dispositivo.
-
No haga clic en ningún enlace sospechoso o ventanas emergentes que puedan aparecer durante el proceso de descarga o instalación.
-
No conceda ningún permiso innecesario o acceso a la aplicación que pueda comprometer su privacidad o seguridad.
-
-
Siguiendo estos consejos, puede asegurarse de que está descargando e instalando Clash Royale Lite de forma segura.
-
Cómo jugar y disfrutar de Clash Royale Lite
-
El juego básico y las reglas de Clash Royale Lite
-
-
Las mejores estrategias y consejos para ganar batallas en Clash Royale Lite
-
Para ganar batallas en Clash Royale Lite, necesitas tener una buena estrategia y algunos consejos en mente. Estos son algunos de ellos:
-
-
Elige un mazo de batalla equilibrado que se adapte a tu estilo de juego y tenga cartas que puedan contrarrestar diferentes tipos de amenazas. Por ejemplo, debes tener algunas tarjetas que puedan causar daño por salpicaduras, algunas tarjetas que puedan apuntar a unidades de aire, algunas tarjetas que puedan dañar tanques, etc.
-
Conozca las fortalezas y debilidades de cada tarjeta y cómo usarlas de manera efectiva. Por ejemplo, debes saber cuándo usar hechizos, cuándo colocar edificios, cuándo empujar o defender, etc.
-
Preste atención a su manejo de elixir y no lo desperdicie en movimientos innecesarios. Trata de obtener una ventaja de elixir sobre tu oponente haciendo operaciones de elixir positivas (usando menos elixir que tu oponente para tratar con sus cartas).
-
Analiza la baraja de batalla y la estrategia de tu oponente y adáptala en consecuencia. Intenta predecir sus movimientos y contrarrestarlos con jugadas inteligentes.
-
Usa combos y sinergias entre tus cartas para crear poderosos ataques o defensas.
-
Sea consciente de la hora y la puntuación y ajustar su estrategia en consecuencia. Por ejemplo, puede que quieras jugar de forma más agresiva o defensiva dependiendo de la situación.
-
Diviértete y disfruta del juego. No te frustres ni te enojes si pierdes. Aprende de tus errores y mejora tus habilidades.
-
-
Las formas de recoger y actualizar tarjetas, unirse a clanes, y participar en eventos en Clash Royale Lite
-
Clash Royale Lite tiene las mismas formas de recoger y actualizar tarjetas, unirse a clanes, y participar en eventos como Clash Royale. Puedes hacer lo siguiente:
-
-
Recoge cartas abriendo los cofres que obtienes al ganar batallas, completar misiones o comprar en la tienda. También puedes solicitar o donar tarjetas a los miembros de tu clan.
-
-
Únete a los clanes buscando uno o creando el tuyo. Los clanes son grupos de jugadores que pueden chatear, compartir cartas y jugar juntos. También puedes participar en guerras de clanes, que son competiciones por equipos que te recompensan con cofres y medallas.
-
Participe en eventos introduciéndolos desde la pestaña de eventos. Los eventos son modos especiales o desafíos que tienen diferentes reglas y recompensas. Puedes jugar eventos gratis o con gemas, dependiendo del evento.
-
-
Al hacer estas cosas, puedes mejorar tu experiencia de juego y divertirte más con Clash Royale Lite.
-
Conclusión
-
Un resumen de los puntos principales del artículo
-
En conclusión, Clash Royale Lite es una gran alternativa para los jugadores que aman Clash Royale pero tienen dispositivos de gama baja. Tiene todas las características y la diversión del juego original, pero con gráficos reducidos y tamaño de archivo. Es fácil de descargar e instalar desde una fuente de confianza, y es seguro jugar si sigues algunos consejos. También tiene la misma jugabilidad y reglas que Clash Royale, pero con algunos consejos y estrategias para ayudarte a ganar batallas. También puedes recoger y actualizar cartas, unirte a clanes y participar en eventos en Clash Royale Lite.
-
Un llamado a la acción para que los lectores prueben Clash Royale Lite
-
Si estás buscando una manera de jugar a Clash Royale en tu dispositivo de gama baja sin problemas, definitivamente deberías probar Clash Royale Lite. Es un juego divertido y emocionante que te mantendrá entretenido durante horas. Puedes descargarlo desde el sitio web oficial de Clash Royale Lite o desde otros sitios web de confianza. También puedes compartirlo con tus amigos que tienen dispositivos de gama baja y jugar juntos. ¿Qué estás esperando? ¡Descarga Clash Royale Lite hoy y disfruta del juego!
-
Preguntas frecuentes
-
¿Es seguro descargar y jugar Clash Royale Lite?
-
-
¿Puedo jugar Clash Royale Lite con mis amigos que tienen Clash Royale?
-
No, Clash Royale Lite no es compatible con Clash Royale, por lo que no puedes jugar con tus amigos que tienen Clash Royale. Sin embargo, puedes jugar con tus amigos que tienen Clash Royale Lite agregándolos como amigos en el juego o uniéndote al mismo clan que ellos.
-
¿Cuánto espacio de almacenamiento ocupa Clash Royale Lite en mi dispositivo?
-
Clash Royale Lite ocupa alrededor de 150 MB de espacio de almacenamiento en su dispositivo, en comparación con 445 MB para Clash Royale. Esto significa que puede ahorrar mucho espacio de almacenamiento en su dispositivo jugando Clash Royale Lite en lugar de Clash Royale.
-
¿Con qué frecuencia se actualiza Clash Royale Lite con nuevas características y contenido?
-
Clash Royale Lite se actualiza regularmente con nuevas características y contenido, al igual que Clash Royale. Puede esperar ver nuevas tarjetas, arenas, modos, eventos, cambios de equilibrio, correcciones de errores y más en cada actualización. Puede consultar el sitio web oficial de Clash Royale Lite o seguir sus cuentas de redes sociales para mantenerse al día sobre las últimas noticias y actualizaciones.
-
¿Cuáles son algunos otros juegos como Clash Royale que puedo jugar en mi dispositivo?
-
Si te gusta jugar juegos de estrategia como Clash Royale, es posible que también te guste jugar otros juegos similares o relacionados con él. Algunos de estos juegos son:
-
-
Brawl Stars: Un juego de disparos multijugador de ritmo rápido donde se puede elegir entre diferentes personajes con diferentes habilidades y modos de juego.
-
Choque de clanes: un juego de estrategia donde puedes construir tu propia aldea, entrenar a tus tropas y asaltar las bases de otros jugadores.
-
Boom Beach: un juego de estrategia donde puedes explorar un archipiélago tropical, luchar contra el malvado Blackguard y formar equipo con otros jugadores.
-
plantas vs zombies 2: un juego de torre de defensa donde se puede utilizar las plantas para defender el cerebro de los zombies.
-
-
-
Estos son algunos de los juegos que puedes jugar en tu dispositivo si te gusta Clash Royale. Puedes encontrarlos en Google Play Store u otras fuentes.
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md b/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md
deleted file mode 100644
index 2d22e25bea89fdbccdaa2809fbeb83e0a7cfaa07..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# Model Card: CLIP
-
-Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
-
-## Model Details
-
-The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
-
-### Model Date
-
-January 2021
-
-### Model Type
-
-The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
-
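-As an illustrative sketch only (this usage example is not part of the original model card), the open-source `clip` Python package released with the CLIP repository can be used for zero-shot classification roughly as follows; the image path and the two candidate captions are placeholders:
-
-```python
-import torch
-import clip
-from PIL import Image
-
-# Load the ViT-B/32 variant together with its matching image preprocessing.
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = clip.load("ViT-B/32", device=device)
-
-# Encode one image and a set of candidate captions, then compare them.
-image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
-text = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)
-
-with torch.no_grad():
-    logits_per_image, logits_per_text = model(image, text)
-    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
-
-print(probs)  # shape (1, 2): probability assigned to each caption
-```
-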
-### Model Versions
-
-Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.
-
-As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models.
-
-Please see the paper linked below for further details about their specification.
-
-### Documents
-
-- [Blog Post](https://openai.com/blog/clip/)
-- [CLIP Paper](https://arxiv.org/abs/2103.00020)
-
-
-
-## Model Use
-
-### Intended Use
-
-The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
-
-#### Primary intended uses
-
-The primary intended users of these models are AI researchers.
-
-We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
-
-### Out-of-Scope Use Cases
-
-**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
-
-Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of the performance of the model. This is because the use of artificial intelligence for such tasks is currently premature, given the lack of testing norms and checks to ensure its fair use.
-
-Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
-
-
-
-## Data
-
-The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
-
-### Data Mission Statement
-
-Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
-
-
-
-## Performance and Limitations
-
-### Performance
-
-We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, ranging from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
-
-- Food101
-- CIFAR10
-- CIFAR100
-- Birdsnap
-- SUN397
-- Stanford Cars
-- FGVC Aircraft
-- VOC2007
-- DTD
-- Oxford-IIIT Pet dataset
-- Caltech101
-- Flowers102
-- MNIST
-- SVHN
-- IIIT5K
-- Hateful Memes
-- SST-2
-- UCF101
-- Kinetics700
-- Country211
-- CLEVR Counting
-- KITTI Distance
-- STL-10
-- RareAct
-- Flickr30
-- MSCOCO
-- ImageNet
-- ImageNet-A
-- ImageNet-R
-- ImageNet Sketch
-- ObjectNet (ImageNet Overlap)
-- Youtube-BB
-- ImageNet-Vid
-
-## Limitations
-
-CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
-
-### Bias and Fairness
-
-We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
-
-We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to using race categories as they are constructed in the Fairface dataset) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of these evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate the performance of the model across people and surface potential risks, not to demonstrate endorsement of or enthusiasm for such tasks.
-
-
-
-## Feedback
-
-### Where to send questions or comments about the model
-
-Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
diff --git a/spaces/BiTransSciencia/www/index.html b/spaces/BiTransSciencia/www/index.html
deleted file mode 100644
index 5619f092a9505f8f6e097d6f7be5bdeb435c402a..0000000000000000000000000000000000000000
--- a/spaces/BiTransSciencia/www/index.html
+++ /dev/null
@@ -1,753 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
- BiTransSciencia [081]
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
'BiTransSciencia [081]' Evolution Tree (E.T.): __0_0_0__
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py
deleted file mode 100644
index 2490d5e5b63359a7f826922dc69c0015cb9a5b2e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py
+++ /dev/null
@@ -1,620 +0,0 @@
-from __future__ import absolute_import
-
-import email
-import logging
-import re
-import time
-import warnings
-from collections import namedtuple
-from itertools import takewhile
-
-from ..exceptions import (
- ConnectTimeoutError,
- InvalidHeader,
- MaxRetryError,
- ProtocolError,
- ProxyError,
- ReadTimeoutError,
- ResponseError,
-)
-from ..packages import six
-
-log = logging.getLogger(__name__)
-
-
-# Data structure for representing the metadata of requests that result in a retry.
-RequestHistory = namedtuple(
- "RequestHistory", ["method", "url", "error", "status", "redirect_location"]
-)
-
-
-# TODO: In v2 we can remove this sentinel and metaclass with deprecated options.
-_Default = object()
-
-
-class _RetryMeta(type):
- @property
- def DEFAULT_METHOD_WHITELIST(cls):
- warnings.warn(
- "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead",
- DeprecationWarning,
- )
- return cls.DEFAULT_ALLOWED_METHODS
-
- @DEFAULT_METHOD_WHITELIST.setter
- def DEFAULT_METHOD_WHITELIST(cls, value):
- warnings.warn(
- "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead",
- DeprecationWarning,
- )
- cls.DEFAULT_ALLOWED_METHODS = value
-
- @property
- def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls):
- warnings.warn(
- "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead",
- DeprecationWarning,
- )
- return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT
-
- @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter
- def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value):
- warnings.warn(
- "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead",
- DeprecationWarning,
- )
- cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value
-
- @property
- def BACKOFF_MAX(cls):
- warnings.warn(
- "Using 'Retry.BACKOFF_MAX' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead",
- DeprecationWarning,
- )
- return cls.DEFAULT_BACKOFF_MAX
-
- @BACKOFF_MAX.setter
- def BACKOFF_MAX(cls, value):
- warnings.warn(
- "Using 'Retry.BACKOFF_MAX' is deprecated and "
- "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead",
- DeprecationWarning,
- )
- cls.DEFAULT_BACKOFF_MAX = value
-
-
-@six.add_metaclass(_RetryMeta)
-class Retry(object):
- """Retry configuration.
-
- Each retry attempt will create a new Retry object with updated values, so
- they can be safely reused.
-
- Retries can be defined as a default for a pool::
-
- retries = Retry(connect=5, read=2, redirect=5)
- http = PoolManager(retries=retries)
- response = http.request('GET', 'http://example.com/')
-
- Or per-request (which overrides the default for the pool)::
-
- response = http.request('GET', 'http://example.com/', retries=Retry(10))
-
- Retries can be disabled by passing ``False``::
-
- response = http.request('GET', 'http://example.com/', retries=False)
-
- Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless
- retries are disabled, in which case the causing exception will be raised.
-
- :param int total:
- Total number of retries to allow. Takes precedence over other counts.
-
- Set to ``None`` to remove this constraint and fall back on other
- counts.
-
- Set to ``0`` to fail on the first retry.
-
- Set to ``False`` to disable and imply ``raise_on_redirect=False``.
-
- :param int connect:
- How many connection-related errors to retry on.
-
- These are errors raised before the request is sent to the remote server,
- which we assume has not triggered the server to process the request.
-
- Set to ``0`` to fail on the first retry of this type.
-
- :param int read:
- How many times to retry on read errors.
-
- These errors are raised after the request was sent to the server, so the
- request may have side-effects.
-
- Set to ``0`` to fail on the first retry of this type.
-
- :param int redirect:
- How many redirects to perform. Limit this to avoid infinite redirect
- loops.
-
-        A redirect is an HTTP response with a status code 301, 302, 303, 307 or
- 308.
-
- Set to ``0`` to fail on the first retry of this type.
-
- Set to ``False`` to disable and imply ``raise_on_redirect=False``.
-
- :param int status:
- How many times to retry on bad status codes.
-
- These are retries made on responses, where status code matches
- ``status_forcelist``.
-
- Set to ``0`` to fail on the first retry of this type.
-
- :param int other:
- How many times to retry on other errors.
-
- Other errors are errors that are not connect, read, redirect or status errors.
- These errors might be raised after the request was sent to the server, so the
- request might have side-effects.
-
- Set to ``0`` to fail on the first retry of this type.
-
- If ``total`` is not set, it's a good idea to set this to 0 to account
- for unexpected edge cases and avoid infinite retry loops.
-
- :param iterable allowed_methods:
- Set of uppercased HTTP method verbs that we should retry on.
-
- By default, we only retry on methods which are considered to be
- idempotent (multiple requests with the same parameters end with the
- same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`.
-
- Set to a ``False`` value to retry on any verb.
-
- .. warning::
-
- Previously this parameter was named ``method_whitelist``, that
- usage is deprecated in v1.26.0 and will be removed in v2.0.
-
- :param iterable status_forcelist:
- A set of integer HTTP status codes that we should force a retry on.
- A retry is initiated if the request method is in ``allowed_methods``
- and the response status code is in ``status_forcelist``.
-
- By default, this is disabled with ``None``.
-
- :param float backoff_factor:
- A backoff factor to apply between attempts after the second try
- (most errors are resolved immediately by a second try without a
- delay). urllib3 will sleep for::
-
- {backoff factor} * (2 ** ({number of total retries} - 1))
-
- seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep
- for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer
- than :attr:`Retry.DEFAULT_BACKOFF_MAX`.
-
- By default, backoff is disabled (set to 0).
-
- :param bool raise_on_redirect: Whether, if the number of redirects is
- exhausted, to raise a MaxRetryError, or to return a response with a
- response code in the 3xx range.
-
- :param bool raise_on_status: Similar meaning to ``raise_on_redirect``:
- whether we should raise an exception, or return a response,
- if status falls in ``status_forcelist`` range and retries have
- been exhausted.
-
- :param tuple history: The history of the request encountered during
- each call to :meth:`~Retry.increment`. The list is in the order
- the requests occurred. Each list item is of class :class:`RequestHistory`.
-
- :param bool respect_retry_after_header:
- Whether to respect Retry-After header on status codes defined as
- :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not.
-
- :param iterable remove_headers_on_redirect:
- Sequence of headers to remove from the request when a response
- indicating a redirect is returned before firing off the redirected
- request.
- """
-
- #: Default methods to be used for ``allowed_methods``
- DEFAULT_ALLOWED_METHODS = frozenset(
- ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"]
- )
-
- #: Default status codes to be used for ``status_forcelist``
- RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503])
-
- #: Default headers to be used for ``remove_headers_on_redirect``
- DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"])
-
- #: Maximum backoff time.
- DEFAULT_BACKOFF_MAX = 120
-
- def __init__(
- self,
- total=10,
- connect=None,
- read=None,
- redirect=None,
- status=None,
- other=None,
- allowed_methods=_Default,
- status_forcelist=None,
- backoff_factor=0,
- raise_on_redirect=True,
- raise_on_status=True,
- history=None,
- respect_retry_after_header=True,
- remove_headers_on_redirect=_Default,
- # TODO: Deprecated, remove in v2.0
- method_whitelist=_Default,
- ):
-
- if method_whitelist is not _Default:
- if allowed_methods is not _Default:
- raise ValueError(
- "Using both 'allowed_methods' and "
- "'method_whitelist' together is not allowed. "
- "Instead only use 'allowed_methods'"
- )
- warnings.warn(
- "Using 'method_whitelist' with Retry is deprecated and "
- "will be removed in v2.0. Use 'allowed_methods' instead",
- DeprecationWarning,
- stacklevel=2,
- )
- allowed_methods = method_whitelist
- if allowed_methods is _Default:
- allowed_methods = self.DEFAULT_ALLOWED_METHODS
- if remove_headers_on_redirect is _Default:
- remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT
-
- self.total = total
- self.connect = connect
- self.read = read
- self.status = status
- self.other = other
-
- if redirect is False or total is False:
- redirect = 0
- raise_on_redirect = False
-
- self.redirect = redirect
- self.status_forcelist = status_forcelist or set()
- self.allowed_methods = allowed_methods
- self.backoff_factor = backoff_factor
- self.raise_on_redirect = raise_on_redirect
- self.raise_on_status = raise_on_status
- self.history = history or tuple()
- self.respect_retry_after_header = respect_retry_after_header
- self.remove_headers_on_redirect = frozenset(
- [h.lower() for h in remove_headers_on_redirect]
- )
-
- def new(self, **kw):
- params = dict(
- total=self.total,
- connect=self.connect,
- read=self.read,
- redirect=self.redirect,
- status=self.status,
- other=self.other,
- status_forcelist=self.status_forcelist,
- backoff_factor=self.backoff_factor,
- raise_on_redirect=self.raise_on_redirect,
- raise_on_status=self.raise_on_status,
- history=self.history,
- remove_headers_on_redirect=self.remove_headers_on_redirect,
- respect_retry_after_header=self.respect_retry_after_header,
- )
-
- # TODO: If already given in **kw we use what's given to us
- # If not given we need to figure out what to pass. We decide
- # based on whether our class has the 'method_whitelist' property
- # and if so we pass the deprecated 'method_whitelist' otherwise
- # we use 'allowed_methods'. Remove in v2.0
- if "method_whitelist" not in kw and "allowed_methods" not in kw:
- if "method_whitelist" in self.__dict__:
- warnings.warn(
- "Using 'method_whitelist' with Retry is deprecated and "
- "will be removed in v2.0. Use 'allowed_methods' instead",
- DeprecationWarning,
- )
- params["method_whitelist"] = self.allowed_methods
- else:
- params["allowed_methods"] = self.allowed_methods
-
- params.update(kw)
- return type(self)(**params)
-
- @classmethod
- def from_int(cls, retries, redirect=True, default=None):
- """Backwards-compatibility for the old retries format."""
- if retries is None:
- retries = default if default is not None else cls.DEFAULT
-
- if isinstance(retries, Retry):
- return retries
-
- redirect = bool(redirect) and None
- new_retries = cls(retries, redirect=redirect)
- log.debug("Converted retries value: %r -> %r", retries, new_retries)
- return new_retries
-
- def get_backoff_time(self):
- """Formula for computing the current backoff
-
- :rtype: float
- """
- # We want to consider only the last consecutive errors sequence (Ignore redirects).
- consecutive_errors_len = len(
- list(
- takewhile(lambda x: x.redirect_location is None, reversed(self.history))
- )
- )
- if consecutive_errors_len <= 1:
- return 0
-
- backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1))
- return min(self.DEFAULT_BACKOFF_MAX, backoff_value)
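-
-    # Illustrative note (not part of upstream urllib3): with
-    # backoff_factor=0.5 and every attempt failing, successive calls to
-    # get_backoff_time() return 0, 1.0, 2.0, 4.0, ... seconds, always
-    # capped at DEFAULT_BACKOFF_MAX (120 seconds).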
-
- def parse_retry_after(self, retry_after):
- # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4
- if re.match(r"^\s*[0-9]+\s*$", retry_after):
- seconds = int(retry_after)
- else:
- retry_date_tuple = email.utils.parsedate_tz(retry_after)
- if retry_date_tuple is None:
- raise InvalidHeader("Invalid Retry-After header: %s" % retry_after)
- if retry_date_tuple[9] is None: # Python 2
- # Assume UTC if no timezone was specified
- # On Python2.7, parsedate_tz returns None for a timezone offset
- # instead of 0 if no timezone is given, where mktime_tz treats
- # a None timezone offset as local time.
- retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:]
-
- retry_date = email.utils.mktime_tz(retry_date_tuple)
- seconds = retry_date - time.time()
-
- if seconds < 0:
- seconds = 0
-
- return seconds
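-
-    # Illustrative note (not part of upstream urllib3): parse_retry_after
-    # accepts both forms of the Retry-After header. For example,
-    # parse_retry_after("120") returns 120, while an HTTP-date such as
-    # "Fri, 31 Dec 1999 23:59:59 GMT" is converted to the number of
-    # seconds remaining until that date, clamped to a minimum of 0.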
-
- def get_retry_after(self, response):
- """Get the value of Retry-After in seconds."""
-
- retry_after = response.headers.get("Retry-After")
-
- if retry_after is None:
- return None
-
- return self.parse_retry_after(retry_after)
-
- def sleep_for_retry(self, response=None):
- retry_after = self.get_retry_after(response)
- if retry_after:
- time.sleep(retry_after)
- return True
-
- return False
-
- def _sleep_backoff(self):
- backoff = self.get_backoff_time()
- if backoff <= 0:
- return
- time.sleep(backoff)
-
- def sleep(self, response=None):
- """Sleep between retry attempts.
-
- This method will respect a server's ``Retry-After`` response header
- and sleep the duration of the time requested. If that is not present, it
- will use an exponential backoff. By default, the backoff factor is 0 and
- this method will return immediately.
- """
-
- if self.respect_retry_after_header and response:
- slept = self.sleep_for_retry(response)
- if slept:
- return
-
- self._sleep_backoff()
-
- def _is_connection_error(self, err):
- """Errors when we're fairly sure that the server did not receive the
- request, so it should be safe to retry.
- """
- if isinstance(err, ProxyError):
- err = err.original_error
- return isinstance(err, ConnectTimeoutError)
-
- def _is_read_error(self, err):
- """Errors that occur after the request has been started, so we should
- assume that the server began processing it.
- """
- return isinstance(err, (ReadTimeoutError, ProtocolError))
-
- def _is_method_retryable(self, method):
- """Checks if a given HTTP method should be retried upon, depending if
- it is included in the allowed_methods
- """
- # TODO: For now favor if the Retry implementation sets its own method_whitelist
- # property outside of our constructor to avoid breaking custom implementations.
- if "method_whitelist" in self.__dict__:
- warnings.warn(
- "Using 'method_whitelist' with Retry is deprecated and "
- "will be removed in v2.0. Use 'allowed_methods' instead",
- DeprecationWarning,
- )
- allowed_methods = self.method_whitelist
- else:
- allowed_methods = self.allowed_methods
-
- if allowed_methods and method.upper() not in allowed_methods:
- return False
- return True
-
- def is_retry(self, method, status_code, has_retry_after=False):
- """Is this method/status code retryable? (Based on allowlists and control
- variables such as the number of total retries to allow, whether to
- respect the Retry-After header, whether this header is present, and
- whether the returned status code is on the list of status codes to
- be retried upon on the presence of the aforementioned header)
- """
- if not self._is_method_retryable(method):
- return False
-
- if self.status_forcelist and status_code in self.status_forcelist:
- return True
-
- return (
- self.total
- and self.respect_retry_after_header
- and has_retry_after
- and (status_code in self.RETRY_AFTER_STATUS_CODES)
- )
-
- def is_exhausted(self):
- """Are we out of retries?"""
- retry_counts = (
- self.total,
- self.connect,
- self.read,
- self.redirect,
- self.status,
- self.other,
- )
- retry_counts = list(filter(None, retry_counts))
- if not retry_counts:
- return False
-
- return min(retry_counts) < 0
-
- def increment(
- self,
- method=None,
- url=None,
- response=None,
- error=None,
- _pool=None,
- _stacktrace=None,
- ):
- """Return a new Retry object with incremented retry counters.
-
- :param response: A response object, or None, if the server did not
- return a response.
- :type response: :class:`~urllib3.response.HTTPResponse`
- :param Exception error: An error encountered during the request, or
- None if the response was received successfully.
-
- :return: A new ``Retry`` object.
- """
- if self.total is False and error:
- # Disabled, indicate to re-raise the error.
- raise six.reraise(type(error), error, _stacktrace)
-
- total = self.total
- if total is not None:
- total -= 1
-
- connect = self.connect
- read = self.read
- redirect = self.redirect
- status_count = self.status
- other = self.other
- cause = "unknown"
- status = None
- redirect_location = None
-
- if error and self._is_connection_error(error):
- # Connect retry?
- if connect is False:
- raise six.reraise(type(error), error, _stacktrace)
- elif connect is not None:
- connect -= 1
-
- elif error and self._is_read_error(error):
- # Read retry?
- if read is False or not self._is_method_retryable(method):
- raise six.reraise(type(error), error, _stacktrace)
- elif read is not None:
- read -= 1
-
- elif error:
- # Other retry?
- if other is not None:
- other -= 1
-
- elif response and response.get_redirect_location():
- # Redirect retry?
- if redirect is not None:
- redirect -= 1
- cause = "too many redirects"
- redirect_location = response.get_redirect_location()
- status = response.status
-
- else:
- # Incrementing because of a server error like a 500 in
- # status_forcelist and the given method is in the allowed_methods
- cause = ResponseError.GENERIC_ERROR
- if response and response.status:
- if status_count is not None:
- status_count -= 1
- cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
- status = response.status
-
- history = self.history + (
- RequestHistory(method, url, error, status, redirect_location),
- )
-
- new_retry = self.new(
- total=total,
- connect=connect,
- read=read,
- redirect=redirect,
- status=status_count,
- other=other,
- history=history,
- )
-
- if new_retry.is_exhausted():
- raise MaxRetryError(_pool, url, error or ResponseError(cause))
-
- log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
-
- return new_retry
-
- def __repr__(self):
- return (
- "{cls.__name__}(total={self.total}, connect={self.connect}, "
- "read={self.read}, redirect={self.redirect}, status={self.status})"
- ).format(cls=type(self), self=self)
-
- def __getattr__(self, item):
- if item == "method_whitelist":
- # TODO: Remove this deprecated alias in v2.0
- warnings.warn(
- "Using 'method_whitelist' with Retry is deprecated and "
- "will be removed in v2.0. Use 'allowed_methods' instead",
- DeprecationWarning,
- )
- return self.allowed_methods
- try:
- return getattr(super(Retry, self), item)
- except AttributeError:
- return getattr(Retry, item)
-
-
-# For backwards compatibility (equivalent to pre-v1.9):
-Retry.DEFAULT = Retry(3)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py
deleted file mode 100644
index 824235488666c6ecdb22240b08354806fadb58ca..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py
+++ /dev/null
@@ -1,1222 +0,0 @@
-# -*- coding: utf-8 -*-
-__all__ = ['Distribution']
-
-import io
-import sys
-import re
-import os
-import warnings
-import numbers
-import distutils.log
-import distutils.core
-import distutils.cmd
-import distutils.dist
-import distutils.command
-from distutils.util import strtobool
-from distutils.debug import DEBUG
-from distutils.fancy_getopt import translate_longopt
-from glob import iglob
-import itertools
-import textwrap
-from typing import List, Optional, TYPE_CHECKING
-from pathlib import Path
-
-from collections import defaultdict
-from email import message_from_file
-
-from distutils.errors import DistutilsOptionError, DistutilsSetupError
-from distutils.util import rfc822_escape
-
-from setuptools.extern import packaging
-from setuptools.extern import ordered_set
-from setuptools.extern.more_itertools import unique_everseen, partition
-
-from ._importlib import metadata
-
-from . import SetuptoolsDeprecationWarning
-
-import setuptools
-import setuptools.command
-from setuptools import windows_support
-from setuptools.monkey import get_unpatched
-from setuptools.config import setupcfg, pyprojecttoml
-from setuptools.discovery import ConfigDiscovery
-
-import pkg_resources
-from setuptools.extern.packaging import version
-from . import _reqs
-from . import _entry_points
-
-if TYPE_CHECKING:
- from email.message import Message
-
-__import__('setuptools.extern.packaging.specifiers')
-__import__('setuptools.extern.packaging.version')
-
-
-def _get_unpatched(cls):
- warnings.warn("Do not call this function", DistDeprecationWarning)
- return get_unpatched(cls)
-
-
-def get_metadata_version(self):
- mv = getattr(self, 'metadata_version', None)
- if mv is None:
- mv = version.Version('2.1')
- self.metadata_version = mv
- return mv
-
-
-def rfc822_unescape(content: str) -> str:
- """Reverse RFC-822 escaping by removing leading whitespaces from content."""
- lines = content.splitlines()
- if len(lines) == 1:
- return lines[0].lstrip()
- return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:]))))
-
-
-def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]:
- """Read Message header field."""
- value = msg[field]
- if value == 'UNKNOWN':
- return None
- return value
-
-
-def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]:
- """Read Message header field and apply rfc822_unescape."""
- value = _read_field_from_msg(msg, field)
- if value is None:
- return value
- return rfc822_unescape(value)
-
-
-def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]:
- """Read Message header field and return all results as list."""
- values = msg.get_all(field, None)
- if values == []:
- return None
- return values
-
-
-def _read_payload_from_msg(msg: "Message") -> Optional[str]:
- value = msg.get_payload().strip()
- if value == 'UNKNOWN' or not value:
- return None
- return value
-
-
-def read_pkg_file(self, file):
- """Reads the metadata values from a file object."""
- msg = message_from_file(file)
-
- self.metadata_version = version.Version(msg['metadata-version'])
- self.name = _read_field_from_msg(msg, 'name')
- self.version = _read_field_from_msg(msg, 'version')
- self.description = _read_field_from_msg(msg, 'summary')
- # we are filling author only.
- self.author = _read_field_from_msg(msg, 'author')
- self.maintainer = None
- self.author_email = _read_field_from_msg(msg, 'author-email')
- self.maintainer_email = None
- self.url = _read_field_from_msg(msg, 'home-page')
- self.download_url = _read_field_from_msg(msg, 'download-url')
- self.license = _read_field_unescaped_from_msg(msg, 'license')
-
- self.long_description = _read_field_unescaped_from_msg(msg, 'description')
- if (
- self.long_description is None and
- self.metadata_version >= version.Version('2.1')
- ):
- self.long_description = _read_payload_from_msg(msg)
- self.description = _read_field_from_msg(msg, 'summary')
-
- if 'keywords' in msg:
- self.keywords = _read_field_from_msg(msg, 'keywords').split(',')
-
- self.platforms = _read_list_from_msg(msg, 'platform')
- self.classifiers = _read_list_from_msg(msg, 'classifier')
-
- # PEP 314 - these fields only exist in 1.1
- if self.metadata_version == version.Version('1.1'):
- self.requires = _read_list_from_msg(msg, 'requires')
- self.provides = _read_list_from_msg(msg, 'provides')
- self.obsoletes = _read_list_from_msg(msg, 'obsoletes')
- else:
- self.requires = None
- self.provides = None
- self.obsoletes = None
-
- self.license_files = _read_list_from_msg(msg, 'license-file')
-
-
-def single_line(val):
- """
- Quick and dirty validation for Summary pypa/setuptools#1390.
- """
- if '\n' in val:
- # TODO: Replace with `raise ValueError("newlines not allowed")`
- # after reviewing #2893.
- warnings.warn("newlines not allowed and will break in the future")
- val = val.strip().split('\n')[0]
- return val
-
-
-# Based on Python 3.5 version
-def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME
- """Write the PKG-INFO format data to a file object."""
- version = self.get_metadata_version()
-
- def write_field(key, value):
- file.write("%s: %s\n" % (key, value))
-
- write_field('Metadata-Version', str(version))
- write_field('Name', self.get_name())
- write_field('Version', self.get_version())
-
- summary = self.get_description()
- if summary:
- write_field('Summary', single_line(summary))
-
- optional_fields = (
- ('Home-page', 'url'),
- ('Download-URL', 'download_url'),
- ('Author', 'author'),
- ('Author-email', 'author_email'),
- ('Maintainer', 'maintainer'),
- ('Maintainer-email', 'maintainer_email'),
- )
-
- for field, attr in optional_fields:
- attr_val = getattr(self, attr, None)
- if attr_val is not None:
- write_field(field, attr_val)
-
- license = self.get_license()
- if license:
- write_field('License', rfc822_escape(license))
-
- for project_url in self.project_urls.items():
- write_field('Project-URL', '%s, %s' % project_url)
-
- keywords = ','.join(self.get_keywords())
- if keywords:
- write_field('Keywords', keywords)
-
- platforms = self.get_platforms() or []
- for platform in platforms:
- write_field('Platform', platform)
-
- self._write_list(file, 'Classifier', self.get_classifiers())
-
- # PEP 314
- self._write_list(file, 'Requires', self.get_requires())
- self._write_list(file, 'Provides', self.get_provides())
- self._write_list(file, 'Obsoletes', self.get_obsoletes())
-
- # Setuptools specific for PEP 345
- if hasattr(self, 'python_requires'):
- write_field('Requires-Python', self.python_requires)
-
- # PEP 566
- if self.long_description_content_type:
- write_field('Description-Content-Type', self.long_description_content_type)
- if self.provides_extras:
- for extra in self.provides_extras:
- write_field('Provides-Extra', extra)
-
- self._write_list(file, 'License-File', self.license_files or [])
-
- long_description = self.get_long_description()
- if long_description:
- file.write("\n%s" % long_description)
- if not long_description.endswith("\n"):
- file.write("\n")
-
-
-sequence = tuple, list
-
-
-def check_importable(dist, attr, value):
- try:
- ep = metadata.EntryPoint(value=value, name=None, group=None)
- assert not ep.extras
- except (TypeError, ValueError, AttributeError, AssertionError) as e:
- raise DistutilsSetupError(
- "%r must be importable 'module:attrs' string (got %r)" % (attr, value)
- ) from e
-
-
-def assert_string_list(dist, attr, value):
- """Verify that value is a string list"""
- try:
- # verify that value is a list or tuple to exclude unordered
- # or single-use iterables
- assert isinstance(value, (list, tuple))
- # verify that elements of value are strings
- assert ''.join(value) != value
- except (TypeError, ValueError, AttributeError, AssertionError) as e:
- raise DistutilsSetupError(
- "%r must be a list of strings (got %r)" % (attr, value)
- ) from e
-
-
-def check_nsp(dist, attr, value):
- """Verify that namespace packages are valid"""
- ns_packages = value
- assert_string_list(dist, attr, ns_packages)
- for nsp in ns_packages:
- if not dist.has_contents_for(nsp):
- raise DistutilsSetupError(
- "Distribution contains no modules or packages for "
- + "namespace package %r" % nsp
- )
- parent, sep, child = nsp.rpartition('.')
- if parent and parent not in ns_packages:
- distutils.log.warn(
- "WARNING: %r is declared as a package namespace, but %r"
- " is not: please correct this in setup.py",
- nsp,
- parent,
- )
- msg = (
- "The namespace_packages parameter is deprecated, "
- "consider using implicit namespaces instead (PEP 420)."
- )
- warnings.warn(msg, SetuptoolsDeprecationWarning)
-
-
-def check_extras(dist, attr, value):
- """Verify that extras_require mapping is valid"""
- try:
- list(itertools.starmap(_check_extra, value.items()))
- except (TypeError, ValueError, AttributeError) as e:
- raise DistutilsSetupError(
- "'extras_require' must be a dictionary whose values are "
- "strings or lists of strings containing valid project/version "
- "requirement specifiers."
- ) from e
-
-
-def _check_extra(extra, reqs):
- name, sep, marker = extra.partition(':')
- if marker and pkg_resources.invalid_marker(marker):
- raise DistutilsSetupError("Invalid environment marker: " + marker)
- list(_reqs.parse(reqs))
-
-
-def assert_bool(dist, attr, value):
- """Verify that value is True, False, 0, or 1"""
- if bool(value) != value:
- tmpl = "{attr!r} must be a boolean value (got {value!r})"
- raise DistutilsSetupError(tmpl.format(attr=attr, value=value))
-
-
-def invalid_unless_false(dist, attr, value):
- if not value:
- warnings.warn(f"{attr} is ignored.", DistDeprecationWarning)
- return
- raise DistutilsSetupError(f"{attr} is invalid.")
-
-
-def check_requirements(dist, attr, value):
- """Verify that install_requires is a valid requirements list"""
- try:
- list(_reqs.parse(value))
- if isinstance(value, (dict, set)):
- raise TypeError("Unordered types are not allowed")
- except (TypeError, ValueError) as error:
- tmpl = (
- "{attr!r} must be a string or list of strings "
- "containing valid project/version requirement specifiers; {error}"
- )
- raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error
-
-
-def check_specifier(dist, attr, value):
- """Verify that value is a valid version specifier"""
- try:
- packaging.specifiers.SpecifierSet(value)
- except (packaging.specifiers.InvalidSpecifier, AttributeError) as error:
- tmpl = (
- "{attr!r} must be a string " "containing valid version specifiers; {error}"
- )
- raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error
-
-
-def check_entry_points(dist, attr, value):
- """Verify that entry_points map is parseable"""
- try:
- _entry_points.load(value)
- except Exception as e:
- raise DistutilsSetupError(e) from e
-
-
-def check_test_suite(dist, attr, value):
- if not isinstance(value, str):
- raise DistutilsSetupError("test_suite must be a string")
-
-
-def check_package_data(dist, attr, value):
- """Verify that value is a dictionary of package names to glob lists"""
- if not isinstance(value, dict):
- raise DistutilsSetupError(
- "{!r} must be a dictionary mapping package names to lists of "
- "string wildcard patterns".format(attr)
- )
- for k, v in value.items():
- if not isinstance(k, str):
- raise DistutilsSetupError(
- "keys of {!r} dict must be strings (got {!r})".format(attr, k)
- )
- assert_string_list(dist, 'values of {!r} dict'.format(attr), v)
-
-
-def check_packages(dist, attr, value):
- for pkgname in value:
- if not re.match(r'\w+(\.\w+)*', pkgname):
- distutils.log.warn(
- "WARNING: %r not a valid package name; please use only "
- ".-separated package names in setup.py",
- pkgname,
- )
-
-
-_Distribution = get_unpatched(distutils.core.Distribution)
-
-
-class Distribution(_Distribution):
- """Distribution with support for tests and package data
-
- This is an enhanced version of 'distutils.dist.Distribution' that
- effectively adds the following new optional keyword arguments to 'setup()':
-
- 'install_requires' -- a string or sequence of strings specifying project
- versions that the distribution requires when installed, in the format
- used by 'pkg_resources.require()'. They will be installed
- automatically when the package is installed. If you wish to use
- packages that are not available in PyPI, or want to give your users an
- alternate download location, you can add a 'find_links' option to the
- '[easy_install]' section of your project's 'setup.cfg' file, and then
- setuptools will scan the listed web pages for links that satisfy the
- requirements.
-
- 'extras_require' -- a dictionary mapping names of optional "extras" to the
- additional requirement(s) that using those extras incurs. For example,
- this::
-
- extras_require = dict(reST = ["docutils>=0.3", "reSTedit"])
-
- indicates that the distribution can optionally provide an extra
- capability called "reST", but it can only be used if docutils and
- reSTedit are installed. If the user installs your package using
- EasyInstall and requests one of your extras, the corresponding
- additional requirements will be installed if needed.
-
- 'test_suite' -- the name of a test suite to run for the 'test' command.
- If the user runs 'python setup.py test', the package will be installed,
- and the named test suite will be run. The format is the same as
- would be used on a 'unittest.py' command line. That is, it is the
- dotted name of an object to import and call to generate a test suite.
-
- 'package_data' -- a dictionary mapping package names to lists of filenames
- or globs to use to find data files contained in the named packages.
- If the dictionary has filenames or globs listed under '""' (the empty
- string), those names will be searched for in every package, in addition
- to any names for the specific package. Data files found using these
- names/globs will be installed along with the package, in the same
- location as the package. Note that globs are allowed to reference
- the contents of non-package subdirectories, as long as you use '/' as
- a path separator. (Globs are automatically converted to
- platform-specific paths at runtime.)
-
- In addition to these new keywords, this class also has several new methods
- for manipulating the distribution's contents. For example, the 'include()'
- and 'exclude()' methods can be thought of as in-place add and subtract
- commands that add or remove packages, modules, extensions, and so on from
- the distribution.
- """
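-
-    # Illustrative sketch only (not part of setuptools itself): how the
-    # keywords documented above typically appear in a project's setup.py.
-    # The project name, requirement strings, and glob patterns below are
-    # hypothetical placeholders.
-    #
-    #     from setuptools import setup, find_packages
-    #
-    #     setup(
-    #         name="example-project",
-    #         version="0.1.0",
-    #         packages=find_packages(),
-    #         install_requires=["requests>=2.25"],
-    #         extras_require={"docs": ["sphinx"]},
-    #         test_suite="tests",
-    #         package_data={"example_project": ["data/*.json"]},
-    #     )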
-
- _DISTUTILS_UNSUPPORTED_METADATA = {
- 'long_description_content_type': lambda: None,
- 'project_urls': dict,
- 'provides_extras': ordered_set.OrderedSet,
- 'license_file': lambda: None,
- 'license_files': lambda: None,
- }
-
- _patched_dist = None
-
- def patch_missing_pkg_info(self, attrs):
- # Fake up a replacement for the data that would normally come from
- # PKG-INFO, but which might not yet be built if this is a fresh
- # checkout.
- #
- if not attrs or 'name' not in attrs or 'version' not in attrs:
- return
- key = pkg_resources.safe_name(str(attrs['name'])).lower()
- dist = pkg_resources.working_set.by_key.get(key)
- if dist is not None and not dist.has_metadata('PKG-INFO'):
- dist._version = pkg_resources.safe_version(str(attrs['version']))
- self._patched_dist = dist
-
- def __init__(self, attrs=None):
- have_package_data = hasattr(self, "package_data")
- if not have_package_data:
- self.package_data = {}
- attrs = attrs or {}
- self.dist_files = []
- # Filter-out setuptools' specific options.
- self.src_root = attrs.pop("src_root", None)
- self.patch_missing_pkg_info(attrs)
- self.dependency_links = attrs.pop('dependency_links', [])
- self.setup_requires = attrs.pop('setup_requires', [])
- for ep in metadata.entry_points(group='distutils.setup_keywords'):
- vars(self).setdefault(ep.name, None)
- _Distribution.__init__(
- self,
- {
- k: v
- for k, v in attrs.items()
- if k not in self._DISTUTILS_UNSUPPORTED_METADATA
- },
- )
-
- # Save the original dependencies before they are processed into the egg format
- self._orig_extras_require = {}
- self._orig_install_requires = []
- self._tmp_extras_require = defaultdict(ordered_set.OrderedSet)
-
- self.set_defaults = ConfigDiscovery(self)
-
- self._set_metadata_defaults(attrs)
-
- self.metadata.version = self._normalize_version(
- self._validate_version(self.metadata.version)
- )
- self._finalize_requires()
-
- def _validate_metadata(self):
- required = {"name"}
- provided = {
- key
- for key in vars(self.metadata)
- if getattr(self.metadata, key, None) is not None
- }
- missing = required - provided
-
- if missing:
- msg = f"Required package metadata is missing: {missing}"
- raise DistutilsSetupError(msg)
-
- def _set_metadata_defaults(self, attrs):
- """
- Fill-in missing metadata fields not supported by distutils.
- Some fields may have been set by other tools (e.g. pbr).
- Those fields (vars(self.metadata)) take precedence to
- supplied attrs.
- """
- for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items():
- vars(self.metadata).setdefault(option, attrs.get(option, default()))
-
- @staticmethod
- def _normalize_version(version):
- if isinstance(version, setuptools.sic) or version is None:
- return version
-
- normalized = str(packaging.version.Version(version))
- if version != normalized:
- tmpl = "Normalizing '{version}' to '{normalized}'"
- warnings.warn(tmpl.format(**locals()))
- return normalized
- return version
-
- @staticmethod
- def _validate_version(version):
- if isinstance(version, numbers.Number):
- # Some people apparently take "version number" too literally :)
- version = str(version)
-
- if version is not None:
- try:
- packaging.version.Version(version)
- except (packaging.version.InvalidVersion, TypeError):
- warnings.warn(
- "The version specified (%r) is an invalid version, this "
- "may not work as expected with newer versions of "
- "setuptools, pip, and PyPI. Please see PEP 440 for more "
- "details." % version
- )
- return setuptools.sic(version)
- return version
-
- def _finalize_requires(self):
- """
- Set `metadata.python_requires` and fix environment markers
- in `install_requires` and `extras_require`.
- """
- if getattr(self, 'python_requires', None):
- self.metadata.python_requires = self.python_requires
-
- if getattr(self, 'extras_require', None):
- # Save original before it is messed by _convert_extras_requirements
- self._orig_extras_require = self._orig_extras_require or self.extras_require
- for extra in self.extras_require.keys():
- # Since this gets called multiple times at points where the
- # keys have become 'converted' extras, ensure that we are only
- # truly adding extras we haven't seen before here.
- extra = extra.split(':')[0]
- if extra:
- self.metadata.provides_extras.add(extra)
-
- if getattr(self, 'install_requires', None) and not self._orig_install_requires:
- # Save original before it is messed by _move_install_requirements_markers
- self._orig_install_requires = self.install_requires
-
- self._convert_extras_requirements()
- self._move_install_requirements_markers()
-
- def _convert_extras_requirements(self):
- """
- Convert requirements in `extras_require` of the form
- `"extra": ["barbazquux; {marker}"]` to
- `"extra:{marker}": ["barbazquux"]`.
- """
- spec_ext_reqs = getattr(self, 'extras_require', None) or {}
- tmp = defaultdict(ordered_set.OrderedSet)
- self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp)
- for section, v in spec_ext_reqs.items():
- # Do not strip empty sections.
- self._tmp_extras_require[section]
- for r in _reqs.parse(v):
- suffix = self._suffix_for(r)
- self._tmp_extras_require[section + suffix].append(r)
-
- @staticmethod
- def _suffix_for(req):
- """
- For a requirement, return the 'extras_require' suffix for
- that requirement.
- """
- return ':' + str(req.marker) if req.marker else ''
-
- def _move_install_requirements_markers(self):
- """
- Move requirements in `install_requires` that are using environment
-        markers to `extras_require`.
- """
-
- # divide the install_requires into two sets, simple ones still
- # handled by install_requires and more complex ones handled
- # by extras_require.
-
- def is_simple_req(req):
- return not req.marker
-
- spec_inst_reqs = getattr(self, 'install_requires', None) or ()
- inst_reqs = list(_reqs.parse(spec_inst_reqs))
- simple_reqs = filter(is_simple_req, inst_reqs)
- complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs)
- self.install_requires = list(map(str, simple_reqs))
-
- for r in complex_reqs:
- self._tmp_extras_require[':' + str(r.marker)].append(r)
- self.extras_require = dict(
- # list(dict.fromkeys(...)) ensures a list of unique strings
- (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v))))
- for k, v in self._tmp_extras_require.items()
- )
-
- def _clean_req(self, req):
- """
- Given a Requirement, remove environment markers and return it.
- """
- req.marker = None
- return req
-
- def _finalize_license_files(self):
- """Compute names of all license files which should be included."""
- license_files: Optional[List[str]] = self.metadata.license_files
- patterns: List[str] = license_files if license_files else []
-
- license_file: Optional[str] = self.metadata.license_file
- if license_file and license_file not in patterns:
- patterns.append(license_file)
-
- if license_files is None and license_file is None:
- # Default patterns match the ones wheel uses
- # See https://wheel.readthedocs.io/en/stable/user_guide.html
- # -> 'Including license files in the generated wheel file'
- patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*')
-
- self.metadata.license_files = list(
- unique_everseen(self._expand_patterns(patterns))
- )
-
- @staticmethod
- def _expand_patterns(patterns):
- """
- >>> list(Distribution._expand_patterns(['LICENSE']))
- ['LICENSE']
- >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*']))
- ['setup.cfg', 'LICENSE']
- """
- return (
- path
- for pattern in patterns
- for path in sorted(iglob(pattern))
- if not path.endswith('~') and os.path.isfile(path)
- )
-
- # FIXME: 'Distribution._parse_config_files' is too complex (14)
- def _parse_config_files(self, filenames=None): # noqa: C901
- """
- Adapted from distutils.dist.Distribution.parse_config_files,
- this method provides the same functionality in subtly-improved
- ways.
- """
- from configparser import ConfigParser
-
- # Ignore install directory options if we have a venv
- ignore_options = (
- []
- if sys.prefix == sys.base_prefix
- else [
- 'install-base',
- 'install-platbase',
- 'install-lib',
- 'install-platlib',
- 'install-purelib',
- 'install-headers',
- 'install-scripts',
- 'install-data',
- 'prefix',
- 'exec-prefix',
- 'home',
- 'user',
- 'root',
- ]
- )
-
- ignore_options = frozenset(ignore_options)
-
- if filenames is None:
- filenames = self.find_config_files()
-
- if DEBUG:
- self.announce("Distribution.parse_config_files():")
-
- parser = ConfigParser()
- parser.optionxform = str
- for filename in filenames:
- with io.open(filename, encoding='utf-8') as reader:
- if DEBUG:
- self.announce(" reading {filename}".format(**locals()))
- parser.read_file(reader)
- for section in parser.sections():
- options = parser.options(section)
- opt_dict = self.get_option_dict(section)
-
- for opt in options:
- if opt == '__name__' or opt in ignore_options:
- continue
-
- val = parser.get(section, opt)
- opt = self.warn_dash_deprecation(opt, section)
- opt = self.make_option_lowercase(opt, section)
- opt_dict[opt] = (filename, val)
-
- # Make the ConfigParser forget everything (so we retain
- # the original filenames that options come from)
- parser.__init__()
-
- if 'global' not in self.command_options:
- return
-
- # If there was a "global" section in the config file, use it
- # to set Distribution options.
-
- for (opt, (src, val)) in self.command_options['global'].items():
- alias = self.negative_opt.get(opt)
- if alias:
- val = not strtobool(val)
- elif opt in ('verbose', 'dry_run'): # ugh!
- val = strtobool(val)
-
- try:
- setattr(self, alias or opt, val)
- except ValueError as e:
- raise DistutilsOptionError(e) from e
-
- def warn_dash_deprecation(self, opt, section):
- if section in (
- 'options.extras_require',
- 'options.data_files',
- ):
- return opt
-
- underscore_opt = opt.replace('-', '_')
- commands = list(itertools.chain(
- distutils.command.__all__,
- self._setuptools_commands(),
- ))
- if (
- not section.startswith('options')
- and section != 'metadata'
- and section not in commands
- ):
- return underscore_opt
-
- if '-' in opt:
- warnings.warn(
- "Usage of dash-separated '%s' will not be supported in future "
- "versions. Please use the underscore name '%s' instead"
- % (opt, underscore_opt)
- )
- return underscore_opt
-
- def _setuptools_commands(self):
- try:
- return metadata.distribution('setuptools').entry_points.names
- except metadata.PackageNotFoundError:
- # during bootstrapping, distribution doesn't exist
- return []
-
- def make_option_lowercase(self, opt, section):
- if section != 'metadata' or opt.islower():
- return opt
-
- lowercase_opt = opt.lower()
- warnings.warn(
- "Usage of uppercase key '%s' in '%s' will be deprecated in future "
- "versions. Please use lowercase '%s' instead"
- % (opt, section, lowercase_opt)
- )
- return lowercase_opt
-
- # FIXME: 'Distribution._set_command_options' is too complex (14)
- def _set_command_options(self, command_obj, option_dict=None): # noqa: C901
- """
- Set the options for 'command_obj' from 'option_dict'. Basically
- this means copying elements of a dictionary ('option_dict') to
- attributes of an instance ('command').
-
- 'command_obj' must be a Command instance. If 'option_dict' is not
- supplied, uses the standard option dictionary for this command
- (from 'self.command_options').
-
- (Adopted from distutils.dist.Distribution._set_command_options)
- """
- command_name = command_obj.get_command_name()
- if option_dict is None:
- option_dict = self.get_option_dict(command_name)
-
- if DEBUG:
- self.announce(" setting options for '%s' command:" % command_name)
- for (option, (source, value)) in option_dict.items():
- if DEBUG:
- self.announce(" %s = %s (from %s)" % (option, value, source))
- try:
- bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
- except AttributeError:
- bool_opts = []
- try:
- neg_opt = command_obj.negative_opt
- except AttributeError:
- neg_opt = {}
-
- try:
- is_string = isinstance(value, str)
- if option in neg_opt and is_string:
- setattr(command_obj, neg_opt[option], not strtobool(value))
- elif option in bool_opts and is_string:
- setattr(command_obj, option, strtobool(value))
- elif hasattr(command_obj, option):
- setattr(command_obj, option, value)
- else:
- raise DistutilsOptionError(
- "error in %s: command '%s' has no such option '%s'"
- % (source, command_name, option)
- )
- except ValueError as e:
- raise DistutilsOptionError(e) from e
-
- def _get_project_config_files(self, filenames):
- """Add default file and split between INI and TOML"""
- tomlfiles = []
- standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml")
- if filenames is not None:
- parts = partition(lambda f: Path(f).suffix == ".toml", filenames)
- filenames = list(parts[0]) # 1st element => predicate is False
- tomlfiles = list(parts[1]) # 2nd element => predicate is True
- elif standard_project_metadata.exists():
- tomlfiles = [standard_project_metadata]
- return filenames, tomlfiles
-
- def parse_config_files(self, filenames=None, ignore_option_errors=False):
- """Parses configuration files from various levels
- and loads configuration.
- """
- inifiles, tomlfiles = self._get_project_config_files(filenames)
-
- self._parse_config_files(filenames=inifiles)
-
- setupcfg.parse_configuration(
- self, self.command_options, ignore_option_errors=ignore_option_errors
- )
- for filename in tomlfiles:
- pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
-
- self._finalize_requires()
- self._finalize_license_files()
-
- def fetch_build_eggs(self, requires):
- """Resolve pre-setup requirements"""
- resolved_dists = pkg_resources.working_set.resolve(
- _reqs.parse(requires),
- installer=self.fetch_build_egg,
- replace_conflicting=True,
- )
- for dist in resolved_dists:
- pkg_resources.working_set.add(dist, replace=True)
- return resolved_dists
-
- def finalize_options(self):
- """
- Allow plugins to apply arbitrary operations to the
-        distribution. Each hook may optionally define an 'order'
-        to influence the order of execution. Smaller numbers
- go first and the default is 0.
- """
- group = 'setuptools.finalize_distribution_options'
-
- def by_order(hook):
- return getattr(hook, 'order', 0)
-
- defined = metadata.entry_points(group=group)
- filtered = itertools.filterfalse(self._removed, defined)
- loaded = map(lambda e: e.load(), filtered)
- for ep in sorted(loaded, key=by_order):
- ep(self)
-
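
The ordering rule in `finalize_options` (hooks may carry an optional `order`, smaller runs first, default 0) can be seen with plain functions standing in for entry points; the hook names below are invented:

```python
def hook_late(dist):
    print("runs last")
hook_late.order = 10

def hook_default(dist):
    print("runs in the middle (default order 0)")

def hook_early(dist):
    print("runs first")
hook_early.order = -5

hooks = [hook_late, hook_default, hook_early]
for hook in sorted(hooks, key=lambda h: getattr(h, 'order', 0)):
    hook(dist=None)
```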
- @staticmethod
- def _removed(ep):
- """
- When removing an entry point, if metadata is loaded
- from an older version of Setuptools, that removed
- entry point will attempt to be loaded and will fail.
- See #2765 for more details.
- """
- removed = {
- # removed 2021-09-05
- '2to3_doctests',
- }
- return ep.name in removed
-
- def _finalize_setup_keywords(self):
- for ep in metadata.entry_points(group='distutils.setup_keywords'):
- value = getattr(self, ep.name, None)
- if value is not None:
- ep.load()(self, ep.name, value)
-
- def get_egg_cache_dir(self):
- egg_cache_dir = os.path.join(os.curdir, '.eggs')
- if not os.path.exists(egg_cache_dir):
- os.mkdir(egg_cache_dir)
- windows_support.hide_file(egg_cache_dir)
- readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt')
- with open(readme_txt_filename, 'w') as f:
- f.write(
- 'This directory contains eggs that were downloaded '
- 'by setuptools to build, test, and run plug-ins.\n\n'
- )
- f.write(
- 'This directory caches those eggs to prevent '
- 'repeated downloads.\n\n'
- )
- f.write('However, it is safe to delete this directory.\n\n')
-
- return egg_cache_dir
-
- def fetch_build_egg(self, req):
- """Fetch an egg needed for building"""
- from setuptools.installer import fetch_build_egg
-
- return fetch_build_egg(self, req)
-
- def get_command_class(self, command):
- """Pluggable version of get_command_class()"""
- if command in self.cmdclass:
- return self.cmdclass[command]
-
- eps = metadata.entry_points(group='distutils.commands', name=command)
- for ep in eps:
- self.cmdclass[command] = cmdclass = ep.load()
- return cmdclass
- else:
- return _Distribution.get_command_class(self, command)
-
- def print_commands(self):
- for ep in metadata.entry_points(group='distutils.commands'):
- if ep.name not in self.cmdclass:
- cmdclass = ep.load()
- self.cmdclass[ep.name] = cmdclass
- return _Distribution.print_commands(self)
-
- def get_command_list(self):
- for ep in metadata.entry_points(group='distutils.commands'):
- if ep.name not in self.cmdclass:
- cmdclass = ep.load()
- self.cmdclass[ep.name] = cmdclass
- return _Distribution.get_command_list(self)
-
- def include(self, **attrs):
- """Add items to distribution that are named in keyword arguments
-
- For example, 'dist.include(py_modules=["x"])' would add 'x' to
- the distribution's 'py_modules' attribute, if it was not already
- there.
-
- Currently, this method only supports inclusion for attributes that are
- lists or tuples. If you need to add support for adding to other
- attributes in this or a subclass, you can add an '_include_X' method,
- where 'X' is the name of the attribute. The method will be called with
- the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})'
- will try to call 'dist._include_foo({"bar":"baz"})', which can then
- handle whatever special inclusion logic is needed.
- """
- for k, v in attrs.items():
- include = getattr(self, '_include_' + k, None)
- if include:
- include(v)
- else:
- self._include_misc(k, v)
-
- def exclude_package(self, package):
- """Remove packages, modules, and extensions in named package"""
-
- pfx = package + '.'
- if self.packages:
- self.packages = [
- p for p in self.packages if p != package and not p.startswith(pfx)
- ]
-
- if self.py_modules:
- self.py_modules = [
- p for p in self.py_modules if p != package and not p.startswith(pfx)
- ]
-
- if self.ext_modules:
- self.ext_modules = [
- p
- for p in self.ext_modules
- if p.name != package and not p.name.startswith(pfx)
- ]
-
- def has_contents_for(self, package):
- """Return true if 'exclude_package(package)' would do something"""
-
- pfx = package + '.'
-
- for p in self.iter_distribution_names():
- if p == package or p.startswith(pfx):
- return True
-
- def _exclude_misc(self, name, value):
- """Handle 'exclude()' for list/tuple attrs without a special handler"""
- if not isinstance(value, sequence):
- raise DistutilsSetupError(
- "%s: setting must be a list or tuple (%r)" % (name, value)
- )
- try:
- old = getattr(self, name)
- except AttributeError as e:
- raise DistutilsSetupError("%s: No such distribution setting" % name) from e
- if old is not None and not isinstance(old, sequence):
- raise DistutilsSetupError(
- name + ": this setting cannot be changed via include/exclude"
- )
- elif old:
- setattr(self, name, [item for item in old if item not in value])
-
- def _include_misc(self, name, value):
- """Handle 'include()' for list/tuple attrs without a special handler"""
-
- if not isinstance(value, sequence):
- raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value))
- try:
- old = getattr(self, name)
- except AttributeError as e:
- raise DistutilsSetupError("%s: No such distribution setting" % name) from e
- if old is None:
- setattr(self, name, value)
- elif not isinstance(old, sequence):
- raise DistutilsSetupError(
- name + ": this setting cannot be changed via include/exclude"
- )
- else:
- new = [item for item in value if item not in old]
- setattr(self, name, old + new)
-
- def exclude(self, **attrs):
- """Remove items from distribution that are named in keyword arguments
-
- For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from
- the distribution's 'py_modules' attribute. Excluding packages uses
- the 'exclude_package()' method, so all of the package's contained
- packages, modules, and extensions are also excluded.
-
- Currently, this method only supports exclusion from attributes that are
- lists or tuples. If you need to add support for excluding from other
- attributes in this or a subclass, you can add an '_exclude_X' method,
- where 'X' is the name of the attribute. The method will be called with
- the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})'
- will try to call 'dist._exclude_foo({"bar":"baz"})', which can then
- handle whatever special exclusion logic is needed.
- """
- for k, v in attrs.items():
- exclude = getattr(self, '_exclude_' + k, None)
- if exclude:
- exclude(v)
- else:
- self._exclude_misc(k, v)
-
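
Setting aside the `_include_X`/`_exclude_X` dispatch, the generic list handling behind `include()` and `exclude()` above reduces to two comprehensions; a toy illustration with made-up package names:

```python
old = ["pkg_a", "pkg_b"]

# include(['pkg_b', 'pkg_c']): append only items not already present
value = ["pkg_b", "pkg_c"]
included = old + [item for item in value if item not in old]
print(included)  # ['pkg_a', 'pkg_b', 'pkg_c']

# exclude(['pkg_a']): filter matching items out
excluded = [item for item in included if item not in ["pkg_a"]]
print(excluded)  # ['pkg_b', 'pkg_c']
```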
- def _exclude_packages(self, packages):
- if not isinstance(packages, sequence):
- raise DistutilsSetupError(
- "packages: setting must be a list or tuple (%r)" % (packages,)
- )
- list(map(self.exclude_package, packages))
-
- def _parse_command_opts(self, parser, args):
- # Remove --with-X/--without-X options when processing command args
- self.global_options = self.__class__.global_options
- self.negative_opt = self.__class__.negative_opt
-
- # First, expand any aliases
- command = args[0]
- aliases = self.get_option_dict('aliases')
- while command in aliases:
- src, alias = aliases[command]
- del aliases[command] # ensure each alias can expand only once!
- import shlex
-
- args[:1] = shlex.split(alias, True)
- command = args[0]
-
- nargs = _Distribution._parse_command_opts(self, parser, args)
-
- # Handle commands that want to consume all remaining arguments
- cmd_class = self.get_command_class(command)
- if getattr(cmd_class, 'command_consumes_arguments', None):
- self.get_option_dict(command)['args'] = ("command line", nargs)
- if nargs is not None:
- return []
-
- return nargs
-
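
The alias expansion at the top of `_parse_command_opts` is just `shlex.split` applied once per alias; a small standalone sketch (the `release` alias is hypothetical):

```python
import shlex

aliases = {"release": ("setup.cfg", "sdist bdist_wheel --universal")}
args = ["release", "--dry-run"]

command = args[0]
while command in aliases:
    _src, alias = aliases.pop(command)    # each alias can expand only once
    args[:1] = shlex.split(alias, True)   # comments=True, as in the call above
    command = args[0]

print(args)  # ['sdist', 'bdist_wheel', '--universal', '--dry-run']
```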
- def get_cmdline_options(self):
- """Return a '{cmd: {opt:val}}' map of all command-line options
-
- Option names are all long, but do not include the leading '--', and
- contain dashes rather than underscores. If the option doesn't take
- an argument (e.g. '--quiet'), the 'val' is 'None'.
-
- Note that options provided by config files are intentionally excluded.
- """
-
- d = {}
-
- for cmd, opts in self.command_options.items():
-
- for opt, (src, val) in opts.items():
-
- if src != "command line":
- continue
-
- opt = opt.replace('_', '-')
-
- if val == 0:
- cmdobj = self.get_command_obj(cmd)
- neg_opt = self.negative_opt.copy()
- neg_opt.update(getattr(cmdobj, 'negative_opt', {}))
- for neg, pos in neg_opt.items():
- if pos == opt:
- opt = neg
- val = None
- break
- else:
- raise AssertionError("Shouldn't be able to get here")
-
- elif val == 1:
- val = None
-
- d.setdefault(cmd, {})[opt] = val
-
- return d
-
- def iter_distribution_names(self):
- """Yield all packages, modules, and extension names in distribution"""
-
- for pkg in self.packages or ():
- yield pkg
-
- for module in self.py_modules or ():
- yield module
-
- for ext in self.ext_modules or ():
- if isinstance(ext, tuple):
- name, buildinfo = ext
- else:
- name = ext.name
- if name.endswith('module'):
- name = name[:-6]
- yield name
-
- def handle_display_options(self, option_order):
- """If there were any non-global "display-only" options
- (--help-commands or the metadata display options) on the command
- line, display the requested info and return true; else return
- false.
- """
- import sys
-
- if self.help_commands:
- return _Distribution.handle_display_options(self, option_order)
-
- # Stdout may be StringIO (e.g. in tests)
- if not isinstance(sys.stdout, io.TextIOWrapper):
- return _Distribution.handle_display_options(self, option_order)
-
- # Don't wrap stdout if utf-8 is already the encoding. Provides
- # workaround for #334.
- if sys.stdout.encoding.lower() in ('utf-8', 'utf8'):
- return _Distribution.handle_display_options(self, option_order)
-
- # Print metadata in UTF-8 no matter the platform
- encoding = sys.stdout.encoding
- errors = sys.stdout.errors
- newline = sys.platform != 'win32' and '\n' or None
- line_buffering = sys.stdout.line_buffering
-
- sys.stdout = io.TextIOWrapper(
- sys.stdout.detach(), 'utf-8', errors, newline, line_buffering
- )
- try:
- return _Distribution.handle_display_options(self, option_order)
- finally:
- sys.stdout = io.TextIOWrapper(
- sys.stdout.detach(), encoding, errors, newline, line_buffering
- )
-
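
The stdout re-wrapping trick in `handle_display_options` can be exercised on its own; a hedged sketch (standalone, not tied to distutils, and skipped when stdout is not a regular `TextIOWrapper`):

```python
import io
import sys

if isinstance(sys.stdout, io.TextIOWrapper) and sys.stdout.encoding.lower() not in ('utf-8', 'utf8'):
    encoding, errors = sys.stdout.encoding, sys.stdout.errors
    sys.stdout = io.TextIOWrapper(sys.stdout.detach(), 'utf-8', errors)
    try:
        print("metadata with non-ASCII characters: café, 東京")
    finally:
        # restore the original wrapper configuration afterwards
        sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding, errors)
else:
    print("stdout already emits UTF-8 (or is not a TextIOWrapper)")
```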
- def run_command(self, command):
- self.set_defaults()
- # Postpone defaults until all explicit configuration is considered
- # (setup() args, config files, command line and plugins)
-
- super().run_command(command)
-
-
-class DistDeprecationWarning(SetuptoolsDeprecationWarning):
- """Class for warning about deprecations in dist in
- setuptools. Not ignored by default, unlike DeprecationWarning."""
diff --git a/spaces/Billyosoro/ESRGAN/app.py b/spaces/Billyosoro/ESRGAN/app.py
deleted file mode 100644
index 97c59221c429e335c3a2e3413c11cc155d5b6122..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/app.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import os
-os.system("pip install gradio==2.9b23")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-from random import randint
-import sys
-from subprocess import call
-import psutil
-
-
-
-
-torch.hub.download_url_to_file('http://people.csail.mit.edu/billf/project%20pages/sresCode/Markov%20Random%20Fields%20for%20Super-Resolution_files/100075_lowres.jpg', 'bear.jpg')
-
-
-def run_cmd(command):
- try:
- print(command)
- call(command, shell=True)
- except KeyboardInterrupt:
- print("Process interrupted")
- sys.exit(1)
-run_cmd("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P .")
-run_cmd("pip install basicsr")
-run_cmd("pip freeze")
-
-os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P .")
-
-
-def inference(img,mode):
- _id = randint(1, 10000)
- INPUT_DIR = "/tmp/input_image" + str(_id) + "/"
- OUTPUT_DIR = "/tmp/output_image" + str(_id) + "/"
- run_cmd("rm -rf " + INPUT_DIR)
- run_cmd("rm -rf " + OUTPUT_DIR)
- run_cmd("mkdir " + INPUT_DIR)
- run_cmd("mkdir " + OUTPUT_DIR)
- basewidth = 256
- wpercent = (basewidth/float(img.size[0]))
- hsize = int((float(img.size[1])*float(wpercent)))
- img = img.resize((basewidth,hsize), Image.ANTIALIAS)
- img.save(INPUT_DIR + "1.jpg", "JPEG")
- if mode == "base":
- run_cmd("python inference_realesrgan.py -n RealESRGAN_x4plus -i "+ INPUT_DIR + " -o " + OUTPUT_DIR)
- else:
- os.system("python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i "+ INPUT_DIR + " -o " + OUTPUT_DIR)
- return os.path.join(OUTPUT_DIR, "1_out.jpg")
-
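
The resize step inside `inference()` shrinks the input to a fixed width while preserving aspect ratio before upscaling; as a standalone helper it might look like the sketch below (`shrink_to_width` is a made-up name, and `Image.LANCZOS` stands in for `Image.ANTIALIAS`, which newer Pillow releases removed):

```python
from PIL import Image

def shrink_to_width(img, basewidth=256):
    wpercent = basewidth / float(img.size[0])
    hsize = int(float(img.size[1]) * wpercent)
    return img.resize((basewidth, hsize), Image.LANCZOS)

# img = Image.open("bear.jpg")     # e.g. the sample downloaded above
# small = shrink_to_width(img)     # width 256, height scaled to match
```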
-
-
-
-title = "Real-ESRGAN"
-description = "Gradio demo for Real-ESRGAN. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once"
-article = "
"
-
-gr.Interface(
- inference,
- [gr.inputs.Image(type="pil", label="Input"),gr.inputs.Radio(["base","anime"], type="value", default="base", label="model type")],
- gr.outputs.Image(type="file", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[
- ['bear.jpg','base'],
- ['anime.png','anime']
- ]).launch()
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py
deleted file mode 100644
index a397a6fbef36e188a676ad52f34309c42877ba1e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py
+++ /dev/null
@@ -1,596 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from .config import CfgNode as CN
-
-# -----------------------------------------------------------------------------
-# Convention about Training / Test specific parameters
-# -----------------------------------------------------------------------------
-# Whenever an argument can be either used for training or for testing, the
-# corresponding name will be post-fixed by a _TRAIN for a training parameter,
-# or _TEST for a test-specific parameter.
-# For example, the number of images during training will be
-# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be
-# IMAGES_PER_BATCH_TEST
-
-# -----------------------------------------------------------------------------
-# Config definition
-# -----------------------------------------------------------------------------
-
-_C = CN()
-
-# The version number, to upgrade from old configs to new ones if any
-# changes happen. It's recommended to keep a VERSION in your config file.
-_C.VERSION = 2
-
-_C.MODEL = CN()
-_C.MODEL.LOAD_PROPOSALS = False
-_C.MODEL.MASK_ON = False
-_C.MODEL.KEYPOINT_ON = False
-_C.MODEL.DEVICE = "cuda"
-_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN"
-
-# Path (possibly with schema like catalog:// or detectron2://) to a checkpoint file
-# to be loaded to the model. You can find available models in the model zoo.
-_C.MODEL.WEIGHTS = ""
-
-# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR).
-# To train on images of different number of channels, just set different mean & std.
-# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
-_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]
-# When using pre-trained models in Detectron1 or any MSRA models,
-# std has been absorbed into its conv1 weights, so the std needs to be set 1.
-# Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std)
-_C.MODEL.PIXEL_STD = [1.0, 1.0, 1.0]
-
-
-# -----------------------------------------------------------------------------
-# INPUT
-# -----------------------------------------------------------------------------
-_C.INPUT = CN()
-# Size of the smallest side of the image during training
-_C.INPUT.MIN_SIZE_TRAIN = (800,)
-# Sample size of smallest side by choice or random selection from range given by
-# INPUT.MIN_SIZE_TRAIN
-_C.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
-# Maximum size of the side of the image during training
-_C.INPUT.MAX_SIZE_TRAIN = 1333
-# Size of the smallest side of the image during testing. Set to zero to disable resize in testing.
-_C.INPUT.MIN_SIZE_TEST = 800
-# Maximum size of the side of the image during testing
-_C.INPUT.MAX_SIZE_TEST = 1333
-
-# `True` if cropping is used for data augmentation during training
-_C.INPUT.CROP = CN({"ENABLED": False})
-# Cropping type:
-# - "relative" crop (H * CROP.SIZE[0], W * CROP.SIZE[1]) part of an input of size (H, W)
-# - "relative_range" uniformly sample relative crop size from between [CROP.SIZE[0], [CROP.SIZE[1]].
-# and [1, 1] and use it as in "relative" scenario.
-# - "absolute" crop part of an input with absolute size: (CROP.SIZE[0], CROP.SIZE[1]).
-_C.INPUT.CROP.TYPE = "relative_range"
-# Size of crop in range (0, 1] if CROP.TYPE is "relative" or "relative_range" and in number of
-# pixels if CROP.TYPE is "absolute"
-_C.INPUT.CROP.SIZE = [0.9, 0.9]
-
-
-# Whether the model needs RGB, YUV, HSV etc.
-# Should be one of the modes defined here, as we use PIL to read the image:
-# https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes
-# with BGR being the one exception. One can set image format to BGR, we will
-# internally use RGB for conversion and flip the channels over
-_C.INPUT.FORMAT = "BGR"
-# The ground truth mask format that the model will use.
-# Mask R-CNN supports either "polygon" or "bitmask" as ground truth.
-_C.INPUT.MASK_FORMAT = "polygon" # alternative: "bitmask"
-
-
-# -----------------------------------------------------------------------------
-# Dataset
-# -----------------------------------------------------------------------------
-_C.DATASETS = CN()
-# List of the dataset names for training. Must be registered in DatasetCatalog
-_C.DATASETS.TRAIN = ()
-# List of the pre-computed proposal files for training, which must be consistent
-# with datasets listed in DATASETS.TRAIN.
-_C.DATASETS.PROPOSAL_FILES_TRAIN = ()
-# Number of top scoring precomputed proposals to keep for training
-_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN = 2000
-# List of the dataset names for testing. Must be registered in DatasetCatalog
-_C.DATASETS.TEST = ()
-# List of the pre-computed proposal files for test, which must be consistent
-# with datasets listed in DATASETS.TEST.
-_C.DATASETS.PROPOSAL_FILES_TEST = ()
-# Number of top scoring precomputed proposals to keep for test
-_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST = 1000
-
-# -----------------------------------------------------------------------------
-# DataLoader
-# -----------------------------------------------------------------------------
-_C.DATALOADER = CN()
-# Number of data loading threads
-_C.DATALOADER.NUM_WORKERS = 4
-# If True, each batch should contain only images for which the aspect ratio
-# is compatible. This groups portrait images together, and landscape images
-# are not batched with portrait images.
-_C.DATALOADER.ASPECT_RATIO_GROUPING = True
-# Options: TrainingSampler, RepeatFactorTrainingSampler
-_C.DATALOADER.SAMPLER_TRAIN = "TrainingSampler"
-# Repeat threshold for RepeatFactorTrainingSampler
-_C.DATALOADER.REPEAT_THRESHOLD = 0.0
-# if True, the dataloader will filter out images that have no associated
-# annotations at train time.
-_C.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True
-
-# ---------------------------------------------------------------------------- #
-# Backbone options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.BACKBONE = CN()
-
-_C.MODEL.BACKBONE.NAME = "build_resnet_backbone"
-# Freeze the first several stages so they are not trained.
-# There are 5 stages in ResNet. The first is a convolution, and the following
-# stages are each group of residual blocks.
-_C.MODEL.BACKBONE.FREEZE_AT = 2
-
-
-# ---------------------------------------------------------------------------- #
-# FPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.FPN = CN()
-# Names of the input feature maps to be used by FPN
-# They must have contiguous power of 2 strides
-# e.g., ["res2", "res3", "res4", "res5"]
-_C.MODEL.FPN.IN_FEATURES = []
-_C.MODEL.FPN.OUT_CHANNELS = 256
-
-# Options: "" (no norm), "GN"
-_C.MODEL.FPN.NORM = ""
-
-# Types for fusing the FPN top-down and lateral features. Can be either "sum" or "avg"
-_C.MODEL.FPN.FUSE_TYPE = "sum"
-
-
-# ---------------------------------------------------------------------------- #
-# Proposal generator options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.PROPOSAL_GENERATOR = CN()
-# Current proposal generators include "RPN", "RRPN" and "PrecomputedProposals"
-_C.MODEL.PROPOSAL_GENERATOR.NAME = "RPN"
-# Proposal height and width both need to be greater than MIN_SIZE
-# (at the scale used during training or inference)
-_C.MODEL.PROPOSAL_GENERATOR.MIN_SIZE = 0
-
-
-# ---------------------------------------------------------------------------- #
-# Anchor generator options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ANCHOR_GENERATOR = CN()
-# The generator can be any name in the ANCHOR_GENERATOR registry
-_C.MODEL.ANCHOR_GENERATOR.NAME = "DefaultAnchorGenerator"
-# Anchor sizes (i.e. sqrt of area) in absolute pixels w.r.t. the network input.
-# Format: list[list[int]]. SIZES[i] specifies the list of sizes
-# to use for IN_FEATURES[i]; len(SIZES) == len(IN_FEATURES) must be true,
-# or len(SIZES) == 1 is true and size list SIZES[0] is used for all
-# IN_FEATURES.
-_C.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64, 128, 256, 512]]
-# Anchor aspect ratios. For each area given in `SIZES`, anchors with different aspect
-# ratios are generated by an anchor generator.
-# Format: list[list[float]]. ASPECT_RATIOS[i] specifies the list of aspect ratios
-# to use for IN_FEATURES[i]; len(ASPECT_RATIOS) == len(IN_FEATURES) must be true,
-# or len(ASPECT_RATIOS) == 1 is true and aspect ratio list ASPECT_RATIOS[0] is used
-# for all IN_FEATURES.
-_C.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.5, 1.0, 2.0]]
-# Anchor angles.
-# list[float], the angle in degrees, for each input feature map.
-# ANGLES[i] specifies the list of angles for IN_FEATURES[i].
-_C.MODEL.ANCHOR_GENERATOR.ANGLES = [[-90, 0, 90]]
-# Relative offset between the center of the first anchor and the top-left corner of the image
-# Units: fraction of feature map stride (e.g., 0.5 means half stride)
-# Allowed values are floats in the [0, 1) range.
-# Recommended value is 0.5, although it is not expected to affect model accuracy.
-_C.MODEL.ANCHOR_GENERATOR.OFFSET = 0.0
-
-# ---------------------------------------------------------------------------- #
-# RPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RPN = CN()
-_C.MODEL.RPN.HEAD_NAME = "StandardRPNHead" # used by RPN_HEAD_REGISTRY
-
-# Names of the input feature maps to be used by RPN
-# e.g., ["p2", "p3", "p4", "p5", "p6"] for FPN
-_C.MODEL.RPN.IN_FEATURES = ["res4"]
-# Remove RPN anchors that go outside the image by BOUNDARY_THRESH pixels
-# Set to -1 or a large value, e.g. 100000, to disable pruning anchors
-_C.MODEL.RPN.BOUNDARY_THRESH = -1
-# IOU overlap ratios [BG_IOU_THRESHOLD, FG_IOU_THRESHOLD]
-# Minimum overlap required between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD
-# ==> positive RPN example: 1)
-# Maximum overlap allowed between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD
-# ==> negative RPN example: 0)
-# Anchors with overlap in between (BG_IOU_THRESHOLD <= IoU < FG_IOU_THRESHOLD)
-# are ignored (-1)
-_C.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7]
-_C.MODEL.RPN.IOU_LABELS = [0, -1, 1]
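
The `[BG_IOU_THRESHOLD, FG_IOU_THRESHOLD]` convention described above maps an anchor's IoU to one of the three labels in `IOU_LABELS`; a toy illustration (pure Python, not detectron2's `Matcher`):

```python
def label_anchor(iou, thresholds=(0.3, 0.7), labels=(0, -1, 1)):
    bg_thresh, fg_thresh = thresholds
    if iou < bg_thresh:
        return labels[0]   # negative RPN example
    if iou < fg_thresh:
        return labels[1]   # ignored
    return labels[2]       # positive RPN example

print([label_anchor(iou) for iou in (0.1, 0.5, 0.9)])  # [0, -1, 1]
```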
-# Total number of RPN examples per image
-_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256
-# Target fraction of foreground (positive) examples per RPN minibatch
-_C.MODEL.RPN.POSITIVE_FRACTION = 0.5
-# Weights on (dx, dy, dw, dh) for normalizing RPN anchor regression targets
-_C.MODEL.RPN.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
-# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1.
-_C.MODEL.RPN.SMOOTH_L1_BETA = 0.0
-_C.MODEL.RPN.LOSS_WEIGHT = 1.0
-# Number of top scoring RPN proposals to keep before applying NMS
-# When FPN is used, this is *per FPN level* (not total)
-_C.MODEL.RPN.PRE_NMS_TOPK_TRAIN = 12000
-_C.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000
-# Number of top scoring RPN proposals to keep after applying NMS
-# When FPN is used, this limit is applied per level and then again to the union
-# of proposals from all levels
-# NOTE: When FPN is used, the meaning of this config is different from Detectron1.
-# It means per-batch topk in Detectron1, but per-image topk here.
-# See "modeling/rpn/rpn_outputs.py" for details.
-_C.MODEL.RPN.POST_NMS_TOPK_TRAIN = 2000
-_C.MODEL.RPN.POST_NMS_TOPK_TEST = 1000
-# NMS threshold used on RPN proposals
-_C.MODEL.RPN.NMS_THRESH = 0.7
-
-# ---------------------------------------------------------------------------- #
-# ROI HEADS options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_HEADS = CN()
-_C.MODEL.ROI_HEADS.NAME = "Res5ROIHeads"
-# Number of foreground classes
-_C.MODEL.ROI_HEADS.NUM_CLASSES = 80
-# Names of the input feature maps to be used by ROI heads
-# Currently all heads (box, mask, ...) use the same input feature map list
-# e.g., ["p2", "p3", "p4", "p5"] is commonly used for FPN
-_C.MODEL.ROI_HEADS.IN_FEATURES = ["res4"]
-# IOU overlap ratios [IOU_THRESHOLD]
-# Overlap threshold for an RoI to be considered background (if < IOU_THRESHOLD)
-# Overlap threshold for an RoI to be considered foreground (if >= IOU_THRESHOLD)
-_C.MODEL.ROI_HEADS.IOU_THRESHOLDS = [0.5]
-_C.MODEL.ROI_HEADS.IOU_LABELS = [0, 1]
-# RoI minibatch size *per image* (number of regions of interest [ROIs])
-# Total number of RoIs per training minibatch =
-# ROI_HEADS.BATCH_SIZE_PER_IMAGE * SOLVER.IMS_PER_BATCH
-# E.g., a common configuration is: 512 * 16 = 8192
-_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
-# Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0)
-_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25
-
-# Only used on test mode
-
-# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to
-# balance obtaining high recall with not having too many low precision
-# detections that will slow down inference post processing steps (like NMS)
-# A default threshold of 0.0 increases AP by ~0.2-0.3 but significantly slows down
-# inference.
-_C.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05
-# Overlap threshold used for non-maximum suppression (suppress boxes with
-# IoU >= this threshold)
-_C.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5
-# If True, augment proposals with ground-truth boxes before sampling proposals to
-# train ROI heads.
-_C.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT = True
-
-# ---------------------------------------------------------------------------- #
-# Box Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_BOX_HEAD = CN()
-# C4 doesn't use the head name option
-# Options for non-C4 models: FastRCNNConvFCHead,
-_C.MODEL.ROI_BOX_HEAD.NAME = ""
-# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets
-# These are empirically chosen to approximately lead to unit variance targets
-_C.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10.0, 10.0, 5.0, 5.0)
-# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1.
-_C.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 0.0
-_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2"
-
-_C.MODEL.ROI_BOX_HEAD.NUM_FC = 0
-# Hidden layer dimension for FC layers in the RoI box head
-_C.MODEL.ROI_BOX_HEAD.FC_DIM = 1024
-_C.MODEL.ROI_BOX_HEAD.NUM_CONV = 0
-# Channel dimension for Conv layers in the RoI box head
-_C.MODEL.ROI_BOX_HEAD.CONV_DIM = 256
-# Normalization method for the convolution layers.
-# Options: "" (no norm), "GN", "SyncBN".
-_C.MODEL.ROI_BOX_HEAD.NORM = ""
-# Whether to use class agnostic for bbox regression
-_C.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG = False
-# If true, RoI heads use bounding boxes predicted by the box head rather than proposal boxes.
-_C.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = False
-
-# ---------------------------------------------------------------------------- #
-# Cascaded Box Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_BOX_CASCADE_HEAD = CN()
-# The number of cascade stages is implicitly defined by the length of the following two configs.
-_C.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS = (
- (10.0, 10.0, 5.0, 5.0),
- (20.0, 20.0, 10.0, 10.0),
- (30.0, 30.0, 15.0, 15.0),
-)
-_C.MODEL.ROI_BOX_CASCADE_HEAD.IOUS = (0.5, 0.6, 0.7)
-
-
-# ---------------------------------------------------------------------------- #
-# Mask Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_MASK_HEAD = CN()
-_C.MODEL.ROI_MASK_HEAD.NAME = "MaskRCNNConvUpsampleHead"
-_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_MASK_HEAD.NUM_CONV = 0 # The number of convs in the mask head
-_C.MODEL.ROI_MASK_HEAD.CONV_DIM = 256
-# Normalization method for the convolution layers.
-# Options: "" (no norm), "GN", "SyncBN".
-_C.MODEL.ROI_MASK_HEAD.NORM = ""
-# Whether to use class agnostic for mask prediction
-_C.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = False
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "ROIAlignV2"
-
-
-# ---------------------------------------------------------------------------- #
-# Keypoint Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_KEYPOINT_HEAD = CN()
-_C.MODEL.ROI_KEYPOINT_HEAD.NAME = "KRCNNConvDeconvUpsampleHead"
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS = tuple(512 for _ in range(8))
-_C.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 17 # 17 is the number of keypoints in COCO.
-
-# Images with too few (or no) keypoints are excluded from training.
-_C.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE = 1
-# Normalize by the total number of visible keypoints in the minibatch if True.
-# Otherwise, normalize by the total number of keypoints that could ever exist
-# in the minibatch.
-# The keypoint softmax loss is only calculated on visible keypoints.
-# Since the number of visible keypoints can vary significantly between
-# minibatches, this has the effect of up-weighting the importance of
-# minibatches with few visible keypoints. (Imagine the extreme case of
-# only one visible keypoint versus N: in the case of N, each one
-# contributes 1/N to the gradient compared to the single keypoint
-# determining the gradient direction). Instead, we can normalize the
-# loss by the total number of keypoints, if it were the case that all
-# keypoints were visible in a full minibatch. (Returning to the example,
-# this means that the one visible keypoint contributes as much as each
-# of the N keypoints.)
-_C.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True
-# Multi-task loss weight to use for keypoints
-# Recommended values:
-# - use 1.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is True
-# - use 4.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is False
-_C.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT = 1.0
-# Type of pooling operation applied to the incoming feature map for each RoI
-_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE = "ROIAlignV2"
-
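
The normalization choice documented above amounts to dividing the summed keypoint loss either by the number of visible keypoints or by the number of keypoints that could exist in the minibatch; a toy numeric sketch (made-up numbers, not detectron2 code):

```python
import torch

per_keypoint_loss = torch.tensor([0.8, 1.2, 0.5])  # 3 visible keypoints this minibatch
num_visible = per_keypoint_loss.numel()
num_possible = 2 * 17                               # e.g. 2 instances x 17 COCO keypoints

loss_if_true = per_keypoint_loss.sum() / num_visible    # NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True
loss_if_false = per_keypoint_loss.sum() / num_possible  # NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = False
print(float(loss_if_true), float(loss_if_false))        # ~0.833 vs ~0.074
```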
-# ---------------------------------------------------------------------------- #
-# Semantic Segmentation Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.SEM_SEG_HEAD = CN()
-_C.MODEL.SEM_SEG_HEAD.NAME = "SemSegFPNHead"
-_C.MODEL.SEM_SEG_HEAD.IN_FEATURES = ["p2", "p3", "p4", "p5"]
-# Label in the semantic segmentation ground truth that is ignored, i.e., no loss is calculated for
-# the corresponding pixel.
-_C.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255
-# Number of classes in the semantic segmentation head
-_C.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 54
-# Number of channels in the 3x3 convs inside semantic-FPN heads.
-_C.MODEL.SEM_SEG_HEAD.CONVS_DIM = 128
-# Outputs from semantic-FPN heads are up-scaled to the COMMON_STRIDE stride.
-_C.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4
-# Normalization method for the convolution layers. Options: "" (no norm), "GN".
-_C.MODEL.SEM_SEG_HEAD.NORM = "GN"
-_C.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT = 1.0
-
-_C.MODEL.PANOPTIC_FPN = CN()
-# Scaling of all losses from instance detection / segmentation head.
-_C.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT = 1.0
-
-# options when combining instance & semantic segmentation outputs
-_C.MODEL.PANOPTIC_FPN.COMBINE = CN({"ENABLED": True})
-_C.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH = 0.5
-_C.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT = 4096
-_C.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = 0.5
-
-
-# ---------------------------------------------------------------------------- #
-# RetinaNet Head
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RETINANET = CN()
-
-# This is the number of foreground classes.
-_C.MODEL.RETINANET.NUM_CLASSES = 80
-
-_C.MODEL.RETINANET.IN_FEATURES = ["p3", "p4", "p5", "p6", "p7"]
-
-# Convolutions to use in the cls and bbox tower
-# NOTE: this doesn't include the last conv for logits
-_C.MODEL.RETINANET.NUM_CONVS = 4
-
-# IoU overlap ratio [bg, fg] for labeling anchors.
-# Anchors with < bg are labeled negative (0)
-# Anchors with >= bg and < fg are ignored (-1)
-# Anchors with >= fg are labeled positive (1)
-_C.MODEL.RETINANET.IOU_THRESHOLDS = [0.4, 0.5]
-_C.MODEL.RETINANET.IOU_LABELS = [0, -1, 1]
-
-# Prior prob for rare case (i.e. foreground) at the beginning of training.
-# This is used to set the bias for the logits layer of the classifier subnet.
-# This improves training stability in the case of heavy class imbalance.
-_C.MODEL.RETINANET.PRIOR_PROB = 0.01
-
-# Inference cls score threshold, only anchors with score > INFERENCE_TH are
-# considered for inference (to improve speed)
-_C.MODEL.RETINANET.SCORE_THRESH_TEST = 0.05
-_C.MODEL.RETINANET.TOPK_CANDIDATES_TEST = 1000
-_C.MODEL.RETINANET.NMS_THRESH_TEST = 0.5
-
-# Weights on (dx, dy, dw, dh) for normalizing Retinanet anchor regression targets
-_C.MODEL.RETINANET.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
-
-# Loss parameters
-_C.MODEL.RETINANET.FOCAL_LOSS_GAMMA = 2.0
-_C.MODEL.RETINANET.FOCAL_LOSS_ALPHA = 0.25
-_C.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA = 0.1
-
-
-# ---------------------------------------------------------------------------- #
-# ResNe[X]t options (ResNets = {ResNet, ResNeXt})
-# Note that parts of a resnet may be used for both the backbone and the head
-# These options apply to both
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RESNETS = CN()
-
-_C.MODEL.RESNETS.DEPTH = 50
-_C.MODEL.RESNETS.OUT_FEATURES = ["res4"] # res4 for C4 backbone, res2..5 for FPN backbone
-
-# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt
-_C.MODEL.RESNETS.NUM_GROUPS = 1
-
-# Options: FrozenBN, GN, "SyncBN", "BN"
-_C.MODEL.RESNETS.NORM = "FrozenBN"
-
-# Baseline width of each group.
-# Scaling this parameter will scale the width of all bottleneck layers.
-_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64
-
-# Place the stride 2 conv on the 1x1 filter
-# Use True only for the original MSRA ResNet; use False for C2 and Torch models
-_C.MODEL.RESNETS.STRIDE_IN_1X1 = True
-
-# Apply dilation in stage "res5"
-_C.MODEL.RESNETS.RES5_DILATION = 1
-
-# Output width of res2. Scaling this parameter will scale the width of all 1x1 convs in ResNet
-# For R18 and R34, this needs to be set to 64
-_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256
-_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64
-
-# Apply Deformable Convolution in stages
-# Specify if apply deform_conv on Res2, Res3, Res4, Res5
-_C.MODEL.RESNETS.DEFORM_ON_PER_STAGE = [False, False, False, False]
-# Use True to use modulated deform_conv (DeformableV2, https://arxiv.org/abs/1811.11168);
-# Use False for DeformableV1.
-_C.MODEL.RESNETS.DEFORM_MODULATED = False
-# Number of groups in deformable conv.
-_C.MODEL.RESNETS.DEFORM_NUM_GROUPS = 1
-
-
-# ---------------------------------------------------------------------------- #
-# Solver
-# ---------------------------------------------------------------------------- #
-_C.SOLVER = CN()
-
-# See detectron2/solver/build.py for LR scheduler options
-_C.SOLVER.LR_SCHEDULER_NAME = "WarmupMultiStepLR"
-
-_C.SOLVER.MAX_ITER = 40000
-
-_C.SOLVER.BASE_LR = 0.001
-
-_C.SOLVER.MOMENTUM = 0.9
-
-_C.SOLVER.WEIGHT_DECAY = 0.0001
-# The weight decay that's applied to parameters of normalization layers
-# (typically the affine transformation)
-_C.SOLVER.WEIGHT_DECAY_NORM = 0.0
-
-_C.SOLVER.GAMMA = 0.1
-# The iteration number to decrease learning rate by GAMMA.
-_C.SOLVER.STEPS = (30000,)
-
-_C.SOLVER.WARMUP_FACTOR = 1.0 / 1000
-_C.SOLVER.WARMUP_ITERS = 1000
-_C.SOLVER.WARMUP_METHOD = "linear"
-
-# Save a checkpoint after every this number of iterations
-_C.SOLVER.CHECKPOINT_PERIOD = 5000
-
-# Number of images per batch across all machines.
-# If we have 16 GPUs and IMS_PER_BATCH = 32,
-# each GPU will see 2 images per batch.
-_C.SOLVER.IMS_PER_BATCH = 16
-
-# Detectron v1 (and previous detection code) used a 2x higher LR and 0 WD for
-# biases. This is not useful (at least for recent models). You should avoid
-# changing these and they exist only to reproduce Detectron v1 training if
-# desired.
-_C.SOLVER.BIAS_LR_FACTOR = 1.0
-_C.SOLVER.WEIGHT_DECAY_BIAS = _C.SOLVER.WEIGHT_DECAY
-
-# Gradient clipping
-_C.SOLVER.CLIP_GRADIENTS = CN({"ENABLED": False})
-# Type of gradient clipping, currently 2 values are supported:
-# - "value": the absolute values of elements of each gradients are clipped
-# - "norm": the norm of the gradient for each parameter is clipped thus
-# affecting all elements in the parameter
-_C.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value"
-# Maximum absolute value used for clipping gradients
-_C.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0
-# Floating point number p for L-p norm to be used with the "norm"
-# gradient clipping type; for L-inf, please specify .inf
-_C.SOLVER.CLIP_GRADIENTS.NORM_TYPE = 2.0
-
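
The two `CLIP_TYPE` modes above correspond to element-wise clamping versus norm rescaling; a rough illustration with PyTorch's generic utilities (not detectron2's internal optimizer wrapper):

```python
import torch

param = torch.nn.Parameter(torch.tensor([3.0, -4.0]))
param.grad = torch.tensor([10.0, -0.5])

# CLIP_TYPE = "value": clamp each gradient element into [-CLIP_VALUE, CLIP_VALUE]
torch.nn.utils.clip_grad_value_([param], clip_value=1.0)
print(param.grad)                  # tensor([ 1.0000, -0.5000])

# CLIP_TYPE = "norm": rescale so the L-p norm of the gradient stays <= CLIP_VALUE
param.grad = torch.tensor([10.0, -0.5])
torch.nn.utils.clip_grad_norm_([param], max_norm=1.0, norm_type=2.0)
print(param.grad.norm())           # ~1.0
```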
-# ---------------------------------------------------------------------------- #
-# Specific test options
-# ---------------------------------------------------------------------------- #
-_C.TEST = CN()
-# For end-to-end tests to verify the expected accuracy.
-# Each item is [task, metric, value, tolerance]
-# e.g.: [['bbox', 'AP', 38.5, 0.2]]
-_C.TEST.EXPECTED_RESULTS = []
-# The period (in terms of steps) to evaluate the model during training.
-# Set to 0 to disable.
-_C.TEST.EVAL_PERIOD = 0
-# The sigmas used to calculate keypoint OKS. See http://cocodataset.org/#keypoints-eval
-# When empty it will use the defaults in COCO.
-# Otherwise it should have the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS.
-_C.TEST.KEYPOINT_OKS_SIGMAS = []
-# Maximum number of detections to return per image during inference (100 is
-# based on the limit established for the COCO dataset).
-_C.TEST.DETECTIONS_PER_IMAGE = 100
-
-_C.TEST.AUG = CN({"ENABLED": False})
-_C.TEST.AUG.MIN_SIZES = (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
-_C.TEST.AUG.MAX_SIZE = 4000
-_C.TEST.AUG.FLIP = True
-
-_C.TEST.PRECISE_BN = CN({"ENABLED": False})
-_C.TEST.PRECISE_BN.NUM_ITER = 200
-
-# ---------------------------------------------------------------------------- #
-# Misc options
-# ---------------------------------------------------------------------------- #
-# Directory where output files are written
-_C.OUTPUT_DIR = "./output"
-# Set seed to negative to fully randomize everything.
-# Set seed to positive to use a fixed seed. Note that a fixed seed does not
-# guarantee fully deterministic behavior.
-_C.SEED = -1
-# Benchmark different cudnn algorithms.
-# If input images have very different sizes, this option will have large overhead
-# for about 10k iterations. It usually hurts total time, but can benefit for certain models.
-# If input images have the same or similar sizes, benchmark is often helpful.
-_C.CUDNN_BENCHMARK = False
-# The period (in terms of steps) for minibatch visualization at train time.
-# Set to 0 to disable.
-_C.VIS_PERIOD = 0
-
-# global config is for quick hack purposes.
-# You can set them in command line or config files,
-# and access it with:
-#
-# from detectron2.config import global_cfg
-# print(global_cfg.HACK)
-#
-# Do not commit any configs into it.
-_C.GLOBAL = CN()
-_C.GLOBAL.HACK = 1.0
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py
deleted file mode 100644
index e914255c67b3ef34f8c793a5311584fecd9f82d1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Gao Pengbing https://github.com/nbgao
-# --------------------------------------------------------
-
-from openvqa.core.base_cfgs import BaseCfgs
-
-
-class Cfgs(BaseCfgs):
- def __init__(self):
- super(Cfgs, self).__init__()
-
- self.HIGH_ORDER = False
- self.HIDDEN_SIZE = 512
- self.MFB_K = 5
- self.MFB_O = 1000
- self.LSTM_OUT_SIZE = 1024
- self.DROPOUT_R = 0.1
- self.I_GLIMPSES = 2
- self.Q_GLIMPSES = 2
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py b/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py
deleted file mode 100644
index 5e03ea2f428a271fcc85de1d97a17a8914a8978a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import numpy as np
-from typing import List, Optional, Union
-import torch
-
-from detectron2.config import configurable
-
-from . import detection_utils as utils
-from . import transforms as T
-
-"""
-This file contains the default mapping that's applied to "dataset dicts".
-"""
-
-__all__ = ["DatasetMapper"]
-
-
-class DatasetMapper:
- """
- A callable which takes a dataset dict in Detectron2 Dataset format,
-    and maps it into a format used by the model.
-
- This is the default callable to be used to map your dataset dict into training data.
-    You may need to follow it to implement your own mapper for customized logic,
- such as a different way to read or transform images.
- See :doc:`/tutorials/data_loading` for details.
-
- The callable currently does the following:
-
- 1. Read the image from "file_name"
- 2. Applies cropping/geometric transforms to the image and annotations
- 3. Prepare data and annotations to Tensor and :class:`Instances`
- """
-
- @configurable
- def __init__(
- self,
- is_train: bool,
- *,
- augmentations: List[Union[T.Augmentation, T.Transform]],
- image_format: str,
- use_instance_mask: bool = False,
- use_keypoint: bool = False,
- instance_mask_format: str = "polygon",
- keypoint_hflip_indices: Optional[np.ndarray] = None,
- precomputed_proposal_topk: Optional[int] = None,
- recompute_boxes: bool = False,
- filter_open_cls: bool = False,
- clip_crop: bool = False,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- is_train: whether it's used in training or inference
- augmentations: a list of augmentations or deterministic transforms to apply
- image_format: an image format supported by :func:`detection_utils.read_image`.
- use_instance_mask: whether to process instance segmentation annotations, if available
- use_keypoint: whether to process keypoint annotations if available
- instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation
- masks into this format.
- keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices`
- precomputed_proposal_topk: if given, will load pre-computed
- proposals from dataset_dict and keep the top k proposals for each image.
- recompute_boxes: whether to overwrite bounding box annotations
- by computing tight bounding boxes from instance mask annotations.
- filter_open_cls: open-set setting, filter the open-set categories during training
- clip_crop: the mode that directly use CLIP on cropped image regions
- """
- if recompute_boxes:
- assert use_instance_mask, "recompute_boxes requires instance masks"
- # fmt: off
- self.is_train = is_train
- self.augmentations = T.AugmentationList(augmentations)
- self.image_format = image_format
- self.use_instance_mask = use_instance_mask
- self.instance_mask_format = instance_mask_format
- self.use_keypoint = use_keypoint
- self.keypoint_hflip_indices = keypoint_hflip_indices
- self.proposal_topk = precomputed_proposal_topk
- self.recompute_boxes = recompute_boxes
- self.filter_open_cls = filter_open_cls
- self.clip_crop = clip_crop
- # fmt: on
- logger = logging.getLogger(__name__)
- mode = "training" if is_train else "inference"
- logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}")
-
- @classmethod
- def from_config(cls, cfg, is_train: bool = True):
- augs = utils.build_augmentation(cfg, is_train)
- if cfg.INPUT.CROP.ENABLED and is_train:
- augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE))
- recompute_boxes = cfg.MODEL.MASK_ON
- else:
- recompute_boxes = False
-
- ret = {
- "is_train": is_train,
- "augmentations": augs,
- "image_format": cfg.INPUT.FORMAT,
- "use_instance_mask": cfg.MODEL.MASK_ON,
- "instance_mask_format": cfg.INPUT.MASK_FORMAT,
- "use_keypoint": cfg.MODEL.KEYPOINT_ON,
- "recompute_boxes": recompute_boxes,
- }
-
- if cfg.MODEL.KEYPOINT_ON:
- ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN)
-
- if cfg.MODEL.LOAD_PROPOSALS:
- ret["precomputed_proposal_topk"] = (
- cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN
- if is_train
- else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST
- )
- # open-set setting, filter the open-set categories during training
- # filter_open_cls = cfg.SOLVER.IMS_PER_BATCH < 10 # debug
- # if filter_open_cls:
- # ret["filter_open_cls"] = True
- # CLIP inference on cropped image regions
- if cfg.MODEL.META_ARCHITECTURE in ["CLIPRCNN", "CLIPFastRCNN"]:
- ret["clip_crop"] = True
- return ret
-
- def __call__(self, dataset_dict):
- """
- Args:
- dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format.
-
- Returns:
- dict: a format that builtin models in detectron2 accept
- """
- dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
- # USER: Write your own image loading if it's not from a file
- image = utils.read_image(dataset_dict["file_name"], format=self.image_format)
- utils.check_image_size(dataset_dict, image)
-
- # USER: Remove if you don't do semantic/panoptic segmentation.
- if "sem_seg_file_name" in dataset_dict:
- sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2)
- else:
- sem_seg_gt = None
-
- aug_input = T.AugInput(image, sem_seg=sem_seg_gt)
- transforms = self.augmentations(aug_input)
- # if self.clip_crop: # load original images into CLIP model, without resizing
- # pass
- # else:
- image, sem_seg_gt = aug_input.image, aug_input.sem_seg
-
- image_shape = image.shape[:2] # h, w
- # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory,
- # but not efficient on large generic data structures due to the use of pickle & mp.Queue.
- # Therefore it's important to use torch.Tensor.
- dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))
- if sem_seg_gt is not None:
- dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long"))
-
- # USER: Remove if you don't use pre-computed proposals.
- # Most users would not need this feature.
- if self.proposal_topk is not None:
- utils.transform_proposals(
- dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk
- )
-
- if not self.is_train:
- if self.clip_crop: # still load the GT annotations
- pass
- else:
- # USER: Modify this if you want to keep them for some reason.
- dataset_dict.pop("annotations", None)
- dataset_dict.pop("sem_seg_file_name", None)
- return dataset_dict
-
- if "annotations" in dataset_dict:
- # if self.filter_open_cls: # filter categories for open-set training
- # obj_annos = dataset_dict['annotations']
- # clean_obj_annos = [obj_anno for obj_anno in obj_annos if obj_anno['frequency'] != 'r'] # filter rare classes
- # if len(clean_obj_annos) == 0: # empty annotation
- # print("\n\nImage {} has no annotation after filtering open-set classes!\n\n".format(dataset_dict['image_id']))
- # clean_obj_annos = obj_annos[0] # keep one for compatability, fix it later
- # dataset_dict['annotations'] = clean_obj_annos
-
- # USER: Modify this if you want to keep them for some reason.
- for anno in dataset_dict["annotations"]:
- if not self.use_instance_mask:
- anno.pop("segmentation", None)
- if not self.use_keypoint:
- anno.pop("keypoints", None)
-
- # USER: Implement additional transformations if you have other types of data
- annos = [
- utils.transform_instance_annotations(
- obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices
- )
- for obj in dataset_dict.pop("annotations")
- if obj.get("iscrowd", 0) == 0
- ]
- instances = utils.annotations_to_instances(
- annos, image_shape, mask_format=self.instance_mask_format
- )
-
- # After transforms such as cropping are applied, the bounding box may no longer
- # tightly bound the object. As an example, imagine a triangle object
- # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight
- # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to
- # the intersection of original bounding box and the cropping box.
- if self.recompute_boxes:
- instances.gt_boxes = instances.gt_masks.get_bounding_boxes()
- dataset_dict["instances"] = utils.filter_empty_instances(instances)
- return dataset_dict
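
For orientation, a mapper of this kind is normally handed to a detectron2 data loader; the hedged sketch below uses the stock `DatasetMapper` API (this file defines a RegionCLIP variant), and the dataset name is a placeholder that would have to be registered first:

```python
from detectron2.config import get_cfg
from detectron2.data import DatasetMapper, build_detection_train_loader

cfg = get_cfg()
cfg.DATASETS.TRAIN = ("my_registered_dataset",)   # placeholder, must be registered

mapper = DatasetMapper(cfg, is_train=True)        # built through the from_config hook
loader = build_detection_train_loader(cfg, mapper=mapper)
for batch in loader:
    # each element is the dict produced by __call__ above: "image", "instances", ...
    break
```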
diff --git a/spaces/CXD200/QSign/README.md b/spaces/CXD200/QSign/README.md
deleted file mode 100644
index 113f580f7c693f7fd8dc9051ca915f2f86dfeab3..0000000000000000000000000000000000000000
--- a/spaces/CXD200/QSign/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: QSign
-emoji: 💻
-colorFrom: gray
-colorTo: gray
-sdk: docker
-pinned: false
-duplicated_from: AIxPha/QSign
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Curranj/Words_To_SQL/README.md b/spaces/Curranj/Words_To_SQL/README.md
deleted file mode 100644
index 65458ab203d20ff01bb4a0c70f84b25c43568dc7..0000000000000000000000000000000000000000
--- a/spaces/Curranj/Words_To_SQL/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Words_to_sql
-emoji: 🐨
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.6
-app_file: app.py
-pinned: true
----
-
-Natural Language to SQL
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp
deleted file mode 100644
index 1704a60d1aeeecd4cd08b44a75ff2b0cf7167fac..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp
+++ /dev/null
@@ -1,395 +0,0 @@
-#include "dcn_v2_im2col_cpu.h"
-#include <cstdio>
-#include <algorithm>
-#include <cstring>
-
-#include <ATen/ATen.h>
-//#include <ATen/cuda/CUDAContext.h>
-
-#include <TH/TH.h>
-//#include <THC/THCAtomics.cuh>
-//#include <THC/THCDeviceUtils.cuh>
-
-// modified from the CUDA version for CPU use by Daniel K. Suhendro
-
-/*#define CUDA_KERNEL_LOOP(i, n) \
- for (int i = blockIdx.x * blockDim.x + threadIdx.x; \
- i < (n); \
- i += blockDim.x * gridDim.x)
-
-const int CUDA_NUM_THREADS = 1024;
-inline int GET_BLOCKS(const int N)
-{
- return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS;
-}*/
-
-
-float dmcn_im2col_bilinear_cpu(const float *bottom_data, const int data_width,
- const int height, const int width, float h, float w)
-{
- int h_low = floor(h);
- int w_low = floor(w);
- int h_high = h_low + 1;
- int w_high = w_low + 1;
-
- float lh = h - h_low;
- float lw = w - w_low;
- float hh = 1 - lh, hw = 1 - lw;
-
- float v1 = 0;
- if (h_low >= 0 && w_low >= 0)
- v1 = bottom_data[h_low * data_width + w_low];
- float v2 = 0;
- if (h_low >= 0 && w_high <= width - 1)
- v2 = bottom_data[h_low * data_width + w_high];
- float v3 = 0;
- if (h_high <= height - 1 && w_low >= 0)
- v3 = bottom_data[h_high * data_width + w_low];
- float v4 = 0;
- if (h_high <= height - 1 && w_high <= width - 1)
- v4 = bottom_data[h_high * data_width + w_high];
-
- float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
-
- float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
- return val;
-}
-
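
The bilinear sampling in `dmcn_im2col_bilinear_cpu` reads the four neighbouring pixels of a fractional location and blends them with the usual bilinear weights; a small Python sketch of the same idea (zero-padding out-of-range taps, as the guards above do):

```python
import math

def bilinear_sample(data, height, width, h, w):
    h_low, w_low = math.floor(h), math.floor(w)
    h_high, w_high = h_low + 1, w_low + 1
    lh, lw = h - h_low, w - w_low
    hh, hw = 1 - lh, 1 - lw

    def at(r, c):
        # out-of-range taps contribute zero
        return data[r][c] if 0 <= r < height and 0 <= c < width else 0.0

    return (hh * hw * at(h_low, w_low) + hh * lw * at(h_low, w_high)
            + lh * hw * at(h_high, w_low) + lh * lw * at(h_high, w_high))

grid = [[0.0, 1.0], [2.0, 3.0]]
print(bilinear_sample(grid, 2, 2, 0.5, 0.5))  # 1.5
```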
-float dmcn_get_gradient_weight_cpu(float argmax_h, float argmax_w,
- const int h, const int w, const int height, const int width)
-{
- if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)
- {
- //empty
- return 0;
- }
-
- int argmax_h_low = floor(argmax_h);
- int argmax_w_low = floor(argmax_w);
- int argmax_h_high = argmax_h_low + 1;
- int argmax_w_high = argmax_w_low + 1;
-
- float weight = 0;
- if (h == argmax_h_low && w == argmax_w_low)
- weight = (h + 1 - argmax_h) * (w + 1 - argmax_w);
- if (h == argmax_h_low && w == argmax_w_high)
- weight = (h + 1 - argmax_h) * (argmax_w + 1 - w);
- if (h == argmax_h_high && w == argmax_w_low)
- weight = (argmax_h + 1 - h) * (w + 1 - argmax_w);
- if (h == argmax_h_high && w == argmax_w_high)
- weight = (argmax_h + 1 - h) * (argmax_w + 1 - w);
- return weight;
-}
-
-float dmcn_get_coordinate_weight_cpu(float argmax_h, float argmax_w,
- const int height, const int width, const float *im_data,
- const int data_width, const int bp_dir)
-{
- if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)
- {
- //empty
- return 0;
- }
-
- int argmax_h_low = floor(argmax_h);
- int argmax_w_low = floor(argmax_w);
- int argmax_h_high = argmax_h_low + 1;
- int argmax_w_high = argmax_w_low + 1;
-
- float weight = 0;
-
- if (bp_dir == 0)
- {
- if (argmax_h_low >= 0 && argmax_w_low >= 0)
- weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low];
- if (argmax_h_low >= 0 && argmax_w_high <= width - 1)
- weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high];
- if (argmax_h_high <= height - 1 && argmax_w_low >= 0)
- weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low];
- if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)
- weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high];
- }
- else if (bp_dir == 1)
- {
- if (argmax_h_low >= 0 && argmax_w_low >= 0)
- weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low];
- if (argmax_h_low >= 0 && argmax_w_high <= width - 1)
- weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high];
- if (argmax_h_high <= height - 1 && argmax_w_low >= 0)
- weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low];
- if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)
- weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high];
- }
-
- return weight;
-}
-
-void modulated_deformable_im2col_cpu_kernel(const int n, const float *data_im, const float *data_offset, const float *data_mask,
- const int height, const int width, const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w,
- const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int channel_per_deformable_group,
- const int batch_size, const int num_channels, const int deformable_group,
- const int height_col, const int width_col,
- float *data_col)
-{
- // launch channels * batch_size * height_col * width_col cores
-  for(int index=0; index<n; index++)
-  {
-    // index of the output matrix
-    const int w_col = index % width_col;
-    const int h_col = (index / width_col) % height_col;
-    const int b_col = (index / width_col / height_col / num_channels) % batch_size;
-    const int c_im = (index / width_col / height_col) % num_channels;
-    const int c_col = c_im * kernel_h * kernel_w;
-
-    // compute deformable group index
-    const int deformable_group_index = c_im / channel_per_deformable_group;
-
-    const int h_in = h_col * stride_h - pad_h;
-    const int w_in = w_col * stride_w - pad_w;
-
-    float *data_col_ptr = data_col + ((b_col * num_channels * kernel_w * kernel_h + c_col) * height_col + h_col) * width_col + w_col;
-    const float *data_im_ptr = data_im + (b_col * num_channels + c_im) * height * width;
-    const float *data_offset_ptr = data_offset + (b_col * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;
-    const float *data_mask_ptr = data_mask + (b_col * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col;
-
-    for (int i = 0; i < kernel_h; ++i)
-    {
-      for (int j = 0; j < kernel_w; ++j)
-      {
-        const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col;
-        const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col;
-        const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_col) * width_col + w_col;
-        const float offset_h = data_offset_ptr[data_offset_h_ptr];
-        const float offset_w = data_offset_ptr[data_offset_w_ptr];
-        const float mask = data_mask_ptr[data_mask_hw_ptr];
-        float val = static_cast<float>(0);
- const float h_im = h_in + i * dilation_h + offset_h;
- const float w_im = w_in + j * dilation_w + offset_w;
- //if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) {
- if (h_im > -1 && w_im > -1 && h_im < height && w_im < width)
- {
- //const float map_h = i * dilation_h + offset_h;
- //const float map_w = j * dilation_w + offset_w;
- //const int cur_height = height - h_in;
- //const int cur_width = width - w_in;
- //val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, cur_height, cur_width, map_h, map_w);
- val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, height, width, h_im, w_im);
- }
- *data_col_ptr = val * mask;
- // data_col_ptr += batch_size * height_col * width_col;
- data_col_ptr += height_col * width_col;
- }
- }
- }
-}
-
-void modulated_deformable_col2im_cpu_kernel(const int n, const float *data_col, const float *data_offset, const float *data_mask,
- const int channels, const int height, const int width,
- const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w,
- const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int channel_per_deformable_group,
- const int batch_size, const int deformable_group,
- const int height_col, const int width_col,
- float *grad_im)
-{
- for(int index = 0; index < n; index++)
- {
- const int j = (index / width_col / height_col / batch_size) % kernel_w;
- const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h;
- const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h;
- // compute the start and end of the output
-
- const int deformable_group_index = c / channel_per_deformable_group;
-
- int w_out = index % width_col;
- int h_out = (index / width_col) % height_col;
- int b = (index / width_col / height_col) % batch_size;
- int w_in = w_out * stride_w - pad_w;
- int h_in = h_out * stride_h - pad_h;
-
- const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;
- const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col;
- const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out;
- const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out;
- const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_out) * width_col + w_out;
- const float offset_h = data_offset_ptr[data_offset_h_ptr];
- const float offset_w = data_offset_ptr[data_offset_w_ptr];
- const float mask = data_mask_ptr[data_mask_hw_ptr];
- const float cur_inv_h_data = h_in + i * dilation_h + offset_h;
- const float cur_inv_w_data = w_in + j * dilation_w + offset_w;
-
- const float cur_top_grad = data_col[index] * mask;
- const int cur_h = (int)cur_inv_h_data;
- const int cur_w = (int)cur_inv_w_data;
-
- for (int dy = -2; dy <= 2; dy++)
- {
- for (int dx = -2; dx <= 2; dx++)
- {
- if (cur_h + dy >= 0 && cur_h + dy < height &&
- cur_w + dx >= 0 && cur_w + dx < width &&
- abs(cur_inv_h_data - (cur_h + dy)) < 1 &&
- abs(cur_inv_w_data - (cur_w + dx)) < 1)
- {
- int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx;
- float weight = dmcn_get_gradient_weight_cpu(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width);
- //atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad);
- *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad;
-
- }
- }
- }
- }
-}
-
-void modulated_deformable_col2im_coord_cpu_kernel(const int n, const float *data_col, const float *data_im,
- const float *data_offset, const float *data_mask,
- const int channels, const int height, const int width,
- const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w,
- const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int channel_per_deformable_group,
- const int batch_size, const int offset_channels, const int deformable_group,
- const int height_col, const int width_col,
- float *grad_offset, float *grad_mask)
-{
- for(int index = 0; index < n; index++)
- {
- float val = 0, mval = 0;
- int w = index % width_col;
- int h = (index / width_col) % height_col;
- int c = (index / width_col / height_col) % offset_channels;
- int b = (index / width_col / height_col) / offset_channels;
- // compute the start and end of the output
-
- const int deformable_group_index = c / (2 * kernel_h * kernel_w);
- const int col_step = kernel_h * kernel_w;
- int cnt = 0;
- const float *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * batch_size * width_col * height_col;
- const float *data_im_ptr = data_im + (b * deformable_group + deformable_group_index) * channel_per_deformable_group / kernel_h / kernel_w * height * width;
- const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;
- const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col;
-
- const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w;
-
- for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step)
- {
- const int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w;
- const int bp_dir = offset_c % 2;
-
- int j = (col_pos / width_col / height_col / batch_size) % kernel_w;
- int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h;
- int w_out = col_pos % width_col;
- int h_out = (col_pos / width_col) % height_col;
- int w_in = w_out * stride_w - pad_w;
- int h_in = h_out * stride_h - pad_h;
- const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out);
- const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out);
- const int data_mask_hw_ptr = (((i * kernel_w + j) * height_col + h_out) * width_col + w_out);
- const float offset_h = data_offset_ptr[data_offset_h_ptr];
- const float offset_w = data_offset_ptr[data_offset_w_ptr];
- const float mask = data_mask_ptr[data_mask_hw_ptr];
- float inv_h = h_in + i * dilation_h + offset_h;
- float inv_w = w_in + j * dilation_w + offset_w;
- if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width)
- {
- inv_h = inv_w = -2;
- }
- else
- {
- mval += data_col_ptr[col_pos] * dmcn_im2col_bilinear_cpu(data_im_ptr + cnt * height * width, width, height, width, inv_h, inv_w);
- }
- const float weight = dmcn_get_coordinate_weight_cpu(
- inv_h, inv_w,
- height, width, data_im_ptr + cnt * height * width, width, bp_dir);
- val += weight * data_col_ptr[col_pos] * mask;
- cnt += 1;
- }
- // KERNEL_ASSIGN(grad_offset[index], offset_req, val);
- grad_offset[index] = val;
- if (offset_c % 2 == 0)
- // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w], mask_req, mval);
- grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w] = mval;
- }
-}
-
-void modulated_deformable_im2col_cpu(const float* data_im, const float* data_offset, const float* data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group, float* data_col) {
- // num_axes should be smaller than block size
- const int channel_per_deformable_group = channels / deformable_group;
- const int num_kernels = channels * batch_size * height_col * width_col;
- modulated_deformable_im2col_cpu_kernel(
- num_kernels, data_im, data_offset, data_mask, height_im, width_im, kernel_h, kernel_w,
- pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, channel_per_deformable_group,
- batch_size, channels, deformable_group, height_col, width_col, data_col);
-
- /*cudaError_t err = cudaGetLastError();
- if (err != cudaSuccess)
- {
- printf("error in modulated_deformable_im2col_cuda: %s\n", cudaGetErrorString(err));
- }*/
-
-}
-
-void modulated_deformable_col2im_cpu(const float* data_col, const float* data_offset, const float* data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group, float* grad_im){
-
- const int channel_per_deformable_group = channels / deformable_group;
- const int num_kernels = channels * kernel_h * kernel_w * batch_size * height_col * width_col;
- modulated_deformable_col2im_cpu_kernel(
- num_kernels, data_col, data_offset, data_mask, channels, height_im, width_im,
- kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
- dilation_h, dilation_w, channel_per_deformable_group,
- batch_size, deformable_group, height_col, width_col, grad_im);
- /*cudaError_t err = cudaGetLastError();
- if (err != cudaSuccess)
- {
- printf("error in modulated_deformable_col2im_cuda: %s\n", cudaGetErrorString(err));
- }*/
-
-}
-
-void modulated_deformable_col2im_coord_cpu(const float* data_col, const float* data_im, const float* data_offset, const float* data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kernel_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group,
- float* grad_offset, float* grad_mask) {
- const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * kernel_w * deformable_group;
- const int channel_per_deformable_group = channels * kernel_h * kernel_w / deformable_group;
- modulated_deformable_col2im_coord_cpu_kernel(
- num_kernels, data_col, data_im, data_offset, data_mask, channels, height_im, width_im,
- kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
- dilation_h, dilation_w, channel_per_deformable_group,
- batch_size, 2 * kernel_h * kernel_w * deformable_group, deformable_group, height_col, width_col,
- grad_offset, grad_mask);
- /*cudaError_t err = cudaGetLastError();
- if (err != cudaSuccess)
- {
- printf("error in modulated_deformable_col2im_coord_cuda: %s\n", cudaGetErrorString(err));
- }*/
-}
\ No newline at end of file
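
Editorial aside: every routine in the deleted file above reduces to bilinear interpolation at a fractional sampling location, optionally modulated by a per-position mask. The sketch below (Python; the function name and signature are illustrative and not part of the original source) mirrors the neighbour-weight rule visible in `dmcn_get_gradient_weight_cpu`: each of the four integer pixels surrounding the sampling point receives the product of its one-dimensional linear weights, and every other pixel receives zero.

```python
import math

def bilinear_gradient_weight(argmax_h: float, argmax_w: float,
                             h: int, w: int,
                             height: int, width: int) -> float:
    """Weight that pixel (h, w) contributes to a sample at (argmax_h, argmax_w)."""
    # Samples outside the valid range contribute nothing.
    if argmax_h <= -1 or argmax_h >= height or argmax_w <= -1 or argmax_w >= width:
        return 0.0
    h_low, w_low = math.floor(argmax_h), math.floor(argmax_w)
    h_high, w_high = h_low + 1, w_low + 1
    if (h, w) == (h_low, w_low):
        return (h_low + 1 - argmax_h) * (w_low + 1 - argmax_w)
    if (h, w) == (h_low, w_high):
        return (h_low + 1 - argmax_h) * (argmax_w - w_low)
    if (h, w) == (h_high, w_low):
        return (argmax_h - h_low) * (w_low + 1 - argmax_w)
    if (h, w) == (h_high, w_high):
        return (argmax_h - h_low) * (argmax_w - w_low)
    return 0.0

# Example: a sample at (2.3, 4.7) gives its top-left neighbour (2, 4)
# a weight of 0.7 * 0.3 = 0.21.
assert abs(bilinear_gradient_weight(2.3, 4.7, 2, 4, 8, 8) - 0.21) < 1e-9
```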
diff --git a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md b/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md
deleted file mode 100644
index 3c75846a271c38ecc56724b3590536cdc366fc29..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cardiomyopathy Image Classification
-emoji: 🐠
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DHEIVER/CoronaryAngioSegment/README.md b/spaces/DHEIVER/CoronaryAngioSegment/README.md
deleted file mode 100644
index 32bd868daf127f136881a5daf5c43b865dbc04e3..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/CoronaryAngioSegment/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: CoronaryAngioSegment
-emoji: 🌖
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: KurtLin/CoronaryAngioSegment
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py
deleted file mode 100644
index e75654d7c30c552c1e1bd0492a85d40e8f27de40..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, cast
-
-from contourpy import FillType, LineType
-from contourpy.util.mpl_util import mpl_codes_to_offsets
-
-if TYPE_CHECKING:
- from contourpy._contourpy import (
- CoordinateArray, FillReturn, LineReturn, LineReturn_Separate, LineReturn_SeparateCode,
- )
-
-
-def filled_to_bokeh(
- filled: FillReturn,
- fill_type: FillType,
-) -> tuple[list[list[CoordinateArray]], list[list[CoordinateArray]]]:
- xs: list[list[CoordinateArray]] = []
- ys: list[list[CoordinateArray]] = []
- if fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset,
- FillType.OuterCode, FillType.ChunkCombinedCode):
- have_codes = fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode)
-
- for points, offsets in zip(*filled):
- if points is None:
- continue
- if have_codes:
- offsets = mpl_codes_to_offsets(offsets)
- xs.append([]) # New outer with zero or more holes.
- ys.append([])
- for i in range(len(offsets)-1):
- xys = points[offsets[i]:offsets[i+1]]
- xs[-1].append(xys[:, 0])
- ys[-1].append(xys[:, 1])
- elif fill_type in (FillType.ChunkCombinedCodeOffset, FillType.ChunkCombinedOffsetOffset):
- for points, codes_or_offsets, outer_offsets in zip(*filled):
- if points is None:
- continue
- for j in range(len(outer_offsets)-1):
- if fill_type == FillType.ChunkCombinedCodeOffset:
- codes = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]]
- offsets = mpl_codes_to_offsets(codes) + outer_offsets[j]
- else:
- offsets = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]+1]
- xs.append([]) # New outer with zero or more holes.
- ys.append([])
- for k in range(len(offsets)-1):
- xys = points[offsets[k]:offsets[k+1]]
- xs[-1].append(xys[:, 0])
- ys[-1].append(xys[:, 1])
- else:
- raise RuntimeError(f"Conversion of FillType {fill_type} to Bokeh is not implemented")
-
- return xs, ys
-
-
-def lines_to_bokeh(
- lines: LineReturn,
- line_type: LineType,
-) -> tuple[list[CoordinateArray], list[CoordinateArray]]:
- xs: list[CoordinateArray] = []
- ys: list[CoordinateArray] = []
-
- if line_type == LineType.Separate:
- if TYPE_CHECKING:
- lines = cast(LineReturn_Separate, lines)
- for line in lines:
- xs.append(line[:, 0])
- ys.append(line[:, 1])
- elif line_type == LineType.SeparateCode:
- if TYPE_CHECKING:
- lines = cast(LineReturn_SeparateCode, lines)
- for line in lines[0]:
- xs.append(line[:, 0])
- ys.append(line[:, 1])
- elif line_type in (LineType.ChunkCombinedCode, LineType.ChunkCombinedOffset):
- for points, offsets in zip(*lines):
- if points is None:
- continue
- if line_type == LineType.ChunkCombinedCode:
- offsets = mpl_codes_to_offsets(offsets)
-
- for i in range(len(offsets)-1):
- line = points[offsets[i]:offsets[i+1]]
- xs.append(line[:, 0])
- ys.append(line[:, 1])
- else:
- raise RuntimeError(f"Conversion of LineType {line_type} to Bokeh is not implemented")
-
- return xs, ys
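
Editorial aside: both converters in the deleted module above rely on the same slicing pattern, in which an `offsets` array partitions a flat `(N, 2)` `points` array so that boundary `i` spans `points[offsets[i]:offsets[i+1]]`. A minimal self-contained illustration with made-up coordinates (not actual contourpy output):

```python
import numpy as np

# One outer boundary (4 points) followed by one hole (3 points).
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0],
                   [0.2, 0.1], [0.6, 0.1], [0.2, 0.1]])
offsets = np.array([0, 4, 7])  # boundary i spans points[offsets[i]:offsets[i+1]]

xs, ys = [], []
for i in range(len(offsets) - 1):
    boundary = points[offsets[i]:offsets[i + 1]]
    xs.append(boundary[:, 0])
    ys.append(boundary[:, 1])

# xs[0]/ys[0] hold the outer boundary, xs[1]/ys[1] the hole.
```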
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py
deleted file mode 100644
index 27728cc7aa400fa7389cf0ba31990165bc7b03b5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import sys
-
-from .cli import main
-
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py
deleted file mode 100644
index d85bc052969438e1e05dbf3abd9c75c8effc7d03..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import threading
-from typing import Dict
-
-import requests
-
-from gradio import wasm_utils
-
-MESSAGING_API_ENDPOINT = "https://api.gradio.app/gradio-messaging/en"
-
-en = {
- "RUNNING_LOCALLY": "Running on local URL: {}",
- "RUNNING_LOCALLY_SEPARATED": "Running on local URL: {}://{}:{}",
- "SHARE_LINK_DISPLAY": "Running on public URL: {}",
- "COULD_NOT_GET_SHARE_LINK": "\nCould not create share link. Please check your internet connection or our status page: https://status.gradio.app.",
- "COULD_NOT_GET_SHARE_LINK_MISSING_FILE": "\nCould not create share link. Missing file: {}. \n\nPlease check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: \n\n1. Download this file: {}\n2. Rename the downloaded file to: {}\n3. Move the file to this location: {}",
- "COLAB_NO_LOCAL": "Cannot display local interface on google colab, public link created.",
- "PUBLIC_SHARE_TRUE": "\nTo create a public link, set `share=True` in `launch()`.",
- "MODEL_PUBLICLY_AVAILABLE_URL": "Model available publicly at: {} (may take up to a minute for link to be usable)",
- "GENERATING_PUBLIC_LINK": "Generating public link (may take a few seconds...):",
- "BETA_INVITE": "\nThanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB",
- "COLAB_DEBUG_TRUE": "Colab notebook detected. This cell will run indefinitely so that you can see errors and logs. "
- "To turn off, set debug=False in launch().",
- "COLAB_DEBUG_FALSE": "Colab notebook detected. To show errors in colab notebook, set debug=True in launch()",
- "COLAB_WARNING": "Note: opening Chrome Inspector may crash demo inside Colab notebooks.",
- "SHARE_LINK_MESSAGE": "\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)",
- "INLINE_DISPLAY_BELOW": "Interface loading below...",
- "TIPS": [
- "You can add authentication to your app with the `auth=` kwarg in the `launch()` command; for example: `gr.Interface(...).launch(auth=('username', 'password'))`",
- "Let users specify why they flagged input with the `flagging_options=` kwarg; for example: `gr.Interface(..., flagging_options=['too slow', 'incorrect output', 'other'])`",
- "You can show or hide the button for flagging with the `allow_flagging=` kwarg; for example: gr.Interface(..., allow_flagging=False)",
- "The inputs and outputs flagged by the users are stored in the flagging directory, specified by the flagging_dir= kwarg. You can view this data through the interface by setting the examples= kwarg to the flagging directory; for example gr.Interface(..., examples='flagged')",
- "You can add a title and description to your interface using the `title=` and `description=` kwargs. The `article=` kwarg can be used to add a description under the interface; for example gr.Interface(..., title='My app', description='Lorem ipsum'). Try using Markdown!",
- "For a classification or regression model, set `interpretation='default'` to see why the model made a prediction.",
- ],
-}
-
-
-def get_updated_messaging(en: Dict):
- try:
- updated_messaging = requests.get(MESSAGING_API_ENDPOINT, timeout=3).json()
- en.update(updated_messaging)
- except Exception: # Use default messaging
- pass
-
-
-if os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" and not wasm_utils.IS_WASM:
- threading.Thread(target=get_updated_messaging, args=(en,)).start()
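
Editorial aside: the entries in `en` above are ordinary `str.format` templates that gradio fills in at display time. A small illustration, assuming `en` is importable from `gradio.strings` and using a made-up local URL:

```python
from gradio.strings import en

print(en["RUNNING_LOCALLY"].format("http://127.0.0.1:7860"))
# Running on local URL: http://127.0.0.1:7860

print(en["RUNNING_LOCALLY_SEPARATED"].format("http", "127.0.0.1", 7860))
# Running on local URL: http://127.0.0.1:7860
```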
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js
deleted file mode 100644
index fbb42150314250efd9cf9a32f40b1b4a51b71c8a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js
+++ /dev/null
@@ -1,3 +0,0 @@
-import{S as j,e as q,s as A,N as h,k as $,K as C,U as L,p,o as v,z as x,v as w,A as c,x as k,O as g,P,M as B,R as H,h as N,j as S,t as I}from"./index-3370be2a.js";import{F as K}from"./Form-bf52aaa0.js";import{T}from"./Textbox-086bc878.js";import{a as M}from"./Button-89624748.js";import{C as R}from"./Column-61895400.js";/* empty css */import"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";import"./Copy-6cd42558.js";/* empty css */function z(i){let e,s;return{c(){e=h("p"),s=P(i[0]),C(e,"class","auth svelte-1ogxbi0")},m(l,o){p(l,e,o),B(e,s)},p(l,o){o&1&&H(s,l[0])},d(l){l&&c(e)}}}function D(i){let e;return{c(){e=h("p"),e.textContent=`If you are visiting a HuggingFace Space in Incognito mode, you must
- enable third party cookies.`,C(e,"class","auth svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function O(i){let e;return{c(){e=h("p"),e.textContent="Incorrect Credentials",C(e,"class","creds svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function U(i){let e,s,l,o,r,m;function d(n){i[8](n)}let _={label:"username",lines:1,show_label:!0,max_lines:1,mode:"dynamic"};i[3]!==void 0&&(_.value=i[3]),e=new T({props:_}),N.push(()=>S(e,"value",d)),e.$on("submit",i[6]);function b(n){i[9](n)}let u={label:"password",lines:1,show_label:!0,max_lines:1,mode:"dynamic",type:"password"};return i[4]!==void 0&&(u.value=i[4]),o=new T({props:u}),N.push(()=>S(o,"value",b)),o.$on("submit",i[6]),{c(){$(e.$$.fragment),l=g(),$(o.$$.fragment)},m(n,f){v(e,n,f),p(n,l,f),v(o,n,f),m=!0},p(n,f){const t={};!s&&f&8&&(s=!0,t.value=n[3],I(()=>s=!1)),e.$set(t);const a={};!r&&f&16&&(r=!0,a.value=n[4],I(()=>r=!1)),o.$set(a)},i(n){m||(x(e.$$.fragment,n),x(o.$$.fragment,n),m=!0)},o(n){w(e.$$.fragment,n),w(o.$$.fragment,n),m=!1},d(n){n&&c(l),k(e,n),k(o,n)}}}function E(i){let e;return{c(){e=P("Login")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function G(i){let e,s,l,o,r,m,d,_,b,u=i[0]&&z(i),n=i[2]&&D(),f=i[5]&&O();return m=new K({props:{$$slots:{default:[U]},$$scope:{ctx:i}}}),_=new M({props:{size:"lg",variant:"primary",$$slots:{default:[E]},$$scope:{ctx:i}}}),_.$on("click",i[6]),{c(){e=h("h2"),e.textContent="Login",s=g(),u&&u.c(),l=g(),n&&n.c(),o=g(),f&&f.c(),r=g(),$(m.$$.fragment),d=g(),$(_.$$.fragment),C(e,"class","svelte-1ogxbi0")},m(t,a){p(t,e,a),p(t,s,a),u&&u.m(t,a),p(t,l,a),n&&n.m(t,a),p(t,o,a),f&&f.m(t,a),p(t,r,a),v(m,t,a),p(t,d,a),v(_,t,a),b=!0},p(t,a){t[0]?u?u.p(t,a):(u=z(t),u.c(),u.m(l.parentNode,l)):u&&(u.d(1),u=null),t[2]?n||(n=D(),n.c(),n.m(o.parentNode,o)):n&&(n.d(1),n=null),t[5]?f||(f=O(),f.c(),f.m(r.parentNode,r)):f&&(f.d(1),f=null);const y={};a&1048&&(y.$$scope={dirty:a,ctx:t}),m.$set(y);const F={};a&1024&&(F.$$scope={dirty:a,ctx:t}),_.$set(F)},i(t){b||(x(m.$$.fragment,t),x(_.$$.fragment,t),b=!0)},o(t){w(m.$$.fragment,t),w(_.$$.fragment,t),b=!1},d(t){t&&(c(e),c(s),c(l),c(o),c(r),c(d)),u&&u.d(t),n&&n.d(t),f&&f.d(t),k(m,t),k(_,t)}}}function J(i){let e,s,l;return s=new R({props:{variant:"panel",min_width:480,$$slots:{default:[G]},$$scope:{ctx:i}}}),{c(){e=h("div"),$(s.$$.fragment),C(e,"class","wrap svelte-1ogxbi0"),L(e,"min-h-screen",i[1])},m(o,r){p(o,e,r),v(s,e,null),l=!0},p(o,[r]){const m={};r&1085&&(m.$$scope={dirty:r,ctx:o}),s.$set(m),(!l||r&2)&&L(e,"min-h-screen",o[1])},i(o){l||(x(s.$$.fragment,o),l=!0)},o(o){w(s.$$.fragment,o),l=!1},d(o){o&&c(e),k(s)}}}function Q(i,e,s){let{root:l}=e,{auth_message:o}=e,{app_mode:r}=e,{space_id:m}=e,d="",_="",b=!1;const u=async()=>{const t=new FormData;t.append("username",d),t.append("password",_);let a=await fetch(l+"/login",{method:"POST",body:t});a.status===400?(s(5,b=!0),s(3,d=""),s(4,_="")):a.status==200&&location.reload()};function n(t){d=t,s(3,d)}function f(t){_=t,s(4,_)}return i.$$set=t=>{"root"in t&&s(7,l=t.root),"auth_message"in t&&s(0,o=t.auth_message),"app_mode"in t&&s(1,r=t.app_mode),"space_id"in t&&s(2,m=t.space_id)},[o,r,m,d,_,b,u,l,n,f]}class le extends j{constructor(e){super(),q(this,e,Q,J,A,{root:7,auth_message:0,app_mode:1,space_id:2})}}export{le as default};
-//# sourceMappingURL=Login-aa2d581f.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py
deleted file mode 100644
index cb475e02045aafac34309e4b808e12c580e58d8f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py
+++ /dev/null
@@ -1,2006 +0,0 @@
-import datetime
-import enum
-import logging
-import typing
-import warnings
-from contextlib import asynccontextmanager, contextmanager
-from types import TracebackType
-
-from .__version__ import __version__
-from ._auth import Auth, BasicAuth, FunctionAuth
-from ._config import (
- DEFAULT_LIMITS,
- DEFAULT_MAX_REDIRECTS,
- DEFAULT_TIMEOUT_CONFIG,
- Limits,
- Proxy,
- Timeout,
-)
-from ._decoders import SUPPORTED_DECODERS
-from ._exceptions import (
- InvalidURL,
- RemoteProtocolError,
- TooManyRedirects,
- request_context,
-)
-from ._models import Cookies, Headers, Request, Response
-from ._status_codes import codes
-from ._transports.asgi import ASGITransport
-from ._transports.base import AsyncBaseTransport, BaseTransport
-from ._transports.default import AsyncHTTPTransport, HTTPTransport
-from ._transports.wsgi import WSGITransport
-from ._types import (
- AsyncByteStream,
- AuthTypes,
- CertTypes,
- CookieTypes,
- HeaderTypes,
- ProxiesTypes,
- QueryParamTypes,
- RequestContent,
- RequestData,
- RequestExtensions,
- RequestFiles,
- SyncByteStream,
- TimeoutTypes,
- URLTypes,
- VerifyTypes,
-)
-from ._urls import URL, QueryParams
-from ._utils import (
- Timer,
- URLPattern,
- get_environment_proxies,
- is_https_redirect,
- same_origin,
-)
-
-# The type annotation for @classmethod and context managers here follows PEP 484
-# https://www.python.org/dev/peps/pep-0484/#annotating-instance-and-class-methods
-T = typing.TypeVar("T", bound="Client")
-U = typing.TypeVar("U", bound="AsyncClient")
-
-
-class UseClientDefault:
- """
- For some parameters such as `auth=...` and `timeout=...` we need to be able
- to indicate the default "unset" state, in a way that is distinctly different
- to using `None`.
-
- The default "unset" state indicates that whatever default is set on the
- client should be used. This is different to setting `None`, which
- explicitly disables the parameter, possibly overriding a client default.
-
- For example we use `timeout=USE_CLIENT_DEFAULT` in the `request()` signature.
- Omitting the `timeout` parameter will send a request using whatever default
- timeout has been configured on the client. Including `timeout=None` will
- ensure no timeout is used.
-
- Note that user code shouldn't need to use the `USE_CLIENT_DEFAULT` constant,
- but it is used internally when a parameter is not included.
- """
-
-
-USE_CLIENT_DEFAULT = UseClientDefault()
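# (Editorial aside, not httpx source.) The sentinel pattern documented above,
# in miniature: a module-level singleton distinguishes "argument omitted" from
# an explicit None, so omission falls back to the client default while None
# disables the setting. Illustrative names only:
#
#     _UNSET = object()
#
#     def fetch(url, timeout=_UNSET):
#         if timeout is _UNSET:    # omitted -> use the configured default
#             timeout = 5.0
#         # an explicit timeout=None passes through, meaning "no timeout"
#         ...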
-
-
-logger = logging.getLogger("httpx")
-
-USER_AGENT = f"python-httpx/{__version__}"
-ACCEPT_ENCODING = ", ".join(
- [key for key in SUPPORTED_DECODERS.keys() if key != "identity"]
-)
-
-
-class ClientState(enum.Enum):
- # UNOPENED:
- # The client has been instantiated, but has not been used to send a request,
- # or been opened by entering the context of a `with` block.
- UNOPENED = 1
- # OPENED:
- # The client has either sent a request, or is within a `with` block.
- OPENED = 2
- # CLOSED:
- # The client has either exited the `with` block, or `close()` has
- # been called explicitly.
- CLOSED = 3
-
-
-class BoundSyncStream(SyncByteStream):
- """
- A byte stream that is bound to a given response instance, and that
- ensures the `response.elapsed` is set once the response is closed.
- """
-
- def __init__(
- self, stream: SyncByteStream, response: Response, timer: Timer
- ) -> None:
- self._stream = stream
- self._response = response
- self._timer = timer
-
- def __iter__(self) -> typing.Iterator[bytes]:
- for chunk in self._stream:
- yield chunk
-
- def close(self) -> None:
- seconds = self._timer.sync_elapsed()
- self._response.elapsed = datetime.timedelta(seconds=seconds)
- self._stream.close()
-
-
-class BoundAsyncStream(AsyncByteStream):
- """
- An async byte stream that is bound to a given response instance, and that
- ensures the `response.elapsed` is set once the response is closed.
- """
-
- def __init__(
- self, stream: AsyncByteStream, response: Response, timer: Timer
- ) -> None:
- self._stream = stream
- self._response = response
- self._timer = timer
-
- async def __aiter__(self) -> typing.AsyncIterator[bytes]:
- async for chunk in self._stream:
- yield chunk
-
- async def aclose(self) -> None:
- seconds = await self._timer.async_elapsed()
- self._response.elapsed = datetime.timedelta(seconds=seconds)
- await self._stream.aclose()
-
-
-EventHook = typing.Callable[..., typing.Any]
-
-
-class BaseClient:
- def __init__(
- self,
- *,
- auth: typing.Optional[AuthTypes] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- follow_redirects: bool = False,
- max_redirects: int = DEFAULT_MAX_REDIRECTS,
- event_hooks: typing.Optional[
- typing.Mapping[str, typing.List[EventHook]]
- ] = None,
- base_url: URLTypes = "",
- trust_env: bool = True,
- default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8",
- ):
- event_hooks = {} if event_hooks is None else event_hooks
-
- self._base_url = self._enforce_trailing_slash(URL(base_url))
-
- self._auth = self._build_auth(auth)
- self._params = QueryParams(params)
- self.headers = Headers(headers)
- self._cookies = Cookies(cookies)
- self._timeout = Timeout(timeout)
- self.follow_redirects = follow_redirects
- self.max_redirects = max_redirects
- self._event_hooks = {
- "request": list(event_hooks.get("request", [])),
- "response": list(event_hooks.get("response", [])),
- }
- self._trust_env = trust_env
- self._default_encoding = default_encoding
- self._state = ClientState.UNOPENED
-
- @property
- def is_closed(self) -> bool:
- """
- Check whether the client has been closed.
- """
- return self._state == ClientState.CLOSED
-
- @property
- def trust_env(self) -> bool:
- return self._trust_env
-
- def _enforce_trailing_slash(self, url: URL) -> URL:
- if url.raw_path.endswith(b"/"):
- return url
- return url.copy_with(raw_path=url.raw_path + b"/")
-
- def _get_proxy_map(
- self, proxies: typing.Optional[ProxiesTypes], allow_env_proxies: bool
- ) -> typing.Dict[str, typing.Optional[Proxy]]:
- if proxies is None:
- if allow_env_proxies:
- return {
- key: None if url is None else Proxy(url=url)
- for key, url in get_environment_proxies().items()
- }
- return {}
- if isinstance(proxies, dict):
- new_proxies = {}
- for key, value in proxies.items():
- proxy = Proxy(url=value) if isinstance(value, (str, URL)) else value
- new_proxies[str(key)] = proxy
- return new_proxies
- else:
- proxy = Proxy(url=proxies) if isinstance(proxies, (str, URL)) else proxies
- return {"all://": proxy}
-
- @property
- def timeout(self) -> Timeout:
- return self._timeout
-
- @timeout.setter
- def timeout(self, timeout: TimeoutTypes) -> None:
- self._timeout = Timeout(timeout)
-
- @property
- def event_hooks(self) -> typing.Dict[str, typing.List[EventHook]]:
- return self._event_hooks
-
- @event_hooks.setter
- def event_hooks(
- self, event_hooks: typing.Dict[str, typing.List[EventHook]]
- ) -> None:
- self._event_hooks = {
- "request": list(event_hooks.get("request", [])),
- "response": list(event_hooks.get("response", [])),
- }
-
- @property
- def auth(self) -> typing.Optional[Auth]:
- """
- Authentication class used when none is passed at the request-level.
-
- See also [Authentication][0].
-
- [0]: /quickstart/#authentication
- """
- return self._auth
-
- @auth.setter
- def auth(self, auth: AuthTypes) -> None:
- self._auth = self._build_auth(auth)
-
- @property
- def base_url(self) -> URL:
- """
- Base URL to use when sending requests with relative URLs.
- """
- return self._base_url
-
- @base_url.setter
- def base_url(self, url: URLTypes) -> None:
- self._base_url = self._enforce_trailing_slash(URL(url))
-
- @property
- def headers(self) -> Headers:
- """
- HTTP headers to include when sending requests.
- """
- return self._headers
-
- @headers.setter
- def headers(self, headers: HeaderTypes) -> None:
- client_headers = Headers(
- {
- b"Accept": b"*/*",
- b"Accept-Encoding": ACCEPT_ENCODING.encode("ascii"),
- b"Connection": b"keep-alive",
- b"User-Agent": USER_AGENT.encode("ascii"),
- }
- )
- client_headers.update(headers)
- self._headers = client_headers
-
- @property
- def cookies(self) -> Cookies:
- """
- Cookie values to include when sending requests.
- """
- return self._cookies
-
- @cookies.setter
- def cookies(self, cookies: CookieTypes) -> None:
- self._cookies = Cookies(cookies)
-
- @property
- def params(self) -> QueryParams:
- """
- Query parameters to include in the URL when sending requests.
- """
- return self._params
-
- @params.setter
- def params(self, params: QueryParamTypes) -> None:
- self._params = QueryParams(params)
-
- def build_request(
- self,
- method: str,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Request:
- """
- Build and return a request instance.
-
- * The `params`, `headers` and `cookies` arguments
- are merged with any values set on the client.
- * The `url` argument is merged with any `base_url` set on the client.
-
- See also: [Request instances][0]
-
- [0]: /advanced/#request-instances
- """
- url = self._merge_url(url)
- headers = self._merge_headers(headers)
- cookies = self._merge_cookies(cookies)
- params = self._merge_queryparams(params)
- extensions = {} if extensions is None else extensions
- if "timeout" not in extensions:
- timeout = (
- self.timeout
- if isinstance(timeout, UseClientDefault)
- else Timeout(timeout)
- )
- extensions = dict(**extensions, timeout=timeout.as_dict())
- return Request(
- method,
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- extensions=extensions,
- )
-
- def _merge_url(self, url: URLTypes) -> URL:
- """
- Merge a URL argument together with any 'base_url' on the client,
- to create the URL used for the outgoing request.
- """
- merge_url = URL(url)
- if merge_url.is_relative_url:
- # To merge URLs we always append to the base URL. To get this
- # behaviour correct we always ensure the base URL ends in a '/'
- # separator, and strip any leading '/' from the merge URL.
- #
- # So, eg...
- #
- # >>> client = Client(base_url="https://www.example.com/subpath")
- # >>> client.base_url
- # URL('https://www.example.com/subpath/')
- # >>> client.build_request("GET", "/path").url
- # URL('https://www.example.com/subpath/path')
- merge_raw_path = self.base_url.raw_path + merge_url.raw_path.lstrip(b"/")
- return self.base_url.copy_with(raw_path=merge_raw_path)
- return merge_url
-
- def _merge_cookies(
- self, cookies: typing.Optional[CookieTypes] = None
- ) -> typing.Optional[CookieTypes]:
- """
- Merge a cookies argument together with any cookies on the client,
- to create the cookies used for the outgoing request.
- """
- if cookies or self.cookies:
- merged_cookies = Cookies(self.cookies)
- merged_cookies.update(cookies)
- return merged_cookies
- return cookies
-
- def _merge_headers(
- self, headers: typing.Optional[HeaderTypes] = None
- ) -> typing.Optional[HeaderTypes]:
- """
- Merge a headers argument together with any headers on the client,
- to create the headers used for the outgoing request.
- """
- merged_headers = Headers(self.headers)
- merged_headers.update(headers)
- return merged_headers
-
- def _merge_queryparams(
- self, params: typing.Optional[QueryParamTypes] = None
- ) -> typing.Optional[QueryParamTypes]:
- """
- Merge a queryparams argument together with any queryparams on the client,
- to create the queryparams used for the outgoing request.
- """
- if params or self.params:
- merged_queryparams = QueryParams(self.params)
- return merged_queryparams.merge(params)
- return params
-
- def _build_auth(self, auth: typing.Optional[AuthTypes]) -> typing.Optional[Auth]:
- if auth is None:
- return None
- elif isinstance(auth, tuple):
- return BasicAuth(username=auth[0], password=auth[1])
- elif isinstance(auth, Auth):
- return auth
- elif callable(auth):
- return FunctionAuth(func=auth)
- else:
- raise TypeError(f'Invalid "auth" argument: {auth!r}')
-
- def _build_request_auth(
- self,
- request: Request,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- ) -> Auth:
- auth = (
- self._auth if isinstance(auth, UseClientDefault) else self._build_auth(auth)
- )
-
- if auth is not None:
- return auth
-
- username, password = request.url.username, request.url.password
- if username or password:
- return BasicAuth(username=username, password=password)
-
- return Auth()
-
- def _build_redirect_request(self, request: Request, response: Response) -> Request:
- """
- Given a request and a redirect response, return a new request that
- should be used to effect the redirect.
- """
- method = self._redirect_method(request, response)
- url = self._redirect_url(request, response)
- headers = self._redirect_headers(request, url, method)
- stream = self._redirect_stream(request, method)
- cookies = Cookies(self.cookies)
- return Request(
- method=method,
- url=url,
- headers=headers,
- cookies=cookies,
- stream=stream,
- extensions=request.extensions,
- )
-
- def _redirect_method(self, request: Request, response: Response) -> str:
- """
- When being redirected we may want to change the method of the request
- based on certain specs or browser behavior.
- """
- method = request.method
-
- # https://tools.ietf.org/html/rfc7231#section-6.4.4
- if response.status_code == codes.SEE_OTHER and method != "HEAD":
- method = "GET"
-
- # Do what the browsers do, despite standards...
- # Turn 302s into GETs.
- if response.status_code == codes.FOUND and method != "HEAD":
- method = "GET"
-
- # If a POST is responded to with a 301, turn it into a GET.
- # This bizarre behaviour is explained in 'requests' issue 1704.
- if response.status_code == codes.MOVED_PERMANENTLY and method == "POST":
- method = "GET"
-
- return method
-
- def _redirect_url(self, request: Request, response: Response) -> URL:
- """
- Return the URL for the redirect to follow.
- """
- location = response.headers["Location"]
-
- try:
- url = URL(location)
- except InvalidURL as exc:
- raise RemoteProtocolError(
- f"Invalid URL in location header: {exc}.", request=request
- ) from None
-
- # Handle malformed 'Location' headers that are in "absolute" form but have no host.
- # See: https://github.com/encode/httpx/issues/771
- if url.scheme and not url.host:
- url = url.copy_with(host=request.url.host)
-
- # Facilitate relative 'Location' headers, as allowed by RFC 7231.
- # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
- if url.is_relative_url:
- url = request.url.join(url)
-
- # Attach previous fragment if needed (RFC 7231 7.1.2)
- if request.url.fragment and not url.fragment:
- url = url.copy_with(fragment=request.url.fragment)
-
- return url
-
- def _redirect_headers(self, request: Request, url: URL, method: str) -> Headers:
- """
- Return the headers that should be used for the redirect request.
- """
- headers = Headers(request.headers)
-
- if not same_origin(url, request.url):
- if not is_https_redirect(request.url, url):
- # Strip Authorization headers when responses are redirected
- # away from the origin. (Except for direct HTTP to HTTPS redirects.)
- headers.pop("Authorization", None)
-
- # Update the Host header.
- headers["Host"] = url.netloc.decode("ascii")
-
- if method != request.method and method == "GET":
- # If we've switched to a 'GET' request, then strip any headers which
- # are only relevant to the request body.
- headers.pop("Content-Length", None)
- headers.pop("Transfer-Encoding", None)
-
- # We should use the client cookie store to determine any cookie header,
- # rather than whatever was on the original outgoing request.
- headers.pop("Cookie", None)
-
- return headers
-
- def _redirect_stream(
- self, request: Request, method: str
- ) -> typing.Optional[typing.Union[SyncByteStream, AsyncByteStream]]:
- """
- Return the body that should be used for the redirect request.
- """
- if method != request.method and method == "GET":
- return None
-
- return request.stream
-
-
-class Client(BaseClient):
- """
- An HTTP client, with connection pooling, HTTP/2, redirects, cookie persistence, etc.
-
- It can be shared between threads.
-
- Usage:
-
- ```python
- >>> client = httpx.Client()
- >>> response = client.get('https://example.org')
- ```
-
- **Parameters:**
-
- * **auth** - *(optional)* An authentication class to use when sending
- requests.
- * **params** - *(optional)* Query parameters to include in request URLs, as
- a string, dictionary, or sequence of two-tuples.
- * **headers** - *(optional)* Dictionary of HTTP headers to include when
- sending requests.
- * **cookies** - *(optional)* Dictionary of Cookie items to include when
- sending requests.
- * **verify** - *(optional)* SSL certificates (a.k.a CA bundle) used to
- verify the identity of requested hosts. Either `True` (default CA bundle),
- a path to an SSL certificate file, an `ssl.SSLContext`, or `False`
- (which will disable verification).
- * **cert** - *(optional)* An SSL certificate used by the requested host
- to authenticate the client. Either a path to an SSL certificate file, or
- two-tuple of (certificate file, key file), or a three-tuple of (certificate
- file, key file, password).
- * **proxies** - *(optional)* A dictionary mapping proxy keys to proxy
- URLs.
- * **timeout** - *(optional)* The timeout configuration to use when sending
- requests.
- * **limits** - *(optional)* The limits configuration to use.
- * **max_redirects** - *(optional)* The maximum number of redirect responses
- that should be followed.
- * **base_url** - *(optional)* A URL to use as the base when building
- request URLs.
- * **transport** - *(optional)* A transport class to use for sending requests
- over the network.
- * **app** - *(optional)* A WSGI application to send requests to,
- rather than sending actual network requests.
- * **trust_env** - *(optional)* Enables or disables usage of environment
- variables for configuration.
- * **default_encoding** - *(optional)* The default encoding to use for decoding
- response text, if no charset information is included in a response Content-Type
- header. Set to a callable for automatic character set detection. Default: "utf-8".
- """
-
- def __init__(
- self,
- *,
- auth: typing.Optional[AuthTypes] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- proxies: typing.Optional[ProxiesTypes] = None,
- mounts: typing.Optional[typing.Mapping[str, BaseTransport]] = None,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- follow_redirects: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- max_redirects: int = DEFAULT_MAX_REDIRECTS,
- event_hooks: typing.Optional[
- typing.Mapping[str, typing.List[EventHook]]
- ] = None,
- base_url: URLTypes = "",
- transport: typing.Optional[BaseTransport] = None,
- app: typing.Optional[typing.Callable[..., typing.Any]] = None,
- trust_env: bool = True,
- default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8",
- ):
- super().__init__(
- auth=auth,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- follow_redirects=follow_redirects,
- max_redirects=max_redirects,
- event_hooks=event_hooks,
- base_url=base_url,
- trust_env=trust_env,
- default_encoding=default_encoding,
- )
-
- if http2:
- try:
- import h2 # noqa
- except ImportError: # pragma: no cover
- raise ImportError(
- "Using http2=True, but the 'h2' package is not installed. "
- "Make sure to install httpx using `pip install httpx[http2]`."
- ) from None
-
- allow_env_proxies = trust_env and app is None and transport is None
- proxy_map = self._get_proxy_map(proxies, allow_env_proxies)
-
- self._transport = self._init_transport(
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- transport=transport,
- app=app,
- trust_env=trust_env,
- )
- self._mounts: typing.Dict[URLPattern, typing.Optional[BaseTransport]] = {
- URLPattern(key): None
- if proxy is None
- else self._init_proxy_transport(
- proxy,
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- )
- for key, proxy in proxy_map.items()
- }
- if mounts is not None:
- self._mounts.update(
- {URLPattern(key): transport for key, transport in mounts.items()}
- )
-
- self._mounts = dict(sorted(self._mounts.items()))
-
- def _init_transport(
- self,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- transport: typing.Optional[BaseTransport] = None,
- app: typing.Optional[typing.Callable[..., typing.Any]] = None,
- trust_env: bool = True,
- ) -> BaseTransport:
- if transport is not None:
- return transport
-
- if app is not None:
- return WSGITransport(app=app)
-
- return HTTPTransport(
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- )
-
- def _init_proxy_transport(
- self,
- proxy: Proxy,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- trust_env: bool = True,
- ) -> BaseTransport:
- return HTTPTransport(
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- proxy=proxy,
- )
-
- def _transport_for_url(self, url: URL) -> BaseTransport:
- """
- Returns the transport instance that should be used for a given URL.
- This will either be the standard connection pool, or a proxy.
- """
- for pattern, transport in self._mounts.items():
- if pattern.matches(url):
- return self._transport if transport is None else transport
-
- return self._transport
-
- def request(
- self,
- method: str,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Build and send a request.
-
- Equivalent to:
-
- ```python
- request = client.build_request(...)
- response = client.send(request, ...)
- ```
-
- See `Client.build_request()`, `Client.send()` and
- [Merging of configuration][0] for how the various parameters
- are merged with client-level configuration.
-
- [0]: /advanced/#merging-of-configuration
- """
- if cookies is not None:
- message = (
- "Setting per-request cookies=<...> is being deprecated, because "
- "the expected behaviour on cookie persistence is ambiguous. Set "
- "cookies directly on the client instance instead."
- )
- warnings.warn(message, DeprecationWarning)
-
- request = self.build_request(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- extensions=extensions,
- )
- return self.send(request, auth=auth, follow_redirects=follow_redirects)
-
- @contextmanager
- def stream(
- self,
- method: str,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> typing.Iterator[Response]:
- """
- Alternative to `httpx.request()` that streams the response body
- instead of loading it into memory at once.
-
- **Parameters**: See `httpx.request`.
-
- See also: [Streaming Responses][0]
-
- [0]: /quickstart#streaming-responses
- """
- request = self.build_request(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- extensions=extensions,
- )
- response = self.send(
- request=request,
- auth=auth,
- follow_redirects=follow_redirects,
- stream=True,
- )
- try:
- yield response
- finally:
- response.close()
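# (Editorial usage sketch, not httpx source.) Typical use of the stream()
# helper defined above: the body is consumed incrementally inside the context
# manager rather than loaded into memory at once; `process` is a placeholder
# for user code.
#
#     with httpx.Client() as client:
#         with client.stream("GET", "https://example.org/big-file") as response:
#             for chunk in response.iter_bytes():
#                 process(chunk)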
-
- def send(
- self,
- request: Request,
- *,
- stream: bool = False,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- ) -> Response:
- """
- Send a request.
-
- The request is sent as-is, unmodified.
-
- Typically you'll want to build one with `Client.build_request()`
- so that any client-level configuration is merged into the request,
- but passing an explicit `httpx.Request()` is supported as well.
-
- See also: [Request instances][0]
-
- [0]: /advanced/#request-instances
- """
- if self._state == ClientState.CLOSED:
- raise RuntimeError("Cannot send a request, as the client has been closed.")
-
- self._state = ClientState.OPENED
- follow_redirects = (
- self.follow_redirects
- if isinstance(follow_redirects, UseClientDefault)
- else follow_redirects
- )
-
- auth = self._build_request_auth(request, auth)
-
- response = self._send_handling_auth(
- request,
- auth=auth,
- follow_redirects=follow_redirects,
- history=[],
- )
- try:
- if not stream:
- response.read()
-
- return response
-
- except BaseException as exc:
- response.close()
- raise exc
-
- def _send_handling_auth(
- self,
- request: Request,
- auth: Auth,
- follow_redirects: bool,
- history: typing.List[Response],
- ) -> Response:
- auth_flow = auth.sync_auth_flow(request)
- try:
- request = next(auth_flow)
-
- while True:
- response = self._send_handling_redirects(
- request,
- follow_redirects=follow_redirects,
- history=history,
- )
- try:
- try:
- next_request = auth_flow.send(response)
- except StopIteration:
- return response
-
- response.history = list(history)
- response.read()
- request = next_request
- history.append(response)
-
- except BaseException as exc:
- response.close()
- raise exc
- finally:
- auth_flow.close()
-
- def _send_handling_redirects(
- self,
- request: Request,
- follow_redirects: bool,
- history: typing.List[Response],
- ) -> Response:
- while True:
- if len(history) > self.max_redirects:
- raise TooManyRedirects(
- "Exceeded maximum allowed redirects.", request=request
- )
-
- for hook in self._event_hooks["request"]:
- hook(request)
-
- response = self._send_single_request(request)
- try:
- for hook in self._event_hooks["response"]:
- hook(response)
- response.history = list(history)
-
- if not response.has_redirect_location:
- return response
-
- request = self._build_redirect_request(request, response)
- history = history + [response]
-
- if follow_redirects:
- response.read()
- else:
- response.next_request = request
- return response
-
- except BaseException as exc:
- response.close()
- raise exc
-
- def _send_single_request(self, request: Request) -> Response:
- """
- Sends a single request, without handling any redirections.
- """
- transport = self._transport_for_url(request.url)
- timer = Timer()
- timer.sync_start()
-
- if not isinstance(request.stream, SyncByteStream):
- raise RuntimeError(
- "Attempted to send an async request with a sync Client instance."
- )
-
- with request_context(request=request):
- response = transport.handle_request(request)
-
- assert isinstance(response.stream, SyncByteStream)
-
- response.request = request
- response.stream = BoundSyncStream(
- response.stream, response=response, timer=timer
- )
- self.cookies.extract_cookies(response)
- response.default_encoding = self._default_encoding
-
- logger.info(
- 'HTTP Request: %s %s "%s %d %s"',
- request.method,
- request.url,
- response.http_version,
- response.status_code,
- response.reason_phrase,
- )
-
- return response
-
- def get(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `GET` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "GET",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def options(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send an `OPTIONS` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "OPTIONS",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def head(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `HEAD` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "HEAD",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def post(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `POST` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "POST",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def put(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `PUT` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "PUT",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def patch(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `PATCH` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "PATCH",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def delete(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `DELETE` request.
-
- **Parameters**: See `httpx.request`.
- """
- return self.request(
- "DELETE",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- def close(self) -> None:
- """
- Close transport and proxies.
- """
- if self._state != ClientState.CLOSED:
- self._state = ClientState.CLOSED
-
- self._transport.close()
- for transport in self._mounts.values():
- if transport is not None:
- transport.close()
-
- def __enter__(self: T) -> T:
- if self._state != ClientState.UNOPENED:
- msg = {
- ClientState.OPENED: "Cannot open a client instance more than once.",
- ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.",
- }[self._state]
- raise RuntimeError(msg)
-
- self._state = ClientState.OPENED
-
- self._transport.__enter__()
- for transport in self._mounts.values():
- if transport is not None:
- transport.__enter__()
- return self
-
- def __exit__(
- self,
- exc_type: typing.Optional[typing.Type[BaseException]] = None,
- exc_value: typing.Optional[BaseException] = None,
- traceback: typing.Optional[TracebackType] = None,
- ) -> None:
- self._state = ClientState.CLOSED
-
- self._transport.__exit__(exc_type, exc_value, traceback)
- for transport in self._mounts.values():
- if transport is not None:
- transport.__exit__(exc_type, exc_value, traceback)
-
-
-class AsyncClient(BaseClient):
- """
- An asynchronous HTTP client, with connection pooling, HTTP/2, redirects,
- cookie persistence, etc.
-
- Usage:
-
- ```python
- >>> async with httpx.AsyncClient() as client:
- >>> response = await client.get('https://example.org')
- ```
-
- **Parameters:**
-
- * **auth** - *(optional)* An authentication class to use when sending
- requests.
- * **params** - *(optional)* Query parameters to include in request URLs, as
- a string, dictionary, or sequence of two-tuples.
- * **headers** - *(optional)* Dictionary of HTTP headers to include when
- sending requests.
- * **cookies** - *(optional)* Dictionary of Cookie items to include when
- sending requests.
- * **verify** - *(optional)* SSL certificates (a.k.a CA bundle) used to
- verify the identity of requested hosts. Either `True` (default CA bundle),
- a path to an SSL certificate file, an `ssl.SSLContext`, or `False`
- (which will disable verification).
- * **cert** - *(optional)* An SSL certificate used by the requested host
- to authenticate the client. Either a path to an SSL certificate file, or
- two-tuple of (certificate file, key file), or a three-tuple of (certificate
- file, key file, password).
- * **http2** - *(optional)* A boolean indicating if HTTP/2 support should be
- enabled. Defaults to `False`.
- * **proxies** - *(optional)* A dictionary mapping HTTP protocols to proxy
- URLs.
- * **timeout** - *(optional)* The timeout configuration to use when sending
- requests.
- * **limits** - *(optional)* The limits configuration to use.
- * **max_redirects** - *(optional)* The maximum number of redirect responses
- that should be followed.
- * **base_url** - *(optional)* A URL to use as the base when building
- request URLs.
- * **transport** - *(optional)* A transport class to use for sending requests
- over the network.
- * **app** - *(optional)* An ASGI application to send requests to,
- rather than sending actual network requests.
- * **trust_env** - *(optional)* Enables or disables usage of environment
- variables for configuration.
- * **default_encoding** - *(optional)* The default encoding to use for decoding
- response text, if no charset information is included in a response Content-Type
- header. Set to a callable for automatic character set detection. Default: "utf-8".
- """
-
- def __init__(
- self,
- *,
- auth: typing.Optional[AuthTypes] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- proxies: typing.Optional[ProxiesTypes] = None,
- mounts: typing.Optional[typing.Mapping[str, AsyncBaseTransport]] = None,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- follow_redirects: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- max_redirects: int = DEFAULT_MAX_REDIRECTS,
- event_hooks: typing.Optional[
- typing.Mapping[str, typing.List[typing.Callable[..., typing.Any]]]
- ] = None,
- base_url: URLTypes = "",
- transport: typing.Optional[AsyncBaseTransport] = None,
- app: typing.Optional[typing.Callable[..., typing.Any]] = None,
- trust_env: bool = True,
- default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8",
- ):
- super().__init__(
- auth=auth,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- follow_redirects=follow_redirects,
- max_redirects=max_redirects,
- event_hooks=event_hooks,
- base_url=base_url,
- trust_env=trust_env,
- default_encoding=default_encoding,
- )
-
- if http2:
- try:
- import h2 # noqa
- except ImportError: # pragma: no cover
- raise ImportError(
- "Using http2=True, but the 'h2' package is not installed. "
- "Make sure to install httpx using `pip install httpx[http2]`."
- ) from None
-
- allow_env_proxies = trust_env and app is None and transport is None
- proxy_map = self._get_proxy_map(proxies, allow_env_proxies)
-
- self._transport = self._init_transport(
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- transport=transport,
- app=app,
- trust_env=trust_env,
- )
-
- self._mounts: typing.Dict[URLPattern, typing.Optional[AsyncBaseTransport]] = {
- URLPattern(key): None
- if proxy is None
- else self._init_proxy_transport(
- proxy,
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- )
- for key, proxy in proxy_map.items()
- }
- if mounts is not None:
- self._mounts.update(
- {URLPattern(key): transport for key, transport in mounts.items()}
- )
- self._mounts = dict(sorted(self._mounts.items()))
-
- def _init_transport(
- self,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- transport: typing.Optional[AsyncBaseTransport] = None,
- app: typing.Optional[typing.Callable[..., typing.Any]] = None,
- trust_env: bool = True,
- ) -> AsyncBaseTransport:
- if transport is not None:
- return transport
-
- if app is not None:
- return ASGITransport(app=app)
-
- return AsyncHTTPTransport(
- verify=verify,
- cert=cert,
- http1=http1,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- )
-
- def _init_proxy_transport(
- self,
- proxy: Proxy,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- http1: bool = True,
- http2: bool = False,
- limits: Limits = DEFAULT_LIMITS,
- trust_env: bool = True,
- ) -> AsyncBaseTransport:
- return AsyncHTTPTransport(
- verify=verify,
- cert=cert,
- http2=http2,
- limits=limits,
- trust_env=trust_env,
- proxy=proxy,
- )
-
- def _transport_for_url(self, url: URL) -> AsyncBaseTransport:
- """
- Returns the transport instance that should be used for a given URL.
- This will either be the standard connection pool, or a proxy.
- """
- for pattern, transport in self._mounts.items():
- if pattern.matches(url):
- return self._transport if transport is None else transport
-
- return self._transport
-
- async def request(
- self,
- method: str,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Build and send a request.
-
- Equivalent to:
-
- ```python
- request = client.build_request(...)
- response = await client.send(request, ...)
- ```
-
- See `AsyncClient.build_request()`, `AsyncClient.send()`
- and [Merging of configuration][0] for how the various parameters
- are merged with client-level configuration.
-
- [0]: /advanced/#merging-of-configuration
- """
- request = self.build_request(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- extensions=extensions,
- )
- return await self.send(request, auth=auth, follow_redirects=follow_redirects)
-
- @asynccontextmanager
- async def stream(
- self,
- method: str,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> typing.AsyncIterator[Response]:
- """
- Alternative to `httpx.request()` that streams the response body
- instead of loading it into memory at once.
-
- **Parameters**: See `httpx.request`.
-
- See also: [Streaming Responses][0]
-
- [0]: /quickstart#streaming-responses
- """
- request = self.build_request(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- timeout=timeout,
- extensions=extensions,
- )
- response = await self.send(
- request=request,
- auth=auth,
- follow_redirects=follow_redirects,
- stream=True,
- )
- try:
- yield response
- finally:
- await response.aclose()
-
- async def send(
- self,
- request: Request,
- *,
- stream: bool = False,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- ) -> Response:
- """
- Send a request.
-
- The request is sent as-is, unmodified.
-
- Typically you'll want to build one with `AsyncClient.build_request()`
- so that any client-level configuration is merged into the request,
- but passing an explicit `httpx.Request()` is supported as well.
-
- See also: [Request instances][0]
-
- [0]: /advanced/#request-instances
- """
- if self._state == ClientState.CLOSED:
- raise RuntimeError("Cannot send a request, as the client has been closed.")
-
- self._state = ClientState.OPENED
- follow_redirects = (
- self.follow_redirects
- if isinstance(follow_redirects, UseClientDefault)
- else follow_redirects
- )
-
- auth = self._build_request_auth(request, auth)
-
- response = await self._send_handling_auth(
- request,
- auth=auth,
- follow_redirects=follow_redirects,
- history=[],
- )
- try:
- if not stream:
- await response.aread()
-
- return response
-
- except BaseException as exc: # pragma: no cover
- await response.aclose()
- raise exc
-
- async def _send_handling_auth(
- self,
- request: Request,
- auth: Auth,
- follow_redirects: bool,
- history: typing.List[Response],
- ) -> Response:
- auth_flow = auth.async_auth_flow(request)
- try:
- request = await auth_flow.__anext__()
-
- while True:
- response = await self._send_handling_redirects(
- request,
- follow_redirects=follow_redirects,
- history=history,
- )
- try:
- try:
- next_request = await auth_flow.asend(response)
- except StopAsyncIteration:
- return response
-
- response.history = list(history)
- await response.aread()
- request = next_request
- history.append(response)
-
- except BaseException as exc:
- await response.aclose()
- raise exc
- finally:
- await auth_flow.aclose()
-
- async def _send_handling_redirects(
- self,
- request: Request,
- follow_redirects: bool,
- history: typing.List[Response],
- ) -> Response:
- while True:
- if len(history) > self.max_redirects:
- raise TooManyRedirects(
- "Exceeded maximum allowed redirects.", request=request
- )
-
- for hook in self._event_hooks["request"]:
- await hook(request)
-
- response = await self._send_single_request(request)
- try:
- for hook in self._event_hooks["response"]:
- await hook(response)
-
- response.history = list(history)
-
- if not response.has_redirect_location:
- return response
-
- request = self._build_redirect_request(request, response)
- history = history + [response]
-
- if follow_redirects:
- await response.aread()
- else:
- response.next_request = request
- return response
-
- except BaseException as exc:
- await response.aclose()
- raise exc
-
- async def _send_single_request(self, request: Request) -> Response:
- """
- Sends a single request, without handling any redirections.
- """
- transport = self._transport_for_url(request.url)
- timer = Timer()
- await timer.async_start()
-
- if not isinstance(request.stream, AsyncByteStream):
- raise RuntimeError(
-                "Attempted to send a sync request with an AsyncClient instance."
- )
-
- with request_context(request=request):
- response = await transport.handle_async_request(request)
-
- assert isinstance(response.stream, AsyncByteStream)
- response.request = request
- response.stream = BoundAsyncStream(
- response.stream, response=response, timer=timer
- )
- self.cookies.extract_cookies(response)
- response.default_encoding = self._default_encoding
-
- logger.info(
- 'HTTP Request: %s %s "%s %d %s"',
- request.method,
- request.url,
- response.http_version,
- response.status_code,
- response.reason_phrase,
- )
-
- return response
-
- async def get(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `GET` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "GET",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def options(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send an `OPTIONS` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "OPTIONS",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def head(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `HEAD` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "HEAD",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def post(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `POST` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "POST",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def put(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `PUT` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "PUT",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def patch(
- self,
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `PATCH` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "PATCH",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def delete(
- self,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT,
- timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT,
- extensions: typing.Optional[RequestExtensions] = None,
- ) -> Response:
- """
- Send a `DELETE` request.
-
- **Parameters**: See `httpx.request`.
- """
- return await self.request(
- "DELETE",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- follow_redirects=follow_redirects,
- timeout=timeout,
- extensions=extensions,
- )
-
- async def aclose(self) -> None:
- """
- Close transport and proxies.
- """
- if self._state != ClientState.CLOSED:
- self._state = ClientState.CLOSED
-
- await self._transport.aclose()
- for proxy in self._mounts.values():
- if proxy is not None:
- await proxy.aclose()
-
- async def __aenter__(self: U) -> U:
- if self._state != ClientState.UNOPENED:
- msg = {
- ClientState.OPENED: "Cannot open a client instance more than once.",
- ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.",
- }[self._state]
- raise RuntimeError(msg)
-
- self._state = ClientState.OPENED
-
- await self._transport.__aenter__()
- for proxy in self._mounts.values():
- if proxy is not None:
- await proxy.__aenter__()
- return self
-
- async def __aexit__(
- self,
- exc_type: typing.Optional[typing.Type[BaseException]] = None,
- exc_value: typing.Optional[BaseException] = None,
- traceback: typing.Optional[TracebackType] = None,
- ) -> None:
- self._state = ClientState.CLOSED
-
- await self._transport.__aexit__(exc_type, exc_value, traceback)
- for proxy in self._mounts.values():
- if proxy is not None:
- await proxy.__aexit__(exc_type, exc_value, traceback)
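
For reference, a minimal sketch of driving the `AsyncClient` defined above. The URL is illustrative, and the snippet assumes `httpx` is installed with its default transport:

```python
import asyncio

import httpx


async def main() -> None:
    # One client instance reuses its connection pool across requests.
    async with httpx.AsyncClient(follow_redirects=True, timeout=10.0) as client:
        # Plain request: client.get() delegates to client.request("GET", ...).
        response = await client.get("https://example.org", params={"q": "demo"})
        response.raise_for_status()
        print(response.status_code, len(response.text))

        # Streaming request: the body is read in chunks instead of all at once.
        async with client.stream("GET", "https://example.org") as stream_response:
            async for chunk in stream_response.aiter_bytes():
                pass  # process each chunk here


if __name__ == "__main__":
    asyncio.run(main())
```
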
diff --git a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py b/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py
deleted file mode 100644
index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-
-module_path = os.path.dirname(__file__)
-fused = load(
- 'fused',
- sources=[
- os.path.join(module_path, 'fused_bias_act.cpp'),
- os.path.join(module_path, 'fused_bias_act_kernel.cu'),
- ],
-)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
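
The op above depends on a compiled CUDA extension (`fused_bias_act`). As a rough reference, the same forward computation (bias add, leaky ReLU, rescale) can be written in plain PyTorch; this is an unfused sketch rather than the extension itself, and the helper name is ours:

```python
import torch
import torch.nn.functional as F


def fused_leaky_relu_reference(x: torch.Tensor, bias: torch.Tensor,
                               negative_slope: float = 0.2,
                               scale: float = 2 ** 0.5) -> torch.Tensor:
    """Unfused equivalent: add a per-channel bias, apply leaky ReLU, rescale."""
    # Broadcast the (C,) bias across the remaining spatial dimensions.
    rest_dim = [1] * (x.ndim - 2)
    return F.leaky_relu(x + bias.view(1, -1, *rest_dim), negative_slope) * scale


# Example: same shape contract as the fused op above expects.
x = torch.randn(4, 64, 32, 32)
bias = torch.zeros(64)
y = fused_leaky_relu_reference(x, bias)
assert y.shape == x.shape
```
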
diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py
deleted file mode 100644
index ee089e60491b6e13904d457d9e30c99b1ac0dc10..0000000000000000000000000000000000000000
--- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py
+++ /dev/null
@@ -1,214 +0,0 @@
-import streamlit as st
-import pandas as pd
-import numpy as np
-import os
-import time
-import matplotlib.pyplot as plt
-import random
-import json
-import csv
-from extra_streamlit_components import tab_bar, TabBarItemData
-import matplotlib.pyplot as plt
-from datetime import datetime
-
-title = "Jouez avec nous !"
-sidebar_name = "Jeu"
-
-@st.cache_data
-def init_game():
- new = int(time.time())
- sentence_test = pd.read_csv('data/multilingue/sentence_test_extract.csv')
- sentence_test = sentence_test[4750:]
-    # Read the contents of the JSON file
- with open('data/multilingue/lan_to_language.json', 'r') as fichier:
- lan_to_language = json.load(fichier)
- t_now = time.time()
- return sentence_test, lan_to_language, new, t_now
-
-def find_indice(sent_selected):
- l = list(lan_to_language.keys())
- for i in range(len(l)):
- if l[i] == sentence_test['lan_code'].iloc[sent_selected]:
- return i
-
-@st.cache_data
-def set_game(new):
- nb_st = len(sentence_test)
- sent_sel = []
-    # Use a loop to generate 5 distinct random numbers
- while len(sent_sel) < 5:
- nombre = random.randint(0, nb_st)
- if nombre not in sent_sel:
- sent_sel.append(nombre)
-
- rep_possibles=[]
- for i in range(5):
- rep_possibles.append([find_indice(sent_sel[i])])
- while len(rep_possibles[i]) < 5:
- rep_possible = random.randint(0, 95)
- if rep_possible not in rep_possibles[i]:
- rep_possibles[i].append(rep_possible)
- random.shuffle(rep_possibles[i])
- return sent_sel, rep_possibles, new
-
-def calc_score(n_rep,duration):
-
- if n_rep==0: return 0
- s1 = n_rep*200
- if duration < 60:
- s2 = (60-duration)*200/60
- if n_rep==5:
- s2 *= 2.5
- else:
- s2 = max(-(duration-60)*100/60,-100)
- s = int(s1+s2)
- return s
-
-def read_leaderboard():
- return pd.read_csv('data/game_leaderboard.csv', index_col=False,encoding='utf8')
-
-def write_leaderboard(lb):
- lb['Nom'] = lb['Nom'].astype(str)
- lb['Rang'] = lb['Rang'].astype(int)
- lb.to_csv(path_or_buf='data/game_leaderboard.csv',columns=['Rang','Nom','Score','Timestamp','BR','Duree'],index=False, header=True,encoding='utf8')
-
-def display_leaderboard():
- lb = read_leaderboard()
- st.write("**Leaderboard :**")
- list_champ = """
- | Rang | Nom | Score |
- |------|------------|-------|"""
- if len(lb)>0:
- for i in range(len(lb)):
- list_champ += """
- | """+str(lb['Rang'].iloc[i])+""" | """+str(lb['Nom'].iloc[i])[:9]+""" | """+str(lb['Score'].iloc[i])+""" |"""
- st.markdown(list_champ, unsafe_allow_html=True )
- return lb
-
-def write_log(TS,Nom,Score,BR,Duree):
- log = pd.read_csv('data/game_log.csv', index_col=False,encoding='utf8')
- date_heure = datetime.fromtimestamp(TS)
- Date = date_heure.strftime('%Y-%m-%d %H:%M:%S')
- log = pd.concat([log, pd.DataFrame(data={'Date':[Date], 'Nom':[Nom],'Score':[Score],'BR':[BR],'Duree':[Duree]})], ignore_index=True)
- log.to_csv(path_or_buf='data/game_log.csv',columns=['Date','Nom','Score','BR','Duree'],index=False, header=True,encoding='utf8')
-
-def display_files():
- log = pd.read_csv('data/game_log.csv', index_col=False,encoding='utf8')
- lb = pd.read_csv('data/game_leaderboard.csv', index_col=False,encoding='utf8')
- st.dataframe(lb)
- st.dataframe(log)
-
-def run():
- global sentence_test, lan_to_language
-
- sentence_test, lan_to_language, new, t_debut = init_game()
-
- st.write("")
- st.title(title)
- st.write("#### **Etes vous un expert es Langues ?**\n")
- st.markdown(
- """
- Essayer de trouvez, sans aide, la langue des 5 phrases suivantes.
- Attention : Vous devez être le plus rapide possible !
- """, unsafe_allow_html=True
- )
- st.write("")
- player_name = st.text_input("Quel est votre nom ?")
-
- if player_name == 'display_files':
- display_files()
- return
-
- score = 0
- col1, col2 = st.columns([0.7,0.3])
- with col2:
- lb = display_leaderboard()
- with col1:
- sent_sel, rep_possibles, new = set_game(new)
- answer = [""] * 5
- l = list(lan_to_language.values())
- for i in range(5):
- answer[i] = st.radio("**:blue["+sentence_test['sentence'].iloc[sent_sel[i]]+"]**\n",[l[rep_possibles[i][0]],l[rep_possibles[i][1]],l[rep_possibles[i][2]], \
- l[rep_possibles[i][3]],l[rep_possibles[i][4]]], horizontal=True, key=i)
- t_previous_debut = t_debut
- t_debut = time.time()
-
- if st.button(label="Valider", type="primary"):
- st.cache_data.clear()
-
- nb_bonnes_reponses = 0
- for i in range(5):
- if lan_to_language[sentence_test['lan_code'].iloc[sent_sel[i]]]==answer[i]:
- nb_bonnes_reponses +=1
-
- t_fin = time.time()
- duration = t_fin - t_previous_debut
-
- score = calc_score(nb_bonnes_reponses,duration)
- write_log(time.time(),player_name,score,nb_bonnes_reponses,duration)
- if nb_bonnes_reponses >=4:
- st.write(":red[**Félicitations, vous avez "+str(nb_bonnes_reponses)+" bonnes réponses !**]")
- st.write(":red[Votre score est de "+str(score)+" points]")
- else:
- if nb_bonnes_reponses >1 : s="s"
- else: s=""
- st.write("**:red[Vous avez "+str(nb_bonnes_reponses)+" bonne"+s+" réponse"+s+".]**")
- if nb_bonnes_reponses >0 : s="s"
- else: s=""
- st.write(":red[Votre score est de "+str(score)+" point"+s+"]")
-
- st.write("Bonne réponses:")
- for i in range(5):
- st.write("- "+sentence_test['sentence'].iloc[sent_sel[i]]+" -> :blue[**"+lan_to_language[sentence_test['lan_code'].iloc[sent_sel[i]]]+"**]")
- new = int(time.time())
- st.button(label="Play again ?", type="primary")
-
- with col2:
- now = time.time()
-        # If the bottom score on the leaderboard is more than a week old, it is replaced by a more recent one
- renew_old = ((len(lb)>9) and (lb['Timestamp'].iloc[9])<(now-604800))
-
- if (score>0) and ((((score >= lb['Score'].min()) and (len(lb)>9)) or (len(lb)<=9)) or (pd.isna(lb['Score'].min())) or renew_old):
- if player_name not in lb['Nom'].tolist():
- if (((score >= lb['Score'].min()) and (len(lb)>9)) or (len(lb)<=9)) or (pd.isna(lb['Score'].min())) :
- lb = pd.concat([lb, pd.DataFrame(data={'Nom':[player_name],'Score':[score],'Timestamp':[now],'BR':[nb_bonnes_reponses],'Duree':[duration]})], ignore_index=True)
- lb = lb.sort_values(by=['Score', 'Timestamp'], ascending=[False, False]).reset_index()
- lb = lb.drop(lb.index[10:])
- else:
- st.write('2:',player_name)
- lb['Nom'].iloc[9]= player_name
- lb['Score'].iloc[9]= score
- lb['Timestamp'].iloc[9]=now
- lb['BR'].iloc[9]=nb_bonnes_reponses
- lb['Duree'].iloc[9]=duration
- lb = lb.reset_index()
- else:
- liste_Nom = lb['Nom'].tolist()
- for i,player in enumerate(liste_Nom):
- if player == player_name:
- if lb['Score'].iloc[i] < score:
- lb['Score'].iloc[i] = score
- lb['Timestamp'].iloc[i]=now
- lb = lb.sort_values(by=['Score', 'Timestamp'], ascending=[False, False]).reset_index()
- for i in range(len(lb)):
- if (i>0):
- if (lb['Score'].iloc[i]==lb['Score'].iloc[i-1]):
- lb['Rang'].iloc[i] = lb['Rang'].iloc[i-1]
- else:
- lb['Rang'].iloc[i] = i+1
- else:
- lb['Rang'].iloc[i] = i+1
- if player_name !="":
- write_leaderboard(lb)
-
-
- return
-
-
-
-
-
-
-
-
-
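
The scoring rule in `calc_score` is the one non-obvious piece of logic in this tab: 200 points per correct answer, a time bonus under 60 seconds (multiplied by 2.5 for a perfect 5/5), and a penalty capped at -100 beyond the minute. A self-contained restatement with worked values (the body mirrors the deleted function; the asserts are illustrative):

```python
def calc_score(n_rep: int, duration: float) -> int:
    # Same rule as game_tab.calc_score above.
    if n_rep == 0:
        return 0
    s1 = n_rep * 200
    if duration < 60:
        s2 = (60 - duration) * 200 / 60
        if n_rep == 5:
            s2 *= 2.5
    else:
        s2 = max(-(duration - 60) * 100 / 60, -100)
    return int(s1 + s2)


assert calc_score(0, 10) == 0     # no correct answer scores nothing
assert calc_score(5, 30) == 1250  # 1000 base + 2.5 * 100 time bonus
assert calc_score(3, 90) == 550   # 600 base - 50 time penalty
```
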
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py b/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py
deleted file mode 100644
index 335321bc67e7b3c7f1e715948e967388c3be05f9..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py
+++ /dev/null
@@ -1,737 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-import torch
-import numpy as np
-import re
-import os
-import random
-from pathlib import Path
-from types import SimpleNamespace
-from utils import download_ckpt
-from config import Config
-from netdissect import proggan, zdataset
-from . import biggan
-from . import stylegan
-from . import stylegan2
-from abc import abstractmethod, ABC as AbstractBaseClass
-from functools import singledispatch
-
-class BaseModel(AbstractBaseClass, torch.nn.Module):
-
- # Set parameters for identifying model from instance
- def __init__(self, model_name, class_name):
- super(BaseModel, self).__init__()
- self.model_name = model_name
- self.outclass = class_name
-
- # Stop model evaluation as soon as possible after
- # given layer has been executed, used to speed up
- # netdissect.InstrumentedModel::retain_layer().
- # Validate with tests/partial_forward_test.py
- # Can use forward() as fallback at the cost of performance.
- @abstractmethod
- def partial_forward(self, x, layer_name):
- pass
-
- # Generate batch of latent vectors
- @abstractmethod
- def sample_latent(self, n_samples=1, seed=None, truncation=None):
- pass
-
- # Maximum number of latents that can be provided
- # Typically one for each layer
- def get_max_latents(self):
- return 1
-
- # Name of primary latent space
- # E.g. StyleGAN can alternatively use W
- def latent_space_name(self):
- return 'Z'
-
- def get_latent_shape(self):
- return tuple(self.sample_latent(1).shape)
-
- def get_latent_dims(self):
- return np.prod(self.get_latent_shape())
-
- def set_output_class(self, new_class):
- self.outclass = new_class
-
- # Map from typical range [-1, 1] to [0, 1]
- def forward(self, x):
- out = self.model.forward(x)
- return 0.5*(out+1)
-
- # Generate images and convert to numpy
- def sample_np(self, z=None, n_samples=1, seed=None):
- if z is None:
- z = self.sample_latent(n_samples, seed=seed)
- elif isinstance(z, list):
- z = [torch.tensor(l).to(self.device) if not torch.is_tensor(l) else l for l in z]
- elif not torch.is_tensor(z):
- z = torch.tensor(z).to(self.device)
- img = self.forward(z)
- img_np = img.permute(0, 2, 3, 1).cpu().detach().numpy()
- return np.clip(img_np, 0.0, 1.0).squeeze()
-
- # For models that use part of latent as conditioning
- def get_conditional_state(self, z):
- return None
-
- # For models that use part of latent as conditioning
- def set_conditional_state(self, z, c):
- return z
-
- def named_modules(self, *args, **kwargs):
- return self.model.named_modules(*args, **kwargs)
-
-# PyTorch port of StyleGAN 2
-class StyleGAN2(BaseModel):
- def __init__(self, device, class_name, truncation=1.0, use_w=False):
- super(StyleGAN2, self).__init__('StyleGAN2', class_name or 'ffhq')
- self.device = device
- self.truncation = truncation
- self.latent_avg = None
- self.w_primary = use_w # use W as primary latent space?
-
- # Image widths
- configs = {
- # Converted NVIDIA official
- 'ffhq': 1024,
- 'car': 512,
- 'cat': 256,
- 'church': 256,
- 'horse': 256,
- # Tuomas
- 'bedrooms': 256,
- 'kitchen': 256,
- 'places': 256,
- 'lookbook': 512
- }
-
- assert self.outclass in configs, \
- f'Invalid StyleGAN2 class {self.outclass}, should be one of [{", ".join(configs.keys())}]'
-
- self.resolution = configs[self.outclass]
- self.name = f'StyleGAN2-{self.outclass}'
- self.has_latent_residual = True
- self.load_model()
- self.set_noise_seed(0)
-
- def latent_space_name(self):
- return 'W' if self.w_primary else 'Z'
-
- def use_w(self):
- self.w_primary = True
-
- def use_z(self):
- self.w_primary = False
-
- # URLs created with https://sites.google.com/site/gdocs2direct/
- def download_checkpoint(self, outfile):
- checkpoints = {
- 'horse': 'https://drive.google.com/uc?export=download&id=18SkqWAkgt0fIwDEf2pqeaenNi4OoCo-0',
- 'ffhq': 'https://drive.google.com/uc?export=download&id=1FJRwzAkV-XWbxgTwxEmEACvuqF5DsBiV',
- 'church': 'https://drive.google.com/uc?export=download&id=1HFM694112b_im01JT7wop0faftw9ty5g',
- 'car': 'https://drive.google.com/uc?export=download&id=1iRoWclWVbDBAy5iXYZrQnKYSbZUqXI6y',
- 'cat': 'https://drive.google.com/uc?export=download&id=15vJP8GDr0FlRYpE8gD7CdeEz2mXrQMgN',
- 'places': 'https://drive.google.com/uc?export=download&id=1X8-wIH3aYKjgDZt4KMOtQzN1m4AlCVhm',
- 'bedrooms': 'https://drive.google.com/uc?export=download&id=1nZTW7mjazs-qPhkmbsOLLA_6qws-eNQu',
- 'kitchen': 'https://drive.google.com/uc?export=download&id=15dCpnZ1YLAnETAPB0FGmXwdBclbwMEkZ',
- 'lookbook': 'https://drive.google.com/uc?export=download&id=1-F-RMkbHUv_S_k-_olh43mu5rDUMGYKe'
- }
-
- url = checkpoints[self.outclass]
- download_ckpt(url, outfile)
-
- def load_model(self):
- checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints')
- checkpoint = Path(checkpoint_root) / f'stylegan2/stylegan2_{self.outclass}_{self.resolution}.pt'
-
- self.model = stylegan2.Generator(self.resolution, 512, 8).to(self.device)
-
- if not checkpoint.is_file():
- os.makedirs(checkpoint.parent, exist_ok=True)
- self.download_checkpoint(checkpoint)
-
- ckpt = torch.load(checkpoint)
- self.model.load_state_dict(ckpt['g_ema'], strict=False)
- self.latent_avg = 0
-
- def sample_latent(self, n_samples=1, seed=None, truncation=None):
- if seed is None:
- seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state
-
- rng = np.random.RandomState(seed)
- z = torch.from_numpy(
- rng.standard_normal(512 * n_samples)
- .reshape(n_samples, 512)).float().to(self.device) #[N, 512]
-
- if self.w_primary:
- z = self.model.style(z)
-
- return z
-
- def get_max_latents(self):
- return self.model.n_latent
-
- def set_output_class(self, new_class):
- if self.outclass != new_class:
- raise RuntimeError('StyleGAN2: cannot change output class without reloading')
-
- def forward(self, x):
- x = x if isinstance(x, list) else [x]
- out, _ = self.model(x, noise=self.noise,
- truncation=self.truncation, truncation_latent=self.latent_avg, input_is_w=self.w_primary)
- return 0.5*(out+1)
-
- def partial_forward(self, x, layer_name):
- styles = x if isinstance(x, list) else [x]
- inject_index = None
- noise = self.noise
-
- if not self.w_primary:
- styles = [self.model.style(s) for s in styles]
-
- if len(styles) == 1:
- # One global latent
- inject_index = self.model.n_latent
- latent = self.model.strided_style(styles[0].unsqueeze(1).repeat(1, inject_index, 1)) # [N, 18, 512]
- elif len(styles) == 2:
- # Latent mixing with two latents
- if inject_index is None:
- inject_index = random.randint(1, self.model.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.model.n_latent - inject_index, 1)
-
- latent = self.model.strided_style(torch.cat([latent, latent2], 1))
- else:
- # One latent per layer
-            assert len(styles) == self.model.n_latent, f'Expected {self.model.n_latent} latents, got {len(styles)}'
- styles = torch.stack(styles, dim=1) # [N, 18, 512]
- latent = self.model.strided_style(styles)
-
- if 'style' in layer_name:
- return
-
- out = self.model.input(latent)
- if 'input' == layer_name:
- return
-
- out = self.model.conv1(out, latent[:, 0], noise=noise[0])
- if 'conv1' in layer_name:
- return
-
- skip = self.model.to_rgb1(out, latent[:, 1])
- if 'to_rgb1' in layer_name:
- return
-
- i = 1
- noise_i = 1
-
- for conv1, conv2, to_rgb in zip(
- self.model.convs[::2], self.model.convs[1::2], self.model.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise[noise_i])
- if f'convs.{i-1}' in layer_name:
- return
-
- out = conv2(out, latent[:, i + 1], noise=noise[noise_i + 1])
- if f'convs.{i}' in layer_name:
- return
-
- skip = to_rgb(out, latent[:, i + 2], skip)
- if f'to_rgbs.{i//2}' in layer_name:
- return
-
- i += 2
- noise_i += 2
-
- image = skip
-
- raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward')
-
- def set_noise_seed(self, seed):
- torch.manual_seed(seed)
- self.noise = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=self.device)]
-
- for i in range(3, self.model.log_size + 1):
- for _ in range(2):
- self.noise.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=self.device))
-
-# PyTorch port of StyleGAN 1
-class StyleGAN(BaseModel):
- def __init__(self, device, class_name, truncation=1.0, use_w=False):
- super(StyleGAN, self).__init__('StyleGAN', class_name or 'ffhq')
- self.device = device
- self.w_primary = use_w # is W primary latent space?
-
- configs = {
- # Official
- 'ffhq': 1024,
- 'celebahq': 1024,
- 'bedrooms': 256,
- 'cars': 512,
- 'cats': 256,
-
- # From https://github.com/justinpinkney/awesome-pretrained-stylegan
- 'vases': 1024,
- 'wikiart': 512,
- 'fireworks': 512,
- 'abstract': 512,
- 'anime': 512,
- 'ukiyo-e': 512,
- }
-
- assert self.outclass in configs, \
- f'Invalid StyleGAN class {self.outclass}, should be one of [{", ".join(configs.keys())}]'
-
- self.resolution = configs[self.outclass]
- self.name = f'StyleGAN-{self.outclass}'
- self.has_latent_residual = True
- self.load_model()
- self.set_noise_seed(0)
-
- def latent_space_name(self):
- return 'W' if self.w_primary else 'Z'
-
- def use_w(self):
- self.w_primary = True
-
- def use_z(self):
- self.w_primary = False
-
- def load_model(self):
- checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints')
- checkpoint = Path(checkpoint_root) / f'stylegan/stylegan_{self.outclass}_{self.resolution}.pt'
-
- self.model = stylegan.StyleGAN_G(self.resolution).to(self.device)
-
- urls_tf = {
- 'vases': 'https://thisvesseldoesnotexist.s3-us-west-2.amazonaws.com/public/network-snapshot-008980.pkl',
- 'fireworks': 'https://mega.nz/#!7uBHnACY!quIW-pjdDa7NqnZOYh1z5UemWwPOW6HkYSoJ4usCg9U',
- 'abstract': 'https://mega.nz/#!vCQyHQZT!zdeOg3VvT4922Z2UfxO51xgAfJD-NAK2nW7H_jMlilU',
- 'anime': 'https://mega.nz/#!vawjXISI!F7s13yRicxDA3QYqYDL2kjnc2K7Zk3DwCIYETREmBP4',
- 'ukiyo-e': 'https://drive.google.com/uc?id=1CHbJlci9NhVFifNQb3vCGu6zw4eqzvTd',
- }
-
- urls_torch = {
- 'celebahq': 'https://drive.google.com/uc?export=download&id=1lGcRwNoXy_uwXkD6sy43aAa-rMHRR7Ad',
- 'bedrooms': 'https://drive.google.com/uc?export=download&id=1r0_s83-XK2dKlyY3WjNYsfZ5-fnH8QgI',
- 'ffhq': 'https://drive.google.com/uc?export=download&id=1GcxTcLDPYxQqcQjeHpLUutGzwOlXXcks',
- 'cars': 'https://drive.google.com/uc?export=download&id=1aaUXHRHjQ9ww91x4mtPZD0w50fsIkXWt',
- 'cats': 'https://drive.google.com/uc?export=download&id=1JzA5iiS3qPrztVofQAjbb0N4xKdjOOyV',
- 'wikiart': 'https://drive.google.com/uc?export=download&id=1fN3noa7Rsl9slrDXsgZVDsYFxV0O08Vx',
- }
-
- if not checkpoint.is_file():
- os.makedirs(checkpoint.parent, exist_ok=True)
- if self.outclass in urls_torch:
- download_ckpt(urls_torch[self.outclass], checkpoint)
- else:
- checkpoint_tf = checkpoint.with_suffix('.pkl')
- if not checkpoint_tf.is_file():
- download_ckpt(urls_tf[self.outclass], checkpoint_tf)
- print('Converting TensorFlow checkpoint to PyTorch')
- self.model.export_from_tf(checkpoint_tf)
-
- self.model.load_weights(checkpoint)
-
- def sample_latent(self, n_samples=1, seed=None, truncation=None):
- if seed is None:
- seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state
-
- rng = np.random.RandomState(seed)
- noise = torch.from_numpy(
- rng.standard_normal(512 * n_samples)
- .reshape(n_samples, 512)).float().to(self.device) #[N, 512]
-
- if self.w_primary:
- noise = self.model._modules['g_mapping'].forward(noise)
-
- return noise
-
- def get_max_latents(self):
- return 18
-
- def set_output_class(self, new_class):
- if self.outclass != new_class:
- raise RuntimeError('StyleGAN: cannot change output class without reloading')
-
- def forward(self, x):
- out = self.model.forward(x, latent_is_w=self.w_primary)
- return 0.5*(out+1)
-
- # Run model only until given layer
- def partial_forward(self, x, layer_name):
- mapping = self.model._modules['g_mapping']
- G = self.model._modules['g_synthesis']
- trunc = self.model._modules.get('truncation', lambda x : x)
-
- if not self.w_primary:
- x = mapping.forward(x) # handles list inputs
-
- if isinstance(x, list):
- x = torch.stack(x, dim=1)
- else:
- x = x.unsqueeze(1).expand(-1, 18, -1)
-
- # Whole mapping
- if 'g_mapping' in layer_name:
- return
-
- x = trunc(x)
- if layer_name == 'truncation':
- return
-
- # Get names of children
- def iterate(m, name, seen):
- children = getattr(m, '_modules', [])
- if len(children) > 0:
- for child_name, module in children.items():
- seen += iterate(module, f'{name}.{child_name}', seen)
- return seen
- else:
- return [name]
-
- # Generator
- batch_size = x.size(0)
- for i, (n, m) in enumerate(G.blocks.items()): # InputBlock or GSynthesisBlock
- if i == 0:
- r = m(x[:, 2*i:2*i+2])
- else:
- r = m(r, x[:, 2*i:2*i+2])
-
- children = iterate(m, f'g_synthesis.blocks.{n}', [])
- for c in children:
- if layer_name in c: # substring
- return
-
- raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward')
-
-
- def set_noise_seed(self, seed):
- G = self.model._modules['g_synthesis']
-
- def for_each_child(this, name, func):
- children = getattr(this, '_modules', [])
- for child_name, module in children.items():
- for_each_child(module, f'{name}.{child_name}', func)
- func(this, name)
-
- def modify(m, name):
- if isinstance(m, stylegan.NoiseLayer):
- H, W = [int(s) for s in name.split('.')[2].split('x')]
- torch.random.manual_seed(seed)
- m.noise = torch.randn(1, 1, H, W, device=self.device, dtype=torch.float32)
- #m.noise = 1.0 # should be [N, 1, H, W], but this also works
-
- for_each_child(G, 'g_synthesis', modify)
-
-class GANZooModel(BaseModel):
- def __init__(self, device, model_name):
- super(GANZooModel, self).__init__(model_name, 'default')
- self.device = device
- self.base_model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub',
- model_name, pretrained=True, useGPU=(device.type == 'cuda'))
- self.model = self.base_model.netG.to(self.device)
- self.name = model_name
- self.has_latent_residual = False
-
- def sample_latent(self, n_samples=1, seed=0, truncation=None):
- # Uses torch.randn
- noise, _ = self.base_model.buildNoiseData(n_samples)
- return noise
-
- # Don't bother for now
- def partial_forward(self, x, layer_name):
- return self.forward(x)
-
- def get_conditional_state(self, z):
- return z[:, -20:] # last 20 = conditioning
-
- def set_conditional_state(self, z, c):
- z[:, -20:] = c
- return z
-
- def forward(self, x):
- out = self.base_model.test(x)
- return 0.5*(out+1)
-
-
-class ProGAN(BaseModel):
- def __init__(self, device, lsun_class=None):
- super(ProGAN, self).__init__('ProGAN', lsun_class)
- self.device = device
-
- # These are downloaded by GANDissect
- valid_classes = [ 'bedroom', 'churchoutdoor', 'conferenceroom', 'diningroom', 'kitchen', 'livingroom', 'restaurant' ]
- assert self.outclass in valid_classes, \
- f'Invalid LSUN class {self.outclass}, should be one of {valid_classes}'
-
- self.load_model()
- self.name = f'ProGAN-{self.outclass}'
- self.has_latent_residual = False
-
- def load_model(self):
- checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints')
- checkpoint = Path(checkpoint_root) / f'progan/{self.outclass}_lsun.pth'
-
- if not checkpoint.is_file():
- os.makedirs(checkpoint.parent, exist_ok=True)
- url = f'http://netdissect.csail.mit.edu/data/ganmodel/karras/{self.outclass}_lsun.pth'
- download_ckpt(url, checkpoint)
-
- self.model = proggan.from_pth_file(str(checkpoint.resolve())).to(self.device)
-
- def sample_latent(self, n_samples=1, seed=None, truncation=None):
- if seed is None:
- seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state
- noise = zdataset.z_sample_for_model(self.model, n_samples, seed=seed)[...]
- return noise.to(self.device)
-
- def forward(self, x):
- if isinstance(x, list):
- assert len(x) == 1, "ProGAN only supports a single global latent"
- x = x[0]
-
- out = self.model.forward(x)
- return 0.5*(out+1)
-
- # Run model only until given layer
- def partial_forward(self, x, layer_name):
- assert isinstance(self.model, torch.nn.Sequential), 'Expected sequential model'
-
- if isinstance(x, list):
- assert len(x) == 1, "ProGAN only supports a single global latent"
- x = x[0]
-
- x = x.view(x.shape[0], x.shape[1], 1, 1)
- for name, module in self.model._modules.items(): # ordered dict
- x = module(x)
- if name == layer_name:
- return
-
- raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward')
-
-
-class BigGAN(BaseModel):
- def __init__(self, device, resolution, class_name, truncation=1.0):
- super(BigGAN, self).__init__(f'BigGAN-{resolution}', class_name)
- self.device = device
- self.truncation = truncation
- self.load_model(f'biggan-deep-{resolution}')
- self.set_output_class(class_name or 'husky')
- self.name = f'BigGAN-{resolution}-{self.outclass}-t{self.truncation}'
- self.has_latent_residual = True
-
-    # Default implementation fails without an internet
- # connection, even if the model has been cached
- def load_model(self, name):
- if name not in biggan.model.PRETRAINED_MODEL_ARCHIVE_MAP:
- raise RuntimeError('Unknown BigGAN model name', name)
-
- checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints')
- model_path = Path(checkpoint_root) / name
-
- os.makedirs(model_path, exist_ok=True)
-
- model_file = model_path / biggan.model.WEIGHTS_NAME
- config_file = model_path / biggan.model.CONFIG_NAME
- model_url = biggan.model.PRETRAINED_MODEL_ARCHIVE_MAP[name]
- config_url = biggan.model.PRETRAINED_CONFIG_ARCHIVE_MAP[name]
-
- for filename, url in ((model_file, model_url), (config_file, config_url)):
- if not filename.is_file():
- print('Downloading', url)
- with open(filename, 'wb') as f:
- if url.startswith("s3://"):
- biggan.s3_get(url, f)
- else:
- biggan.http_get(url, f)
-
- self.model = biggan.BigGAN.from_pretrained(model_path).to(self.device)
-
- def sample_latent(self, n_samples=1, truncation=None, seed=None):
- if seed is None:
- seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state
-
- noise_vector = biggan.truncated_noise_sample(truncation=truncation or self.truncation, batch_size=n_samples, seed=seed)
- noise = torch.from_numpy(noise_vector) #[N, 128]
-
- return noise.to(self.device)
-
- # One extra for gen_z
- def get_max_latents(self):
- return len(self.model.config.layers) + 1
-
- def get_conditional_state(self, z):
- return self.v_class
-
- def set_conditional_state(self, z, c):
- self.v_class = c
-
- def is_valid_class(self, class_id):
- if isinstance(class_id, int):
- return class_id < 1000
- elif isinstance(class_id, str):
- return biggan.one_hot_from_names([class_id.replace(' ', '_')]) is not None
- else:
- raise RuntimeError(f'Unknown class identifier {class_id}')
-
- def set_output_class(self, class_id):
- if isinstance(class_id, int):
- self.v_class = torch.from_numpy(biggan.one_hot_from_int([class_id])).to(self.device)
- self.outclass = f'class{class_id}'
- elif isinstance(class_id, str):
- self.outclass = class_id.replace(' ', '_')
- self.v_class = torch.from_numpy(biggan.one_hot_from_names([class_id])).to(self.device)
- else:
- raise RuntimeError(f'Unknown class identifier {class_id}')
-
- def forward(self, x):
- # Duplicate along batch dimension
- if isinstance(x, list):
- c = self.v_class.repeat(x[0].shape[0], 1)
- class_vector = len(x)*[c]
- else:
- class_vector = self.v_class.repeat(x.shape[0], 1)
- out = self.model.forward(x, class_vector, self.truncation) # [N, 3, 128, 128], in [-1, 1]
- return 0.5*(out+1)
-
- # Run model only until given layer
- # Used to speed up PCA sample collection
- def partial_forward(self, x, layer_name):
- if layer_name in ['embeddings', 'generator.gen_z']:
- n_layers = 0
- elif 'generator.layers' in layer_name:
-            layer_base = re.match(r'^generator\.layers\.[0-9]+', layer_name)[0]
- n_layers = int(layer_base.split('.')[-1]) + 1
- else:
- n_layers = len(self.model.config.layers)
-
- if not isinstance(x, list):
- x = self.model.n_latents*[x]
-
- if isinstance(self.v_class, list):
-            labels = [c.repeat(x[0].shape[0], 1) for c in self.v_class]
- embed = [self.model.embeddings(l) for l in labels]
- else:
- class_label = self.v_class.repeat(x[0].shape[0], 1)
- embed = len(x)*[self.model.embeddings(class_label)]
-
- assert len(x) == self.model.n_latents, f'Expected {self.model.n_latents} latents, got {len(x)}'
-        assert len(embed) == self.model.n_latents, f'Expected {self.model.n_latents} class vectors, got {len(embed)}'
-
- cond_vectors = [torch.cat((z, e), dim=1) for (z, e) in zip(x, embed)]
-
- # Generator forward
- z = self.model.generator.gen_z(cond_vectors[0])
- z = z.view(-1, 4, 4, 16 * self.model.generator.config.channel_width)
- z = z.permute(0, 3, 1, 2).contiguous()
-
- cond_idx = 1
- for i, layer in enumerate(self.model.generator.layers[:n_layers]):
- if isinstance(layer, biggan.GenBlock):
- z = layer(z, cond_vectors[cond_idx], self.truncation)
- cond_idx += 1
- else:
- z = layer(z)
-
- return None
-
-# Version 1: separate parameters
-@singledispatch
-def get_model(name, output_class, device, **kwargs):
- # Check if optionally provided existing model can be reused
- inst = kwargs.get('inst', None)
- model = kwargs.get('model', None)
-
- if inst or model:
- cached = model or inst.model
-
- network_same = (cached.model_name == name)
- outclass_same = (cached.outclass == output_class)
- can_change_class = ('BigGAN' in name)
-
- if network_same and (outclass_same or can_change_class):
- cached.set_output_class(output_class)
- return cached
-
- if name == 'DCGAN':
- import warnings
- warnings.filterwarnings("ignore", message="nn.functional.tanh is deprecated")
- model = GANZooModel(device, 'DCGAN')
- elif name == 'ProGAN':
- model = ProGAN(device, output_class)
- elif 'BigGAN' in name:
- assert '-' in name, 'Please specify BigGAN resolution, e.g. BigGAN-512'
- model = BigGAN(device, name.split('-')[-1], class_name=output_class)
- elif name == 'StyleGAN':
- model = StyleGAN(device, class_name=output_class)
- elif name == 'StyleGAN2':
- model = StyleGAN2(device, class_name=output_class)
- else:
- raise RuntimeError(f'Unknown model {name}')
-
- return model
-
-# Version 2: Config object
-@get_model.register(Config)
-def _(cfg, device, **kwargs):
- kwargs['use_w'] = kwargs.get('use_w', cfg.use_w) # explicit arg can override cfg
- return get_model(cfg.model, cfg.output_class, device, **kwargs)
-
-# Version 1: separate parameters
-@singledispatch
-def get_instrumented_model(name, output_class, layers, device, **kwargs):
- model = get_model(name, output_class, device, **kwargs)
- model.eval()
-
- inst = kwargs.get('inst', None)
- if inst:
- inst.close()
-
- if not isinstance(layers, list):
- layers = [layers]
-
- # Verify given layer names
- module_names = [name for (name, _) in model.named_modules()]
- for layer_name in layers:
- if not layer_name in module_names:
- print(f"Layer '{layer_name}' not found in model!")
- print("Available layers:", '\n'.join(module_names))
- raise RuntimeError(f"Unknown layer '{layer_name}''")
-
- # Reset StyleGANs to z mode for shape annotation
- if hasattr(model, 'use_z'):
- model.use_z()
-
- from netdissect.modelconfig import create_instrumented_model
- inst = create_instrumented_model(SimpleNamespace(
- model = model,
- layers = layers,
- cuda = device.type == 'cuda',
- gen = True,
- latent_shape = model.get_latent_shape()
- ))
-
- if kwargs.get('use_w', False):
- model.use_w()
-
- return inst
-
-# Version 2: Config object
-@get_instrumented_model.register(Config)
-def _(cfg, device, **kwargs):
- kwargs['use_w'] = kwargs.get('use_w', cfg.use_w) # explicit arg can override cfg
- return get_instrumented_model(cfg.model, cfg.output_class, cfg.layer, device, **kwargs)
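
The two `get_model` / `get_instrumented_model` overloads above dispatch on the first argument via `functools.singledispatch`: a plain string takes the separate-parameter path, while a `Config` object is unpacked into the same call. A hedged sketch of both call styles follows; the model name, output class, and `Config` fields are illustrative assumptions, not verified against this repo's checkpoints.

```python
# Hedged sketch only: 'BigGAN-256'/'husky' and the Config fields are assumptions.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Version 1: separate parameters -> builds (or reuses) the wrapper directly
model = get_model('BigGAN-256', 'husky', device)

# Version 2: a Config object carrying the same fields (a layer name is needed for instrumentation)
# cfg = Config(model='StyleGAN2', output_class='ffhq', layer='convs.0', use_w=True)
# inst = get_instrumented_model(cfg, device)   # returns the hooked, instrumented model
```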
diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py b/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py
deleted file mode 100644
index b83d7d5e108ff52eb9c2c8701697684e1fd88844..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py
+++ /dev/null
@@ -1,64 +0,0 @@
-model = dict(
- type='DETR',
- backbone=dict(type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(3, ),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='pytorch',
- init_cfg=dict(type='Pretrained',
- checkpoint='torchvision://resnet50')),
- bbox_head=dict(type='DETRHead',
- num_classes=80,
- in_channels=2048,
- transformer=dict(
- type='Transformer',
- encoder=dict(type='DetrTransformerEncoder',
- num_layers=6,
- transformerlayers=dict(
- type='BaseTransformerLayer',
- attn_cfgs=[
- dict(type='MultiheadAttention',
- embed_dims=256,
- num_heads=8,
- dropout=0.1)
- ],
- feedforward_channels=2048,
- ffn_dropout=0.1,
- operation_order=('self_attn', 'norm',
- 'ffn', 'norm'))),
- decoder=dict(
- type='DetrTransformerDecoder',
- return_intermediate=True,
- num_layers=6,
- transformerlayers=dict(
- type='DetrTransformerDecoderLayer',
- attn_cfgs=dict(type='MultiheadAttention',
- embed_dims=256,
- num_heads=8,
- dropout=0.1),
- feedforward_channels=2048,
- ffn_dropout=0.1,
- operation_order=('self_attn', 'norm',
- 'cross_attn', 'norm', 'ffn',
- 'norm')),
- )),
- positional_encoding=dict(type='SinePositionalEncoding',
- num_feats=128,
- normalize=True),
- loss_cls=dict(type='CrossEntropyLoss',
- bg_cls_weight=0.1,
- use_sigmoid=False,
- loss_weight=1.0,
- class_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=5.0),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0)),
- # training and testing settings
- train_cfg=dict(assigner=dict(
- type='HungarianAssigner',
- cls_cost=dict(type='ClassificationCost', weight=1.),
- reg_cost=dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'),
- iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0))),
- test_cfg=dict(max_per_img=100))
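
This `_base_` file only declares the DETR model dictionary; it is meant to be composed by MMDetection's config system. A hedged sketch of loading it directly with the MMDetection 2.x APIs (the config path is illustrative):

```python
# Hedged sketch, assuming the MMDetection 2.x API; the path is illustrative.
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/_base_/models/detr_r50.py')
detector = build_detector(cfg.model)   # train_cfg/test_cfg are already nested in cfg.model
print(type(detector).__name__)         # expected: 'DETR'
```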
diff --git a/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py b/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py
deleted file mode 100644
index d52febcbbe816bdd3d1e07f2d042e115ae330442..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import cv2
-import numpy as np
-from pycocotools.coco import COCO
-
-import os
-
-from ..dataloading import get_yolox_datadir
-from .datasets_wrapper import Dataset
-
-
-class MOTDataset(Dataset):
- """
- COCO dataset class.
- """
-
- def __init__(
- self,
- data_dir=None,
- json_file="train_half.json",
- name="train",
- img_size=(608, 1088),
- preproc=None,
- ):
- """
- COCO dataset initialization. Annotation data are read into memory by COCO API.
- Args:
- data_dir (str): dataset root directory
- json_file (str): COCO json file name
- name (str): COCO data name (e.g. 'train2017' or 'val2017')
- img_size (int): target image size after pre-processing
- preproc: data augmentation strategy
- """
- super().__init__(img_size)
- if data_dir is None:
- data_dir = os.path.join(get_yolox_datadir(), "mot")
- self.data_dir = data_dir
- self.json_file = json_file
-
- self.coco = COCO(os.path.join(self.data_dir, "annotations", self.json_file))
- self.ids = self.coco.getImgIds()
- self.class_ids = sorted(self.coco.getCatIds())
- cats = self.coco.loadCats(self.coco.getCatIds())
- self._classes = tuple([c["name"] for c in cats])
- self.annotations = self._load_coco_annotations()
- self.name = name
- self.img_size = img_size
- self.preproc = preproc
-
- def __len__(self):
- return len(self.ids)
-
- def _load_coco_annotations(self):
- return [self.load_anno_from_ids(_ids) for _ids in self.ids]
-
- def load_anno_from_ids(self, id_):
- im_ann = self.coco.loadImgs(id_)[0]
- width = im_ann["width"]
- height = im_ann["height"]
- frame_id = im_ann["frame_id"]
- video_id = im_ann["video_id"]
- anno_ids = self.coco.getAnnIds(imgIds=[int(id_)], iscrowd=False)
- annotations = self.coco.loadAnns(anno_ids)
- objs = []
- for obj in annotations:
- x1 = obj["bbox"][0]
- y1 = obj["bbox"][1]
- x2 = x1 + obj["bbox"][2]
- y2 = y1 + obj["bbox"][3]
- if obj["area"] > 0 and x2 >= x1 and y2 >= y1:
- obj["clean_bbox"] = [x1, y1, x2, y2]
- objs.append(obj)
-
- num_objs = len(objs)
-
- res = np.zeros((num_objs, 6))
-
- for ix, obj in enumerate(objs):
- cls = self.class_ids.index(obj["category_id"])
- res[ix, 0:4] = obj["clean_bbox"]
- res[ix, 4] = cls
- res[ix, 5] = obj["track_id"]
-
- file_name = im_ann["file_name"] if "file_name" in im_ann else "{:012}".format(id_) + ".jpg"
- img_info = (height, width, frame_id, video_id, file_name)
-
- del im_ann, annotations
-
- return (res, img_info, file_name)
-
- def load_anno(self, index):
- return self.annotations[index][0]
-
- def pull_item(self, index):
- id_ = self.ids[index]
-
- res, img_info, file_name = self.annotations[index]
- # load image and preprocess
- img_file = os.path.join(
- self.data_dir, self.name, file_name
- )
- img = cv2.imread(img_file)
- assert img is not None
-
- return img, res.copy(), img_info, np.array([id_])
-
- @Dataset.resize_getitem
- def __getitem__(self, index):
- """
- One image / label pair for the given index is picked up and pre-processed.
-
- Args:
- index (int): data index
-
- Returns:
- img (numpy.ndarray): pre-processed image
- padded_labels (torch.Tensor): pre-processed label data.
- The shape is :math:`[max_labels, 5]`.
- each label consists of [class, xc, yc, w, h]:
- class (float): class index.
- xc, yc (float) : center of bbox whose values range from 0 to 1.
- w, h (float) : size of bbox whose values range from 0 to 1.
- info_img : tuple of h, w, nh, nw, dx, dy.
- h, w (int): original shape of the image
- nh, nw (int): shape of the resized image without padding
- dx, dy (int): pad size
- img_id (int): same as the input index. Used for evaluation.
- """
- img, target, img_info, img_id = self.pull_item(index)
-
- if self.preproc is not None:
- img, target = self.preproc(img, target, self.input_dim)
- return img, target, img_info, img_id
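
A hedged usage sketch of `MOTDataset`: the directory layout assumed by the loader above is `<data_dir>/annotations/<json_file>` for labels and `<data_dir>/<name>/<file_name>` for frames; the path below is illustrative.

```python
# Hedged sketch; paths are illustrative and must match the layout described above.
dataset = MOTDataset(
    data_dir="datasets/mot",
    json_file="train_half.json",
    name="train",
    img_size=(608, 1088),
    preproc=None,            # return raw BGR frames, no augmentation
)
img, target, img_info, img_id = dataset[0]
print(img.shape, target.shape)   # (H, W, 3) frame, (num_objs, 6) = [x1, y1, x2, y2, class, track_id]
```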
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py
deleted file mode 100644
index f32532e070e67b2cd25771aea1ad10e7e5a5dc69..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# # Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py
-"""
-Various positional encodings for the transformer.
-"""
-import math
-
-import torch
-from torch import nn
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, x, mask=None):
- if mask is None:
- mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
-
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
- def __repr__(self, _repr_indent=4):
- head = "Positional encoding " + self.__class__.__name__
- body = [
- "num_pos_feats: {}".format(self.num_pos_feats),
- "temperature: {}".format(self.temperature),
- "normalize: {}".format(self.normalize),
- "scale: {}".format(self.scale),
- ]
- # _repr_indent = 4
- lines = [head] + [" " * _repr_indent + line for line in body]
- return "\n".join(lines)
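
A minimal shape check for `PositionEmbeddingSine`: with no mask every position counts as valid, and the output stacks the y- and x-encodings along the channel axis, giving `2 * num_pos_feats` channels.

```python
import torch

pe = PositionEmbeddingSine(num_pos_feats=128, normalize=True)
feats = torch.randn(2, 256, 32, 32)   # (N, C, H, W) feature map
pos = pe(feats)                       # mask defaults to all-False (every position valid)
print(pos.shape)                      # torch.Size([2, 256, 32, 32]) == (N, 2*num_pos_feats, H, W)
```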
diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py b/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py
deleted file mode 100644
index 826f29cfb5d29d22044d07c14068f1678a5ae003..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) EPFL VILAB.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-import wandb
-
-import utils
-from utils.datasets_semseg import (ade_classes, hypersim_classes,
- nyu_v2_40_classes)
-
-
-def inv_norm(tensor: torch.Tensor) -> torch.Tensor:
- """Inverse of the normalization that was done during pre-processing
- """
- inv_normalize = transforms.Normalize(
- mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
- std=[1 / 0.229, 1 / 0.224, 1 / 0.225])
-
- return inv_normalize(tensor)
-
-
-@torch.no_grad()
-def log_semseg_wandb(
- images: torch.Tensor,
- preds: List[np.ndarray],
- gts: List[np.ndarray],
- depth_gts: List[np.ndarray],
- dataset_name: str = 'ade20k',
- image_count=8,
- prefix=""
- ):
-
- if dataset_name == 'ade20k':
- classes = ade_classes()
- elif dataset_name == 'hypersim':
- classes = hypersim_classes()
- elif dataset_name == 'nyu':
- classes = nyu_v2_40_classes()
- else:
- raise ValueError(f'Dataset {dataset_name} not supported for logging to wandb.')
-
- class_labels = {i: cls for i, cls in enumerate(classes)}
- class_labels[len(classes)] = "void"
- class_labels[utils.SEG_IGNORE_INDEX] = "ignore"
-
- image_count = min(len(images), image_count)
-
- images = images[:image_count]
- preds = preds[:image_count]
- gts = gts[:image_count]
- depth_gts = depth_gts[:image_count] if len(depth_gts) > 0 else None
-
- semseg_images = {}
-
- for i, (image, pred, gt) in enumerate(zip(images, preds, gts)):
- image = inv_norm(image)
- pred[gt == utils.SEG_IGNORE_INDEX] = utils.SEG_IGNORE_INDEX
-
- semseg_image = wandb.Image(image, masks={
- "predictions": {
- "mask_data": pred,
- "class_labels": class_labels,
- },
- "ground_truth": {
- "mask_data": gt,
- "class_labels": class_labels,
- }
- })
-
- semseg_images[f"{prefix}_{i}"] = semseg_image
-
- if depth_gts is not None:
- semseg_images[f"{prefix}_{i}_depth"] = wandb.Image(depth_gts[i])
-
- wandb.log(semseg_images, commit=False)
-
-
-@torch.no_grad()
-def log_taskonomy_wandb(
- preds: Dict[str, torch.Tensor],
- gts: Dict[str, torch.Tensor],
- image_count=8,
- prefix=""
- ):
- pred_tasks = list(preds.keys())
- gt_tasks = list(gts.keys())
- if 'mask_valid' in gt_tasks:
- gt_tasks.remove('mask_valid')
-
- image_count = min(len(preds[pred_tasks[0]]), image_count)
-
- all_images = {}
-
- for i in range(image_count):
-
- # Log GTs
- for task in gt_tasks:
- gt_img = gts[task][i]
- if task == 'rgb':
- gt_img = inv_norm(gt_img)
- if gt_img.shape[0] == 1:
- gt_img = gt_img[0]
- elif gt_img.shape[0] == 2:
- gt_img = F.pad(gt_img, (0,0,0,0,0,1), mode='constant', value=0.0)
-
- gt_img = wandb.Image(gt_img, caption=f'GT #{i}')
- key = f'{prefix}_gt_{task}'
- if key not in all_images:
- all_images[key] = [gt_img]
- else:
- all_images[key].append(gt_img)
-
- # Log preds
- for task in pred_tasks:
- pred_img = preds[task][i]
- if task == 'rgb':
- pred_img = inv_norm(pred_img)
- if pred_img.shape[0] == 1:
- pred_img = pred_img[0]
- elif pred_img.shape[0] == 2:
- pred_img = F.pad(pred_img, (0,0,0,0,0,1), mode='constant', value=0.0)
-
- pred_img = wandb.Image(pred_img, caption=f'Pred #{i}')
- key = f'{prefix}_pred_{task}'
- if key not in all_images:
- all_images[key] = [pred_img]
- else:
- all_images[key].append(pred_img)
-
- wandb.log(all_images, commit=False)
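
`inv_norm` simply inverts the standard ImageNet normalization so logged images are viewable. A quick sanity check, using the same mean/std constants hard-coded in the function:

```python
import torch
import torchvision.transforms as transforms

norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
x = torch.rand(3, 224, 224)
print(torch.allclose(inv_norm(norm(x)), x, atol=1e-5))   # True up to float error
```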
diff --git a/spaces/Ekimetrics/Biomap/biomap/utils_gee.py b/spaces/Ekimetrics/Biomap/biomap/utils_gee.py
deleted file mode 100644
index 24603ad7c4552526a1159ca9afac5431a6b6efc6..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/Biomap/biomap/utils_gee.py
+++ /dev/null
@@ -1,174 +0,0 @@
-import io
-import requests
-import ee
-import numpy as np
-import matplotlib.pyplot as plt
-import os
-from pathlib import Path
-import logging
-import json
-
-#Initialize
-service_account = os.environ["SERVICE_ACCOUNT_EE"]
-private_key = json.loads(os.environ["PRIVATE_KEY_EE"])
-
-with open(os.path.join(os.path.dirname(__file__), '.private-key-2.json'), "w") as ipt:
- json.dump(private_key, ipt)
-
-credentials = ee.ServiceAccountCredentials(service_account, os.path.join(os.path.dirname(__file__), '.private-key-2.json'))
-ee.Initialize(credentials)
-
-def get_image(location, d1, d2):
- logging.info(f"getting image for {d1} to {d2} at location {location}")
- img = extract_img(location, d1, d2)
-
- img_test = transform_ee_img(
- img, max=0.3
- )
- return img_test
-
-#delete clouds
-def maskS2clouds(image):
- qa = image.select('QA60');
-
- # // Bits 10 and 11 are clouds and cirrus, respectively.
- cloudBitMask = 1 << 10;
- cirrusBitMask = 1 << 11;
-
- # // Both flags should be set to zero, indicating clear conditions.
-    mask = qa.bitwiseAnd(cloudBitMask).eq(0).And(qa.bitwiseAnd(cirrusBitMask).eq(0))
-
- return image.updateMask(mask).divide(10000);
-
-
-#find ee_img
-def extract_ee_img(location,start_date,end_date, width = 0.01 , len = 0.01) :
- """Extract the earth engine image
-
- Args:
- location (list[float]):
- start_date (str): the start date for finding an image
- end_date (str): the end date for finding an image
- width (float, optional): _description_. Defaults to 0.01.
- len (float, optional): _description_. Defaults to 0.01.
-
- Returns:
- _type_: _description_
- """
- # define the polygone
- polygone =[[[float(location[0])-0.01,float(location[1])+0.01],
- [float(location[0])-0.01,float(location[1])-0.01],
- [float(location[0])+0.01,float(location[1])-0.01],
- [float(location[0])+0.01,float(location[1])+0.01],
- ]]
-
- #define the ee geometry
- geometry = ee.Geometry.Polygon(polygone, None, False);
-
- #extract the dataset
- dataset = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')\
- .filterDate(start_date, end_date)\
- .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',1))\
- .map(maskS2clouds)
- return dataset.mean(), geometry
-
-
-
-# Get URL
-def get_url(ee_img, geometry, scale=5):
-    """Get the download URL for an Earth Engine image over a geometry
-
- Args:
-        ee_img (ee.ImageCollection): metadata on the image
- geometry (ee.Geometry.Polygon): geometry of the desired landscape
- scale (int, optional): _description_. Defaults to 5.
-
- Returns:
- str: the url to use to ask the server
- """
- region = geometry
-
- # collectionList = ee_img.toList(ee_img.size())
- # collectionSize = collectionList.size().getInfo()
- # for i in xrange(collectionSize):
- # ee.batch.Export.image.toDrive(
- # image = ee.Image(collectionList.get(i)).clip(rectangle),
- # fileNamePrefix = 'foo' + str(i + 1),
- # dimensions = '128x128').start()
-
- url = ee_img.getDownloadURL({
- # 'min': 0.0,
- # 'max': 0.3,
- 'bands': ['B4', 'B3', 'B2'],
- 'region' : region,
- 'scale' : scale,
- 'format' : 'NPY'
- })
-
- return url
-
-def extract_np_from_url(url):
- """extract a numpy array based on a url
-
- Args:
- url (str): _description_
-
- Returns:
- numpyarray: response from earth engine as numpy
- """
- #get the response from url
- response = requests.get(url)
-
- #transform it into numpy
- data = np.load(io.BytesIO(response.content))
-
- #transform numpy of tuples to 3D numpy
- temp1 = []
-
- for x in data:
- temp2 = []
- for y in x :
- temp2.append([z for z in y])
- temp1.append(temp2)
-
- data = np.array(temp1)
- return data
-
-#Fonction globale
-def extract_img(location,start_date,end_date, width = 0.01 , len = 0.01,scale=5):
- """Extract an image of the landscape at the selected longitude and latitude with the selected width and length
-
- Args:
- location (list[float]): [latitude of the center of the landscape, longitude of the center of the landscape]
- start_date (str): the start date
- end_date (str): _description_
- width (float, optional): _description_. Defaults to 0.01.
- len (float, optional): _description_. Defaults to 0.01.
- scale (int, optional): _description_. Defaults to 5.
-
- Returns:
- img: image as numpy array
- """
- # reversed longitude latitude
- location = (location[1], location[0])
-    ee_img, geometry = extract_ee_img(location, start_date, end_date, width, len)
- url = get_url(ee_img, geometry, scale)
- img = extract_np_from_url(url)
-
- return img
-
-# rescale reflectance values into a displayable uint8 image
-def transform_ee_img(img, min = 0, max=0.3):
-    """Clip and rescale a numpy reflectance image to a uint8 image in [0, 255]
-
- Args:
- img (numpy array): the original image as a numpy array
- min (int, optional): _description_. Defaults to 0.
- max (float, optional): _description_. Defaults to 0.3.
-
- Returns:
-        img_test: the rescaled image as a uint8 numpy array
- """
- img=np.minimum(img*255/max,np.ones(img.shape)*255)
- img=np.uint8((np.rint(img)).astype(int))
- return img
\ No newline at end of file
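
A hedged end-to-end example of the module above. It only runs with valid Earth Engine credentials in `SERVICE_ACCOUNT_EE` / `PRIVATE_KEY_EE` (read at import time); the coordinates and dates are illustrative.

```python
# Hedged example; requires Earth Engine credentials, values are illustrative.
if __name__ == "__main__":
    paris = [48.8566, 2.3522]                       # [latitude, longitude]
    rgb = get_image(paris, "2021-06-01", "2021-08-31")
    print(rgb.shape, rgb.dtype)                     # (H, W, 3) uint8 from bands B4/B3/B2
```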
diff --git a/spaces/Enutrof/GenreClassifier/app.py b/spaces/Enutrof/GenreClassifier/app.py
deleted file mode 100644
index 0f64cfdfd3d0d26771169b709c16fe2601f14c7b..0000000000000000000000000000000000000000
--- a/spaces/Enutrof/GenreClassifier/app.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-from inference import *
-
-iface = gr.Interface(fn=inference,
- inputs=gr.inputs.Audio(source="upload", type="filepath"),
- outputs="text")
-iface.launch()
\ No newline at end of file
diff --git a/spaces/EswarBilla/EswarGenAiChatbot/app.py b/spaces/EswarBilla/EswarGenAiChatbot/app.py
deleted file mode 100644
index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000
--- a/spaces/EswarBilla/EswarGenAiChatbot/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are a helpful assistant to answer all user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
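
The chain above can be exercised directly before wiring it into `gr.ChatInterface`. A hedged sketch (needs a valid `OPENAI_API_KEY`; the `history` argument is accepted for the interface signature but unused by `get_text_response`):

```python
# Hedged local check; requires OPENAI_API_KEY to be set in the environment.
print(get_text_response("What can you help me with?", history=[]))
```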
diff --git a/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat b/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat
deleted file mode 100644
index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat
+++ /dev/null
@@ -1,322 +0,0 @@
-@echo off
-title Applio Installer
-
-::: _ _ _____ _
-::: /\ | (_) | __ \ | |
-::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \
-::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/
-::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___|
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-setlocal
-set "branch=applio-recode"
-set "runtime=runtime-recode"
-set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip"
-set "fixesFolder=fixes"
-set "localFixesPy=local_fixes.py"
-set "principal=%cd%"
-set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
-set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main"
-
-:menu
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Reinstall Applio
-echo [2] Update Applio
-echo [3] Update Applio + Runtime
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-if "%choice%"=="1" (
- cls
- echo Starting Applio Reinstaller...
-                if not speaker_id and type(speaker) is int:
-                    if len(self.spk2id.__dict__) >= speaker:
-                        speaker_id = speaker
-                if speaker_id is None:
-                    raise RuntimeError("The name you entered is not in the speaker list!")
-
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio Updater...
- echo.
- goto updater
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Updating Applio + Runtime...
- echo.
- goto updaterRuntime
- pause
- cls
- goto menu
-
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 3.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-:reinstaller
-
-echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing.
-echo.
-echo Step-by-step guide: https://rentry.org/appliolocal
-echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe
-echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe
-echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe
-echo Python: Add this path to the Windows user PATH environment variable: %principal%\runtime\Scripts
-echo.
-pause
-cls
-
-echo Downloading ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Proceeding to download the models...
-echo.
-
-echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
-pause
-cls
-
-echo Downloading models in the assets folder...
-cd "assets"
-echo.
-echo Downloading the "pretrained" folder...
-cd "pretrained"
-curl -LJO "%URL_BASE%/pretrained/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the "pretrained_v2" folder...
-cd "pretrained_v2"
-curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the hubert_base.pt file...
-cd "hubert"
-curl -LJO "%URL_BASE%/hubert_base.pt"
-cd ".."
-echo.
-cls
-
-
-echo Downloading the rmvpe.pt file...
-cd "rmvpe"
-curl -LJO "%URL_BASE%/rmvpe.pt"
-echo.
-cls
-
-echo Downloading the rmvpe.onnx file...
-curl -LJO "%URL_BASE%/rmvpe.onnx"
-cd ".."
-cd ".."
-echo.
-cls
-
-echo Downloading the rest of the large files
-
-echo Downloading the "uvr5_weights" folder...
-cd "uvr5_weights"
-curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the ffmpeg.exe file...
-curl -LJO "%URL_BASE%/ffmpeg.exe"
-echo.
-cls
-
-echo Downloading the ffprobe.exe file...
-curl -LJO "%URL_BASE%/ffprobe.exe"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Downloads completed!
-echo.
-
-echo Checking if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The "%localFixesPy%" file was not found in the "Fixes" folder.
-)
-echo.
-
-echo Fixes Applied!
-echo.
-
-echo Applio has been reinstalled!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updater
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updaterRuntime
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/GTKJF/SFE/README.md b/spaces/GTKJF/SFE/README.md
deleted file mode 100644
index cae22b107c940df82ed5e79ecffff21e9534e426..0000000000000000000000000000000000000000
--- a/spaces/GTKJF/SFE/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Panel Template
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: docker
-pinned: false
-duplicated_from: Panel-Org/panel-template
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py
deleted file mode 100644
index fa0e3972184f83a3bea359f25f53a9e69d691d3a..0000000000000000000000000000000000000000
--- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""
-Utilities for changing sampling schedules of a trained model.
-
-Simplified from: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/respace.py
-"""
-
-import numpy as np
-import torch as th
-
-from .gaussian_diffusion import GaussianDiffusion
-
-
-def space_timesteps(num_timesteps, section_counts):
- """
- Create a list of timesteps to use from an original diffusion process,
- given the number of timesteps we want to take from equally-sized portions
- of the original process.
-
- For example, if there's 300 timesteps and the section counts are [10,15,20]
- then the first 100 timesteps are strided to be 10 timesteps, the second 100
- are strided to be 15 timesteps, and the final 100 are strided to be 20.
-
- :param num_timesteps: the number of diffusion steps in the original
- process to divide up.
- :param section_counts: either a list of numbers, or a string containing
- comma-separated numbers, indicating the step count
- per section. As a special case, use "ddimN" where N
- is a number of steps to use the striding from the
- DDIM paper.
- :return: a set of diffusion steps from the original process to use.
- """
- if isinstance(section_counts, str):
- if section_counts.startswith("ddim"):
- desired_count = int(section_counts[len("ddim") :])
- for i in range(1, num_timesteps):
- if len(range(0, num_timesteps, i)) == desired_count:
- return set(range(0, num_timesteps, i))
- raise ValueError(f"cannot create exactly {num_timesteps} steps with an integer stride")
- elif section_counts == "fast27":
- steps = space_timesteps(num_timesteps, "10,10,3,2,2")
- # Help reduce DDIM artifacts from noisiest timesteps.
- steps.remove(num_timesteps - 1)
- steps.add(num_timesteps - 3)
- return steps
- section_counts = [int(x) for x in section_counts.split(",")]
- size_per = num_timesteps // len(section_counts)
- extra = num_timesteps % len(section_counts)
- start_idx = 0
- all_steps = []
- for i, section_count in enumerate(section_counts):
- size = size_per + (1 if i < extra else 0)
- if size < section_count:
- raise ValueError(f"cannot divide section of {size} steps into {section_count}")
- if section_count <= 1:
- frac_stride = 1
- else:
- frac_stride = (size - 1) / (section_count - 1)
- cur_idx = 0.0
- taken_steps = []
- for _ in range(section_count):
- taken_steps.append(start_idx + round(cur_idx))
- cur_idx += frac_stride
- all_steps += taken_steps
- start_idx += size
- return set(all_steps)
-
-
-class SpacedDiffusion(GaussianDiffusion):
- """
- A diffusion process which can skip steps in a base diffusion process.
-
- :param use_timesteps: a collection (sequence or set) of timesteps from the
- original diffusion process to retain.
- :param kwargs: the kwargs to create the base diffusion process.
- """
-
- def __init__(self, use_timesteps, **kwargs):
- self.use_timesteps = set(use_timesteps)
- self.timestep_map = []
- self.original_num_steps = len(kwargs["betas"])
-
- base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
- last_alpha_cumprod = 1.0
- new_betas = []
- for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
- if i in self.use_timesteps:
- new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
- last_alpha_cumprod = alpha_cumprod
- self.timestep_map.append(i)
- kwargs["betas"] = np.array(new_betas)
- super().__init__(**kwargs)
-
- def p_mean_variance(self, model, *args, **kwargs):
- return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
-
- def condition_mean(self, cond_fn, *args, **kwargs):
- return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)
-
- def condition_score(self, cond_fn, *args, **kwargs):
- return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
-
- def _wrap_model(self, model):
- if isinstance(model, _WrappedModel):
- return model
- return _WrappedModel(model, self.timestep_map, self.original_num_steps)
-
-
-class _WrappedModel:
- def __init__(self, model, timestep_map, original_num_steps):
- self.model = model
- self.timestep_map = timestep_map
- self.original_num_steps = original_num_steps
-
- def __call__(self, x, ts, **kwargs):
- map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
- new_ts = map_tensor[ts]
- return self.model(x, new_ts, **kwargs)
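
A worked example of `space_timesteps`, matching its docstring's 300-step case (three 100-step sections kept at 10, 15, and 20 steps), plus the `"ddimN"` shorthand:

```python
kept = space_timesteps(300, [10, 15, 20])
print(len(kept))                                      # 45 retained timesteps
print(sorted(kept)[:5])                               # [0, 11, 22, 33, 44] from the first section

# "ddimN" picks a single even stride over the whole schedule instead.
print(sorted(space_timesteps(1000, "ddim25"))[:3])    # [0, 40, 80]
```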
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py
deleted file mode 100644
index be3f01bea7c5d8e3f302d9d92ec0c6193612d78e..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class ColoredCylinderInSquare(Task):
- """Pick up five differently colored cylinder blocks and arrange them inside the square template on the tabletop. Each block should be placed along the corresponding color edge: red, blue, green, yellow, and orange."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "arrange the {color} cylinder along the {color} edge"
- self.task_completed_desc = "done arranging cylinders."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add square template.
- square_size = (0.3, 0.3, 0.005) # x, y, z dimensions for the asset size
- square_pose = self.get_random_pose(env, square_size)
- square_urdf = 'square/square-template.urdf'
- env.add_object(square_urdf, square_pose, 'fixed')
-
- # Cylinder colors.
- colors = ['red', 'blue', 'green', 'yellow', 'orange']
-
- # Add cylinders.
- cylinder_size = (0.04, 0.04, 0.08) # x, y, z dimensions for the asset size
- cylinder_urdf = 'cylinder/cylinder-template.urdf'
- cylinders = []
- for color in colors:
- cylinder_pose = self.get_random_pose(env, cylinder_size)
- cylinder_id = env.add_object(cylinder_urdf, cylinder_pose, color=utils.COLORS[color])
- cylinders.append(cylinder_id)
-
- # Associate placement locations for goals.
- place_pos = [(0.1, 0, 0.04), (-0.1, 0, 0.04), (0, 0.1, 0.04), (0, -0.1, 0.04), (0, 0, 0.04)]
- targs = [(utils.apply(square_pose, i), square_pose[1]) for i in place_pos]
-
- # Goal: each cylinder is placed along the corresponding color edge.
- for i, cylinder in enumerate(cylinders):
- self.add_goal(objs=[cylinder], matches=np.ones((1, 1)), targ_poses=[targs[i]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 5,
- language_goal=self.lang_template.format(color=colors[i]))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md
deleted file mode 100644
index 6d6474c90f1e76f80a0043d35897133ef604ce0a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# FreeAnchor: Learning to Match Anchors for Visual Object Detection
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@inproceedings{zhang2019freeanchor,
- title = {{FreeAnchor}: Learning to Match Anchors for Visual Object Detection},
- author = {Zhang, Xiaosong and Wan, Fang and Liu, Chang and Ji, Rongrong and Ye, Qixiang},
- booktitle = {Neural Information Processing Systems},
- year = {2019}
-}
-```
-
-## Results and Models
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:--------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| R-50 | pytorch | 1x | 4.9 | 18.4 | 38.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco/retinanet_free_anchor_r50_fpn_1x_coco_20200130-0f67375f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco/retinanet_free_anchor_r50_fpn_1x_coco_20200130_095625.log.json) |
-| R-101 | pytorch | 1x | 6.8 | 14.9 | 40.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco/retinanet_free_anchor_r101_fpn_1x_coco_20200130-358324e6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco/retinanet_free_anchor_r101_fpn_1x_coco_20200130_100723.log.json) |
-| X-101-32x4d | pytorch | 1x | 8.1 | 11.1 | 41.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco/retinanet_free_anchor_x101_32x4d_fpn_1x_coco_20200130-d4846968.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco/retinanet_free_anchor_x101_32x4d_fpn_1x_coco_20200130_095627.log.json) |
-
-**Notes:**
-
-- We use 8 GPUs with 2 images/GPU.
-- For more settings and models, please refer to the [official repo](https://github.com/zhangxiaosong18/FreeAnchor).
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py
deleted file mode 100644
index 9c85d26d2372ad1ab5490b4ec93dd7484dc9f6f0..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py
+++ /dev/null
@@ -1,46 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://detectron/resnet50_gn',
- backbone=dict(norm_cfg=norm_cfg),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- norm_cfg=norm_cfg),
- mask_head=dict(norm_cfg=norm_cfg)))
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py
deleted file mode 100644
index 0bf4ca57b63f5f9308bd1178ddbde5d8f06748e5..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import treetable as tt
-
-from .._base_explorers import BaseExplorer
-
-
-class DiffusionExplorer(BaseExplorer):
- eval_metrics = ["sisnr", "visqol"]
-
- def stages(self):
- return ["train", "valid", "valid_ema", "evaluate", "evaluate_ema"]
-
- def get_grid_meta(self):
- """Returns the list of Meta information to display for each XP/job.
- """
- return [
- tt.leaf("index", align=">"),
- tt.leaf("name", wrap=140),
- tt.leaf("state"),
- tt.leaf("sig", align=">"),
- ]
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table.
- """
- return [
- tt.group(
- "train",
- [
- tt.leaf("epoch"),
- tt.leaf("loss", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "valid",
- [
- tt.leaf("loss", ".3%"),
- # tt.leaf("loss_0", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "valid_ema",
- [
- tt.leaf("loss", ".3%"),
- # tt.leaf("loss_0", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "evaluate", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"),
- tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"),
- tt.leaf("rvm_3", ".4f"), ], align=">"
- ),
- tt.group(
- "evaluate_ema", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"),
- tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"),
- tt.leaf("rvm_3", ".4f")], align=">"
- ),
- ]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py
deleted file mode 100644
index 334836b72559a120feb8a15eef3fe96ce88a4edb..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import treetable as tt
-
-from .._base_explorers import BaseExplorer
-
-
-class LMExplorer(BaseExplorer):
- eval_metrics: tp.List[str] = []
-
- def stages(self) -> tp.List[str]:
- return ['train', 'valid']
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table."""
- return [
- tt.group(
- 'train',
- [
- tt.leaf('epoch'),
- tt.leaf('duration', '.1f'), # duration in minutes
- tt.leaf('ping'),
- tt.leaf('ce', '.4f'), # cross entropy
- tt.leaf("ppl", '.3f'), # perplexity
- ],
- align='>',
- ),
- tt.group(
- 'valid',
- [
- tt.leaf('ce', '.4f'),
- tt.leaf('ppl', '.3f'),
- tt.leaf('best_ppl', '.3f'),
- ],
- align='>',
- ),
- ]
-
- def process_sheep(self, sheep, history):
- parts = super().process_sheep(sheep, history)
-
- track_by = {'ppl': 'lower'} # values should be in ['lower', 'higher']
- best_metrics = {k: (1 if v == 'lower' else -1) * float('inf') for k, v in track_by.items()}
-
- def comparator(mode, a, b):
- return a < b if mode == 'lower' else a > b
-
- for metrics in history:
- for key, sub in metrics.items():
- for metric in track_by:
- # for the validation set, keep track of best metrics (ppl in this example)
- # this is so we can conveniently compare metrics between runs in the grid
- if key == 'valid' and metric in sub and comparator(
- track_by[metric], sub[metric], best_metrics[metric]
- ):
- best_metrics[metric] = sub[metric]
-
- if 'valid' in parts:
- parts['valid'].update({f'best_{k}': v for k, v in best_metrics.items()})
- return parts
-
-
-class GenerationEvalExplorer(BaseExplorer):
- eval_metrics: tp.List[str] = []
-
- def stages(self) -> tp.List[str]:
- return ['evaluate']
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table."""
- return [
- tt.group(
- 'evaluate',
- [
- tt.leaf('epoch', '.3f'),
- tt.leaf('duration', '.1f'),
- tt.leaf('ping'),
- tt.leaf('ce', '.4f'),
- tt.leaf('ppl', '.3f'),
- tt.leaf('fad', '.3f'),
- tt.leaf('kld', '.3f'),
- tt.leaf('text_consistency', '.3f'),
- tt.leaf('chroma_cosine', '.3f'),
- ],
- align='>',
- ),
- ]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py
deleted file mode 100644
index d55107b2c11822cab749ed3683cf19020802898a..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Loss related classes and functions. In particular the loss balancer from
-EnCodec, and the usual spectral losses."""
-
-# flake8: noqa
-from .balancer import Balancer
-from .sisnr import SISNR
-from .stftloss import (
- LogSTFTMagnitudeLoss,
- MRSTFTLoss,
- SpectralConvergenceLoss,
- STFTLoss
-)
-from .specloss import (
- MelSpectrogramL1Loss,
- MultiScaleMelSpectrogramLoss,
-)
diff --git a/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py b/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py
deleted file mode 100644
index df81d0ffa449baba56be359dd88f02e5ce82f4f8..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py
+++ /dev/null
@@ -1,550 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-import gc
-
-import librosa
-import numpy as np
-# import onnxruntime
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-import utils
-from models import SynthesizerTrn
-import pickle
-
-from diffusion.unit2mel import load_model_vocoder
-import yaml
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
-            print(f"{file_name} error, rebuilding file automatically")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
- print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
- yield list_collection[i - pre if i - pre >= 0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
-
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt",
- nsf_hifigan_enhance=False,
- diffusion_model_path="logs/44k/diffusion/model_0.pt",
- diffusion_config_path="configs/diffusion.yaml",
- shallow_diffusion=False,
- only_diffusion=False,
- spk_mix_enable=False,
- feature_retrieval=False
- ):
- self.net_g_path = net_g_path
- self.only_diffusion = only_diffusion
- self.shallow_diffusion = shallow_diffusion
- self.feature_retrieval = feature_retrieval
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- if not self.only_diffusion:
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
- try:
- self.vol_embedding = self.hps_ms.model.vol_embedding
- except Exception as e:
- self.vol_embedding = False
- try:
- self.speech_encoder = self.hps_ms.model.speech_encoder
- except Exception as e:
- self.speech_encoder = 'vec768l12'
-
- self.nsf_hifigan_enhance = nsf_hifigan_enhance
- if self.shallow_diffusion or self.only_diffusion:
-            if os.path.exists(diffusion_model_path) and os.path.exists(diffusion_config_path):
- self.diffusion_model, self.vocoder, self.diffusion_args = load_model_vocoder(diffusion_model_path,
- self.dev,
- config_path=diffusion_config_path)
- if self.only_diffusion:
- self.target_sample = self.diffusion_args.data.sampling_rate
- self.hop_size = self.diffusion_args.data.block_size
- self.spk2id = self.diffusion_args.spk
- self.speech_encoder = self.diffusion_args.data.encoder
- if spk_mix_enable:
- self.diffusion_model.init_spkmix(len(self.spk2id))
- else:
-                print("No diffusion model or config found. Disabling shallow diffusion mode.")
- self.shallow_diffusion = self.only_diffusion = False
-
- # load hubert and model
- if not self.only_diffusion:
- self.load_model(spk_mix_enable)
- self.hubert_model = utils.get_speech_encoder(self.speech_encoder, device=self.dev)
- self.volume_extractor = utils.Volume_Extractor(self.hop_size)
- else:
- self.hubert_model = utils.get_speech_encoder(self.diffusion_args.data.encoder, device=self.dev)
- self.volume_extractor = utils.Volume_Extractor(self.diffusion_args.data.block_size)
-
- if os.path.exists(cluster_model_path):
- if self.feature_retrieval:
- with open(cluster_model_path, "rb") as f:
- self.cluster_model = pickle.load(f)
- self.big_npy = None
- self.now_spk_id = -1
- else:
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
- else:
- self.feature_retrieval = False
-
- if self.shallow_diffusion: self.nsf_hifigan_enhance = False
- if self.nsf_hifigan_enhance:
- from modules.enhancer import Enhancer
- self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model', device=self.dev)
-
- def load_model(self, spk_mix_enable=False):
- # get model configuration
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
- if spk_mix_enable:
- self.net_g_ms.EnableCharacterMix(len(self.spk2id), self.dev)
-
- def get_unit_f0(self, wav, tran, cluster_infer_ratio, speaker, f0_filter, f0_predictor, cr_threshold=0.05):
-
- f0_predictor_object = utils.get_f0_predictor(f0_predictor, hop_length=self.hop_size,
- sampling_rate=self.target_sample, device=self.dev,
- threshold=cr_threshold)
-
- f0, uv = f0_predictor_object.compute_f0_uv(wav)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(f0).to(self.dev)
- uv = torch.FloatTensor(uv).to(self.dev)
-
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0)
- uv = uv.unsqueeze(0)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = self.hubert_model.encoder(wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
- if cluster_infer_ratio != 0:
- if self.feature_retrieval:
- speaker_id = self.spk2id.get(speaker)
- if speaker_id is None:
- raise RuntimeError("The name you entered is not in the speaker list!")
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- feature_index = self.cluster_model[speaker_id]
- feat_np = c.transpose(0, 1).cpu().numpy()
- if self.big_npy is None or self.now_spk_id != speaker_id:
- self.big_npy = feature_index.reconstruct_n(0, feature_index.ntotal)
- self.now_spk_id = speaker_id
- print("starting feature retrieval...")
- score, ix = feature_index.search(feat_np, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- c = cluster_infer_ratio * npy + (1 - cluster_infer_ratio) * feat_np
- c = torch.FloatTensor(c).to(self.dev).transpose(0, 1)
- print("end feature retrieval...")
- else:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- f0_predictor='pm',
- enhancer_adaptive_key=0,
- cr_threshold=0.05,
- k_step=100,
- frame=0,
- spk_mix=False,
- second_encoding=False,
- loudness_envelope_adjustment=1
- ):
- wav, sr = librosa.load(raw_path, sr=self.target_sample)
- if spk_mix:
- c, f0, uv = self.get_unit_f0(wav, tran, 0, None, f0_filter, f0_predictor, cr_threshold=cr_threshold)
- n_frames = f0.size(1)
- sid = speaker[:, frame:frame + n_frames].transpose(0, 1)
- else:
- speaker_id = self.spk2id.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- if speaker_id is None:
- raise RuntimeError("The name you entered is not in the speaker list!")
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- c, f0, uv = self.get_unit_f0(wav, tran, cluster_infer_ratio, speaker, f0_filter, f0_predictor,
- cr_threshold=cr_threshold)
- n_frames = f0.size(1)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- vol = None
- if not self.only_diffusion:
- vol = self.volume_extractor.extract(torch.FloatTensor(wav).to(self.dev)[None, :])[None, :].to(
- self.dev) if self.vol_embedding else None
- audio, f0 = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0,
- noice_scale=noice_scale, vol=vol)
- audio = audio[0, 0].data.float()
- audio_mel = self.vocoder.extract(audio[None, :], self.target_sample) if self.shallow_diffusion else None
- else:
- audio = torch.FloatTensor(wav).to(self.dev)
- audio_mel = None
- if self.only_diffusion or self.shallow_diffusion:
-                vol = self.volume_extractor.extract(audio[None, :])[None, :, None].to(self.dev) if vol is None else vol[:, :, None]
- if self.shallow_diffusion and second_encoding:
- audio16k = librosa.resample(audio.detach().cpu().numpy(), orig_sr=self.target_sample,
- target_sr=16000)
- audio16k = torch.from_numpy(audio16k).to(self.dev)
- c = self.hubert_model.encoder(audio16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
- f0 = f0[:, :, None]
- c = c.transpose(-1, -2)
- audio_mel = self.diffusion_model(
- c,
- f0,
- vol,
- spk_id=sid,
- spk_mix_dict=None,
- gt_spec=audio_mel,
- infer=True,
- infer_speedup=self.diffusion_args.infer.speedup,
- method=self.diffusion_args.infer.method,
- k_step=k_step)
- audio = self.vocoder.infer(audio_mel, f0).squeeze()
- if self.nsf_hifigan_enhance:
- audio, _ = self.enhancer.enhance(
- audio[None, :],
- self.target_sample,
- f0[:, :, None],
- self.hps_ms.data.hop_length,
- adaptive_key=enhancer_adaptive_key)
- if loudness_envelope_adjustment != 1:
- audio = utils.change_rms(wav, self.target_sample, audio, self.target_sample,
- loudness_envelope_adjustment)
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1], n_frames
-
- def clear_empty(self):
- # clean up vram
- torch.cuda.empty_cache()
-
- def unload_model(self):
- # unload model
- self.net_g_ms = self.net_g_ms.to("cpu")
- del self.net_g_ms
- if hasattr(self, "enhancer"):
- self.enhancer.enhancer = self.enhancer.enhancer.to("cpu")
- del self.enhancer.enhancer
- del self.enhancer
- gc.collect()
-
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num=0.75,
- f0_predictor='pm',
- enhancer_adaptive_key=0,
- cr_threshold=0.05,
- k_step=100,
- use_spk_mix=False,
- second_encoding=False,
- loudness_envelope_adjustment=1
- ):
- if use_spk_mix:
- if len(self.spk2id) == 1:
-                spk = list(self.spk2id.keys())[0]
- use_spk_mix = False
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds * audio_sr)
- lg_size = int(lg_num * audio_sr)
- lg_size_r = int(lg_size * lgr_num)
- lg_size_c_l = (lg_size - lg_size_r) // 2
- lg_size_c_r = lg_size - lg_size_r - lg_size_c_l
- lg = np.linspace(0, 1, lg_size_r) if lg_size != 0 else 0
-
- if use_spk_mix:
- assert len(self.spk2id) == len(spk)
- audio_length = 0
- for (slice_tag, data) in audio_data:
- aud_length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- audio_length += aud_length // self.hop_size
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size, lg_size)
- else:
- datas = [data]
- for k, dat in enumerate(datas):
- pad_len = int(audio_sr * pad_seconds)
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample))
- a_length = per_length + 2 * pad_len
- audio_length += a_length // self.hop_size
- audio_length += len(audio_data)
- spk_mix_tensor = torch.zeros(size=(len(spk), audio_length)).to(self.dev)
- for i in range(len(spk)):
- last_end = None
- for mix in spk[i]:
- if mix[3] < 0. or mix[2] < 0.:
- raise RuntimeError("mix value must higer Than zero!")
- begin = int(audio_length * mix[0])
- end = int(audio_length * mix[1])
- length = end - begin
- if length <= 0:
- raise RuntimeError("begin Must lower Than end!")
- step = (mix[3] - mix[2]) / length
- if last_end is not None:
- if last_end != begin:
- raise RuntimeError("[i]EndTime Must Equal [i+1]BeginTime!")
- last_end = end
- if step == 0.:
- spk_mix_data = torch.zeros(length).to(self.dev) + mix[2]
- else:
- spk_mix_data = torch.arange(mix[2], mix[3], step).to(self.dev)
- if (len(spk_mix_data) < length):
- num_pad = length - len(spk_mix_data)
- spk_mix_data = torch.nn.functional.pad(spk_mix_data, [0, num_pad], mode="reflect").to(self.dev)
- spk_mix_tensor[i][begin:end] = spk_mix_data[:length]
-
- spk_mix_ten = torch.sum(spk_mix_tensor, dim=0).unsqueeze(0).to(self.dev)
- # spk_mix_tensor[0][spk_mix_ten<0.001] = 1.0
- for i, x in enumerate(spk_mix_ten[0]):
- if x == 0.0:
- spk_mix_ten[0][i] = 1.0
- spk_mix_tensor[:, i] = 1.0 / len(spk)
- spk_mix_tensor = spk_mix_tensor / spk_mix_ten
- if not ((torch.sum(spk_mix_tensor, dim=0) - 1.) < 0.0001).all():
- raise RuntimeError("sum(spk_mix_tensor) not equal 1")
- spk = spk_mix_tensor
-
- global_frame = 0
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-            # pad
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
-                print('skip empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- global_frame += length // self.hop_size
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size, lg_size)
- else:
- datas = [data]
- for k, dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds != 0 else length
- if clip_seconds != 0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                # pad
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr, out_frame = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_predictor=f0_predictor,
- enhancer_adaptive_key=enhancer_adaptive_key,
- cr_threshold=cr_threshold,
- k_step=k_step,
- frame=global_frame,
- spk_mix=use_spk_mix,
- second_encoding=second_encoding,
- loudness_envelope_adjustment=loudness_envelope_adjustment
- )
- global_frame += out_frame
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
- if lg_size != 0 and k != 0:
- lg1 = audio[-(lg_size_r + lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l + lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1 * (1 - lg) + lg2 * lg
- audio = audio[0:-(lg_size_r + lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l + lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
- self.chunk_len = 16000 # chunk length
- self.pre_len = 3840 # cross fade length, multiples of 640
-
- # Input and output are 1-dimensional numpy waveform arrays
-
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
-            # infer() returns (audio, audio_length, n_frames); only the audio is needed here
-            audio, _, _ = svc_model.infer(
-                speaker_id, f_pitch_change, input_wav_path,
-                cluster_infer_ratio=cluster_infer_ratio,
-                auto_predict_f0=auto_predict_f0,
-                noice_scale=noice_scale,
-                f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
-            # infer() returns (audio, audio_length, n_frames); only the audio is needed here
-            audio, _, _ = svc_model.infer(
-                speaker_id, f_pitch_change, temp_wav,
-                cluster_infer_ratio=cluster_infer_ratio,
-                auto_predict_f0=auto_predict_f0,
-                noice_scale=noice_scale,
-                f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
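-
-
-if __name__ == "__main__":
-    # Minimal usage sketch, added purely as an illustration: it assumes the
-    # enclosing class above is named Svc and that its constructor accepts
-    # (net_g_path, config_path); the paths below are placeholders.
-    svc = Svc("logs/44k/G_30000.pth", "configs/config.json")
-    out_audio = svc.slice_inference(
-        "raw/input.wav",  # raw_audio_path
-        "speaker0",       # spk: a key of svc.spk2id
-        0,                # tran: pitch shift in semitones
-        -40,              # slice_db: silence threshold used by the slicer
-        0,                # cluster_infer_ratio
-        False,            # auto_predict_f0
-        0.4,              # noice_scale
-    )
-    soundfile.write("results/output.wav", out_audio, svc.target_sample)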
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh
deleted file mode 100644
index 04b97b5fe5123af3170523dfde0ae008a78b2428..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_base_cluener # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL          # send email when the job begins, ends, or fails
-#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_cluener/%x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-# export CUDA_VISIBLE_DEVICES='2'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_base
-
-TASK=cluener
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CLUENER/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.char.txt \
- --valid_data dev.char.txt \
- --test_data dev.char.txt \
- --train_batchsize 32 \
- --valid_batchsize 16 \
- --max_seq_length 256 \
- --task_name cluener \
- "
-
-MODEL_ARGS="\
- --learning_rate 3e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --markup bio \
- --middle_prefix I- \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_f1 \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_f1:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 30 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 100 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md
deleted file mode 100644
index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# Neural Machine Translation with Byte-Level Subwords
-
-https://arxiv.org/abs/1909.03341
-
-We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as an
-example.
-
-## Data
-Get data and generate fairseq binary dataset:
-```bash
-bash ./get_data.sh
-```
-
-## Model Training
-Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`):
-```bash
-# VOCAB=bytes
-# VOCAB=chars
-VOCAB=bbpe2048
-# VOCAB=bpe2048
-# VOCAB=bbpe4096
-# VOCAB=bpe4096
-# VOCAB=bpe16384
-```
-```bash
-fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
- --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9, 0.98)' \
- --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \
- --batch-size 100 --max-update 100000 --update-freq 2
-```
-
-## Generation
-`fairseq-generate` requires the bytes (BBPE) decoder to convert the byte-level representation back to characters:
-```bash
-# BPE=--bpe bytes
-# BPE=--bpe characters
-BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model
-# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model
-```
-
-```bash
-fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
- --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \
- --tokenizer moses --moses-target-lang en ${BPE}
-```
-When using `fairseq-interactive`, the bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions:
-```bash
-fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
- --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \
- --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000
-```
-
-## Results
-| Vocabulary | Model | BLEU |
-|:-------------:|:-------------:|:-------------:|
-| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 |
-| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) |
-| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) |
-| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) |
-| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) |
-| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) |
-| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) |
-| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) |
-
-
-## Citation
-```
-@misc{wang2019neural,
- title={Neural Machine Translation with Byte-Level Subwords},
- author={Changhan Wang and Kyunghyun Cho and Jiatao Gu},
- year={2019},
- eprint={1909.03341},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
-}
-```
-
-
-## Contact
-Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)),
-Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)),
-Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com))
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py
deleted file mode 100644
index b41bfbe38789ba14e6a5ea938c75d761424c00ab..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import argparse
-import glob
-
-import numpy as np
-
-
-DIM = 1024
-
-
-def compute_dist(source_embs, target_embs, k=5, return_sim_mat=False):
- target_ids = [tid for tid in target_embs]
- source_mat = np.stack(source_embs.values(), axis=0)
- normalized_source_mat = source_mat / np.linalg.norm(
- source_mat, axis=1, keepdims=True
- )
- target_mat = np.stack(target_embs.values(), axis=0)
- normalized_target_mat = target_mat / np.linalg.norm(
- target_mat, axis=1, keepdims=True
- )
- sim_mat = normalized_source_mat.dot(normalized_target_mat.T)
- if return_sim_mat:
- return sim_mat
- neighbors_map = {}
- for i, sentence_id in enumerate(source_embs):
- idx = np.argsort(sim_mat[i, :])[::-1][:k]
- neighbors_map[sentence_id] = [target_ids[tid] for tid in idx]
- return neighbors_map
-
-
-def load_embeddings(directory, LANGS):
- sentence_embeddings = {}
- sentence_texts = {}
- for lang in LANGS:
- sentence_embeddings[lang] = {}
- sentence_texts[lang] = {}
- lang_dir = f"{directory}/{lang}"
- embedding_files = glob.glob(f"{lang_dir}/all_avg_pool.{lang}.*")
- for embed_file in embedding_files:
- shard_id = embed_file.split(".")[-1]
- embeddings = np.fromfile(embed_file, dtype=np.float32)
- num_rows = embeddings.shape[0] // DIM
- embeddings = embeddings.reshape((num_rows, DIM))
-
- with open(f"{lang_dir}/sentences.{lang}.{shard_id}") as sentence_file:
- for idx, line in enumerate(sentence_file):
- sentence_id, sentence = line.strip().split("\t")
- sentence_texts[lang][sentence_id] = sentence
- sentence_embeddings[lang][sentence_id] = embeddings[idx, :]
-
- return sentence_embeddings, sentence_texts
-
-
-def compute_accuracy(directory, LANGS):
- sentence_embeddings, sentence_texts = load_embeddings(directory, LANGS)
-
- top_1_accuracy = {}
-
- top1_str = " ".join(LANGS) + "\n"
- for source_lang in LANGS:
- top_1_accuracy[source_lang] = {}
- top1_str += f"{source_lang} "
- for target_lang in LANGS:
- top1 = 0
- top5 = 0
- neighbors_map = compute_dist(
- sentence_embeddings[source_lang], sentence_embeddings[target_lang]
- )
- for sentence_id, neighbors in neighbors_map.items():
- if sentence_id == neighbors[0]:
- top1 += 1
- if sentence_id in neighbors[:5]:
- top5 += 1
- n = len(sentence_embeddings[target_lang])
- top1_str += f"{top1/n} "
- top1_str += "\n"
-
- print(top1_str)
- print(top1_str, file=open(f"{directory}/accuracy", "w"))
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Analyze encoder outputs")
- parser.add_argument("directory", help="Source language corpus")
- parser.add_argument("--langs", help="List of langs")
- args = parser.parse_args()
- langs = args.langs.split(",")
- compute_accuracy(args.directory, langs)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py
deleted file mode 100644
index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py
deleted file mode 100644
index 50683e6d7c8c0db5b8f019e5f7f5fb8c6dfd9f66..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py
+++ /dev/null
@@ -1,585 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import copy
-
-import torch.nn as nn
-from fairseq import checkpoint_utils
-from fairseq import utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- register_model,
- register_model_architecture,
- FairseqEncoder,
-)
-from fairseq.models.speech_to_text import XMTransformerModel, Wav2VecEncoderWithAdaptor
-from fairseq.models.speech_to_text.xm_transformer import (
- set_default_adaptor_args,
- set_default_w2v_encoder_args,
-)
-from fairseq.models.transformer import TransformerEncoder, TransformerDecoder
-from fairseq.models.wav2vec import TransformerSentenceEncoderLayer
-from fairseq.utils import safe_hasattr
-
-from .s2t_dualinputtransformer import (
- DualInputS2TTransformerModel,
- TransformerMultiInputDecoder,
- DualInputEncoder,
-)
-
-
-class TransformerSentenceEncoderLayerStd(TransformerSentenceEncoderLayer):
- def __init__(self, sent_enc_layer):
- super(TransformerSentenceEncoderLayer, self).__init__()
- self.embedding_dim = sent_enc_layer.embedding_dim
- self.dropout = sent_enc_layer.dropout
- self.activation_dropout = sent_enc_layer.activation_dropout
-
- # Initialize blocks
- self.activation_fn = sent_enc_layer.activation_fn
- self.self_attn = sent_enc_layer.self_attn
-
- self.dropout1 = sent_enc_layer.dropout1
- self.dropout2 = sent_enc_layer.dropout2
- self.dropout3 = sent_enc_layer.dropout3
-
- self.layer_norm_first = sent_enc_layer.layer_norm_first
-
- # layer norm associated with the self attention layer
- self.self_attn_layer_norm = sent_enc_layer.self_attn_layer_norm
- self.fc1 = sent_enc_layer.fc1
- self.fc2 = sent_enc_layer.fc2
-
- # layer norm associated with the position wise feed-forward NN
- self.final_layer_norm = sent_enc_layer.final_layer_norm
-
- def forward(
- self,
- x,
- self_attn_mask=None,
- self_attn_padding_mask=None,
- need_weights=None,
- att_args=None,
- ):
- x, attn = super().forward(
- x, self_attn_mask, self_attn_padding_mask, need_weights, att_args
- )
- return x
-
-
-# TODO retire SharedEncoder
-class SharedEncoder(FairseqEncoder):
- def __init__(self, wav2vec_enc, mbart_enc, adaptor, shared_layers):
- super().__init__(None)
- self.w2v_encoder = wav2vec_enc
- self.shared_layers = self.w2v_encoder.w2v_model.encoder.layers[-shared_layers:]
- self.w2v_encoder.w2v_model.encoder.layers = (
- self.w2v_encoder.w2v_model.encoder.layers[:-shared_layers]
- )
- self.adaptor = adaptor
- if self.shared_layers[-1].layer_norm_first:
- self.final_layer_norm = mbart_enc.layer_norm
- else:
- mbart_enc.layer_norm = None
- self.final_layer_norm = None
- shared_layer_from = len(mbart_enc.layers) - shared_layers
- if shared_layer_from < 0:
- shared_layer_from = 0
- for layer_id, layer in enumerate(self.shared_layers):
- mbart_enc.layers[
- shared_layer_from + layer_id
- ] = TransformerSentenceEncoderLayerStd(layer)
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- padding_mask = lengths_to_padding_mask(src_lengths)
- if not padding_mask.any():
- padding_mask = None
-
- out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True)
- x = out["encoder_out"]
- enc_padding_mask = None
- if out["encoder_padding_mask"] is not None:
- enc_padding_mask = out["encoder_padding_mask"].transpose(
- 0, 1
- ) # T X B --> B X T
-
- x, enc_padding_mask = self.adaptor(x, enc_padding_mask)
- for layer in self.shared_layers:
- x, _ = layer(x, enc_padding_mask)
- if self.final_layer_norm is not None:
- x = self.final_layer_norm(x)
-
- return {
- "encoder_out": [x], # T x B x C
- "encoder_padding_mask": [enc_padding_mask]
- if enc_padding_mask is not None
- else [], # B x T
- "encoder_embedding": [], # B x T x C
- "encoder_states": [], # List[T x B x C]
- "src_tokens": [],
- "src_lengths": [],
- }
-
-
-class StackedWav2VecEncoderWithAdaptor(FairseqEncoder):
- def __init__(
- self,
- wav2vec_enc,
- mbart_enc_layers,
- mbart_layer_norm,
- adaptor,
- drop_w2v_layers=0,
- ):
- super().__init__(None)
- self.w2v_encoder = wav2vec_enc
- self.adaptor = adaptor
- self.mbart_encoder_layers = mbart_enc_layers
- self.final_layer_norm = mbart_layer_norm
- if drop_w2v_layers > 0:
- self.w2v_encoder.w2v_model.encoder.layers = (
- self.w2v_encoder.w2v_model.encoder.layers[:-drop_w2v_layers]
- )
-
- def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs):
- padding_mask = lengths_to_padding_mask(src_lengths)
- if not padding_mask.any():
- padding_mask = None
-
- out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True)
- x = out["encoder_out"]
- enc_padding_mask = None
- if out["encoder_padding_mask"] is not None:
- enc_padding_mask = out["encoder_padding_mask"].transpose(
- 0, 1
- ) # T X B --> B X T
-
- x, enc_padding_mask = self.adaptor(x, enc_padding_mask)
- encoder_states = []
- for layer in self.mbart_encoder_layers:
- x = layer(x, enc_padding_mask)
- if return_all_hiddens:
- encoder_states.append(x)
- if self.final_layer_norm is not None:
- x = self.final_layer_norm(x)
-
- return {
- "encoder_out": [x], # T x B x C
- "encoder_padding_mask": [enc_padding_mask]
- if enc_padding_mask is not None
- else [], # B x T
- "encoder_embedding": [], # B x T x C
- "encoder_states": encoder_states, # List[T x B x C]
- "src_tokens": [],
- "src_lengths": [],
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- new_encoder_out = (
- []
- if len(encoder_out["encoder_out"]) == 0
- else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]]
- )
-
- new_encoder_padding_mask = (
- []
- if len(encoder_out["encoder_padding_mask"]) == 0
- else [
- x.index_select(0, new_order)
- for x in encoder_out["encoder_padding_mask"]
- ]
- )
-
- new_encoder_embedding = (
- []
- if len(encoder_out["encoder_embedding"]) == 0
- else [
- x.index_select(0, new_order) for x in encoder_out["encoder_embedding"]
- ]
- )
-
- encoder_states = encoder_out["encoder_states"]
- if len(encoder_states) > 0:
- for idx, state in enumerate(encoder_states):
- encoder_states[idx] = state.index_select(1, new_order)
-
- return {
- "encoder_out": new_encoder_out, # T x B x C
- "encoder_padding_mask": new_encoder_padding_mask, # B x T
- "encoder_embedding": new_encoder_embedding, # B x T x C
- "encoder_states": encoder_states, # List[T x B x C]
- "src_tokens": [], # B x T
- "src_lengths": [], # B x 1
- }
-
-
-# Note:
-# dual input transformer:
-# encoder: wav2vec for speech + mbart encoder for text
-# decoder: mbart decoder for text
-@register_model("dual_input_xm_transformer")
-class DualInputXMTransformerModel(DualInputS2TTransformerModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # wav2vec encoder
- Wav2VecEncoderWithAdaptor.add_args(parser)
- # add_decoder_args(parser)
- # mbart Transformer
- parser.add_argument(
- "--activation-fn",
- type=str,
- default="relu",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
-
- parser.add_argument(
- "--mbart-dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--mbart-attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--mbart-activation-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN.",
- )
-
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
-
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--no-scale-embedding",
- action="store_true",
- help="if True, dont scale embeddings",
- )
- parser.add_argument(
- "--load-pretrained-mbart-from",
- type=str,
- metavar="STR",
- help="model to take text encoder decoder weights from (for initialization)",
- )
- # parser.add_argument("--finetune-w2v-params", type=str, metavar="STR",
- # help="comma-separated param strings to finetune.")
- parser.add_argument(
- "--finetune-mbart-decoder-params",
- type=str,
- metavar="STR",
- help="comma-separated param strings to finetune.",
- )
- parser.add_argument(
- "--finetune-mbart-encoder-params",
- type=str,
- metavar="STR",
- help="comma-separated param strings to finetune.",
- )
- parser.add_argument(
- "--skip-encoder-projection",
- action="store_true",
- help="skip the projection layer in encoder",
- )
-
- parser.add_argument(
- "--enc-grad-mult",
- type=float,
- metavar="V",
- default=1.0,
- help="multiply enc1 and enc2 gradient by V",
- )
- parser.add_argument(
- "--enc2-along-grad-mult",
- type=float,
- metavar="V",
- default=1.0,
- help="multiply enc2 gradient by V if only enc2 is used",
- )
- parser.add_argument(
- "--text-input-cost-ratio",
- type=float,
- default=1.0,
- metavar="V",
- help="text input cost ratio relative to speech input cost",
- )
- parser.add_argument(
- "--stack-w2v-mbart-encoder",
- action="store_true",
- help="stack w2v and mbart encoder",
- )
- parser.add_argument(
- "--stack-w2v-mbart-nonorm-encoder",
- action="store_true",
- help="stack w2v and mbart encoder",
- )
- parser.add_argument(
- "--no-final-norm-decoder", action="store_true", help="no layer norm"
- )
- parser.add_argument(
- "--drop-w2v-layers",
- type=int,
- default=0,
- metavar="N",
- help="drop w2v encoder layers",
- )
-
- parser.add_argument(
- "--share-w2v-text-encoder",
- action="store_true",
- help="share w2v encoder layers with text encoder",
- )
- parser.add_argument(
- "--shared-w2v-layers",
- type=int,
- default=0,
- metavar="N",
- help="shared encoder layers from w2v encoder",
- )
-
- @classmethod
- def build_encoder(cls, args, task):
- _args = copy.deepcopy(args)
- _args.dropout = args.mbart_dropout
- _args.attention_dropout = args.mbart_attention_dropout
- _args.activation_dropout = args.mbart_activation_dropout
- _args.max_source_positions = 1024
- enc_emb = nn.Embedding(
- len(task.src_dict), _args.encoder_embed_dim, task.src_dict.pad()
- )
- text_encoder = TransformerEncoder(_args, task.src_dict, enc_emb)
- spch_encoder = Wav2VecEncoderWithAdaptor(args)
- if getattr(args, "load_pretrained_mbart_from", None):
- text_encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=text_encoder, checkpoint=args.load_pretrained_mbart_from
- )
- if getattr(args, "stack_w2v_mbart_encoder", False):
- assert getattr(args, "share_w2v_text_encoder", False) is False
- spch_encoder = StackedWav2VecEncoderWithAdaptor(
- spch_encoder.w2v_encoder,
- text_encoder.layers,
- text_encoder.layer_norm,
- spch_encoder.adaptor,
- args.drop_w2v_layers,
- )
- elif getattr(args, "stack_w2v_mbart_nonorm_encoder", False):
- text_encoder.layer_norm = None
- spch_encoder = StackedWav2VecEncoderWithAdaptor(
- spch_encoder.w2v_encoder,
- text_encoder.layers,
- text_encoder.layer_norm,
- spch_encoder.adaptor,
- args.drop_w2v_layers,
- )
- elif getattr(args, "share_w2v_text_encoder", False):
- spch_encoder = SharedEncoder(
- spch_encoder.w2v_encoder,
- text_encoder,
- spch_encoder.adaptor,
- args.shared_w2v_layers,
- )
-
- for k, p in spch_encoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_w2v_params"
- ) and XMTransformerModel.finetune_params(args.finetune_w2v_params, k):
- p.requires_grad = True
- else:
- p.requires_grad = False
- for k, p in text_encoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_mbart_encoder_params"
- ) and XMTransformerModel.finetune_params(
- args.finetune_mbart_encoder_params, k
- ):
- p.requires_grad = True
- else:
- p.requires_grad = False
- cross_attentive_loss_before_last_layer = (
- 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1
- )
- encoder = DualInputEncoder(
- args,
- spch_encoder,
- text_encoder,
- task.src_dict,
- cross_attentive_loss_before_last_layer,
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task):
- _args = copy.deepcopy(args)
- _args.dropout = args.mbart_dropout
- _args.attention_dropout = args.mbart_attention_dropout
- _args.activation_dropout = args.mbart_activation_dropout
- _args.max_target_positions = 1024
- dec_emb = nn.Embedding(
- len(task.tgt_dict), _args.encoder_embed_dim, task.tgt_dict.pad()
- )
- decoder = TransformerDecoder(_args, task.tgt_dict, dec_emb)
- if getattr(args, "load_pretrained_mbart_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_mbart_from
- )
- if getattr(args, "no_final_norm_decoder", False):
- decoder.layer_norm = None
- for k, p in decoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_mbart_decoder_params"
- ) and XMTransformerModel.finetune_params(
- args.finetune_mbart_decoder_params, k
- ):
- p.requires_grad = True
- else:
- p.requires_grad = False
-
- compute_cross_attentive_loss = (
- True if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else False
- )
- cross_attentive_loss_without_norm = getattr(
- args, "attentive_cost_without_normalize", False
- )
- cross_attentive_loss_reverse = (
- False # getattr(args, "attentive_cost_reverse", False)
- )
- decoder = TransformerMultiInputDecoder(
- dictionary=task.target_dictionary,
- spch_decoder=decoder,
- text_decoder=decoder,
- compute_cross_attentive_loss=compute_cross_attentive_loss,
- cross_attentive_loss_with_norm=True
- if not cross_attentive_loss_without_norm
- else False,
- cross_attentive_loss_reverse=cross_attentive_loss_reverse,
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- # make sure that all args are properly defaulted
- # (in case there are any new ones)
- dualinputxmtransformer_base(args)
-
- encoder = cls.build_encoder(args, task)
- decoder = cls.build_decoder(args, task)
- return cls(encoder, decoder)
-
-
-@register_model_architecture("dual_input_xm_transformer", "dualinputxmtransformer_base")
-def dualinputxmtransformer_base(args):
- # wav2vec encoder
- set_default_w2v_encoder_args(args)
- set_default_adaptor_args(args)
-
- # mbart model
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(
- args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim
- )
- args.encoder_layers = getattr(args, "encoder_layers", 12)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True)
- args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True)
-
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024)
- args.decoder_layers = getattr(args, "decoder_layers", 12)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True)
- args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0)
-
- args.adaptive_input = getattr(args, "adaptive_input", False)
-
- args.mbart_attention_dropout = getattr(args, "mbart_attention_dropout", 0.0)
- args.mbart_activation_dropout = getattr(args, "mbart_activation_dropout", 0.0)
- args.mbart_dropout = getattr(args, "mbart_dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", True
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
- args.quant_noise_pq = getattr(args, "quant_noise_pq", 0)
- args.layernorm_embedding = getattr(args, "layernorm_embedding", True)
-
- args.activation_fn = getattr(args, "activation_fn", "gelu")
- args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh")
- args.pooler_dropout = getattr(args, "pooler_dropout", 0.0)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py
deleted file mode 100644
index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-from fairseq.data import data_utils
-
-from . import BaseWrapperDataset
-
-
-class TruncateDataset(BaseWrapperDataset):
- """Truncate a sequence by returning the first truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length):
- super().__init__(dataset)
- assert truncation_length is not None
- self.truncation_length = truncation_length
- self.dataset = dataset
-
- def __getitem__(self, index):
- item = self.dataset[index]
- item_len = item.size(0)
- if item_len > self.truncation_length:
- item = item[: self.truncation_length]
- return item
-
- @property
- def sizes(self):
- return np.minimum(self.dataset.sizes, self.truncation_length)
-
- def __len__(self):
- return len(self.dataset)
-
-
-class RandomCropDataset(TruncateDataset):
- """Truncate a sequence by returning a random crop of truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length, seed=1):
- super().__init__(dataset, truncation_length)
- self.seed = seed
- self.epoch = 0
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return True # only the crop changes, not item sizes
-
- def set_epoch(self, epoch, **unused):
- super().set_epoch(epoch)
- self.epoch = epoch
-
- def __getitem__(self, index):
- with data_utils.numpy_seed(self.seed, self.epoch, index):
- item = self.dataset[index]
- item_len = item.size(0)
- excess = item_len - self.truncation_length
- if excess > 0:
- start_idx = np.random.randint(0, excess)
- item = item[start_idx : start_idx + self.truncation_length]
- return item
-
-
-def maybe_shorten_dataset(
- dataset,
- split,
- shorten_data_split_list,
- shorten_method,
- tokens_per_sample,
- seed,
-):
- truncate_split = (
- split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0
- )
- if shorten_method == "truncate" and truncate_split:
- dataset = TruncateDataset(dataset, tokens_per_sample)
- elif shorten_method == "random_crop" and truncate_split:
- dataset = RandomCropDataset(dataset, tokens_per_sample, seed)
- return dataset
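-
-
-def example_shorten(dataset):
-    """Illustrative usage sketch: shows how a task might call maybe_shorten_dataset.
-    The split name and argument values here are assumptions, not fairseq defaults."""
-    return maybe_shorten_dataset(
-        dataset,
-        split="train",
-        shorten_data_split_list="",  # an empty string applies shortening to every split
-        shorten_method="truncate",   # or "random_crop"
-        tokens_per_sample=512,
-        seed=1,
-    )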
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md
deleted file mode 100644
index 02892bc9dd4344e550596d238e2b71870cfc7dd3..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md
+++ /dev/null
@@ -1,220 +0,0 @@
-# vakyansh-tts
-Text to Speech for Indic languages
-
-## 1. Installation and Setup for training
-
-Clone repo
-Note : for multispeaker glow-tts training, use the [multispeaker](https://github.com/Open-Speech-EkStep/vakyansh-tts/tree/multispeaker) branch
-```
-git clone https://github.com/Open-Speech-EkStep/vakyansh-tts
-```
-Build conda virtual environment
-```
-cd ./vakyansh-tts
-conda create --name <env_name> python=3.7
-conda activate <env_name>
-pip install -r requirements.txt
-```
-Install [apex](https://github.com/NVIDIA/apex); commit: 37cdaf4 for Mixed-precision training
-
-Note : used only for glow-tts
-```
-cd ..
-git clone https://github.com/NVIDIA/apex
-cd apex
-git checkout 37cdaf4
-pip install -v --disable-pip-version-check --no-cache-dir ./
-cd ../vakyansh-tts
-```
-Build Monotonic Alignment Search Code (Cython)
-
-Note : used only for glow-tts
-```
-bash install.sh
-```
-
-## 2. Data Resampling
-
-The data should consist of a folder containing all the .wav files (used for glow-tts) and a text file listing each filename with its sentence.
-
-Directory structure:
-
-```
-language_folder_name
-|-- ./wav/*.wav
-|-- ./text_file_name.txt
-```
-The format for text_file_name.txt (the text file is only needed for glow-tts training):
-
-```
-( audio1.wav "Sentence1." )
-( audio2.wav "Sentence2." )
-```
-
-To resample the .wav files to a 22050 Hz sample rate, change the following parameters in the vakyansh-tts/scripts/data/resample.sh file:
-
-```
-input_wav_path : absolute path to wav file folder in vakyansh_tts/data/
-output_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-output_sample_rate : 22050 (or any other desired sample rate)
-```
-
-To run:
-```bash
-cd scripts/data/
-bash resample.sh
-```
-
-
-## 3. Spectrogram Training (glow-tts)
-
-### 3.1 Data Preparation
-
-
-To prepare the data, edit the vakyansh-tts/scripts/glow/prepare_data.sh file and change the following parameters:
-```
-input_text_path : absolute path to vakyansh_tts/data/text_file_name.txt
-input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-gender : female or male voice
-```
-To run:
-```bash
-cd scripts/glow/
-bash prepare_data.sh
-```
-### 3.2 Training glow-tts
-
-To start the spectrogram training, edit the vakyansh-tts/scripts/glow/train_glow.sh file and change the following parameter:
-```
-gender : female or male voice
-```
-Make sure that the gender is the same as in the prepare_data.sh file
-
-To start the training, run:
-```bash
-cd scripts/glow/
-bash train_glow.sh
-```
-## 4. Vocoder Training (hifi-gan)
-
-### 4.1 Data Preparation
-
-To prepare the data, edit the vakyansh-tts/scripts/hifi/prepare_data.sh file and change the following parameters:
-```
-input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-gender : female or male voice
-```
-To run:
-```bash
-cd scripts/hifi/
-bash prepare_data.sh
-```
-### 4.2 Training hifi-gan
-
-To start the vocoder training, edit the vakyansh-tts/scripts/hifi/train_hifi.sh file and change the following parameter:
-```
-gender : female or male voice
-```
-Make sure that the gender is the same as in the prepare_data.sh file
-
-To start the training, run:
-```bash
-cd scripts/hifi/
-bash train_hifi.sh
-```
-
-## 5. Inference
-
-### 5.1 Using Gradio
-
-To use the Gradio link, edit the following parameters in the vakyansh-tts/scripts/inference/gradio.sh file:
-```
-gender : female or male voice
-device : cpu or cuda
-lang : language code
-```
-
-To run:
-```bash
-cd scripts/inference/
-bash gradio.sh
-```
-### 5.2 Using FastAPI
-To use the FastAPI link, edit the parameters in the vakyansh-tts/scripts/inference/api.sh file as in section 5.1
-
-To run:
-```bash
-cd scripts/inference/
-bash api.sh
-```
-
-### 5.3 Direct Inference using text
-To infer, edit the parameters in the vakyansh-tts/scripts/inference/infer.sh file as in section 5.1 and set the text to be synthesized in the text variable
-
-To run:
-```bash
-cd scripts/inference/
-bash infer.sh
-```
-
-To configure other parameters, there is a version that runs advanced inference as well. Additional parameters:
-```
-noise_scale : can vary from 0 to 1 for noise factor
-length_scale : can vary from 0 to 2 for changing the speed of the generated audio
-transliteration : whether to switch on/off transliteration. 1: ON, 0: OFF
-number_conversion : whether to switch on/off number to words conversion. 1: ON, 0: OFF
-split_sentences : whether to switch on/off splitting of sentences. 1: ON, 0: OFF
-```
-To run:
-```
-cd scripts/inference/
-bash advanced_infer.sh
-```
-
-### 5.4 Installation of tts_infer package
-
-In tts_infer package, we currently have two components:
-
- 1. Transliteration (AI4bharat's open sourced models) (Languages supported: {'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'pa', 'gom', 'mai', 'ml', 'sd', 'si', 'ur'} )
-
- 2. Num to Word (Languages supported: {'en', 'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'or', 'pa'} )
-```
-git clone https://github.com/Open-Speech-EkStep/vakyansh-tts
-cd vakyansh-tts
-bash install.sh
-python setup.py bdist_wheel
-pip install -e .
-cd tts_infer
-gsutil -m cp -r gs://vakyaansh-open-models/translit_models .
-```
-
-Usage: Refer to example file in tts_infer/
-```
-from tts_infer.tts import TextToMel, MelToWav
-from tts_infer.transliterate import XlitEngine
-from tts_infer.num_to_word_on_sent import normalize_nums
-
-import re
-from scipy.io.wavfile import write
-
-text_to_mel = TextToMel(glow_model_dir='/path/to/glow-tts/checkpoint/dir', device='cuda')
-mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi/checkpoint/dir', device='cuda')
-
-def translit(text, lang):
- reg = re.compile(r'[a-zA-Z]')
- engine = XlitEngine(lang)
- words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()]
- updated_sent = ' '.join(words)
- return updated_sent
-
-def run_tts(text, lang):
- text = text.replace('।', '.') # only for hindi models
- text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang
- text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang
-
- mel = text_to_mel.generate_mel(text_num_to_word_and_transliterated)
- audio, sr = mel_to_wav.generate_wav(mel)
- write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed
- return (sr, audio)
-```
diff --git a/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py b/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py
deleted file mode 100644
index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000
--- a/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2").launch()
\ No newline at end of file
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py
deleted file mode 100644
index 77ad9d3adc4fabb6b6eee099a60b9793cef2dfa2..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright 2021 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import ast
-import gradio as gr
-from os.path import isdir
-from data_measurements.dataset_statistics import DatasetStatisticsCacheClass as dmt_cls
-import utils
-from utils import dataset_utils
-from utils import gradio_utils as gr_utils
-import widgets
-import app as ap
-from app import load_or_prepare_widgets
-
-
-logs = utils.prepare_logging(__file__)
-
-# Utility for sidebar description and selection of the dataset
-DATASET_NAME_TO_DICT = dataset_utils.get_dataset_info_dicts()
-
-
-def get_load_prepare_list(dstats):
- """
- # Get load_or_prepare functions for the measurements we will display
- """
- # Measurement calculation:
- # Add any additional modules and their load-prepare function here.
- load_prepare_list = [
- ("text_lengths", dstats.load_or_prepare_text_lengths),
- ]
-
- return load_prepare_list
-
-
-def get_ui_widgets():
- """Get the widgets that will be displayed in the UI."""
- return [
- widgets.TextLengths(),]
-
-
-def get_widgets():
- """
- # A measurement widget requires 2 things:
- # - A load or prepare function
- # - A display function
- # We define these in two separate functions get_load_prepare_list and get_ui_widgets;
- # any widget can be added by modifying both functions and the rest of the app logic will work.
- # get_load_prepare_list is a function since it requires a DatasetStatisticsCacheClass which will
- # not be created until dataset and config values are selected in the ui
- """
- return get_load_prepare_list, get_ui_widgets()
-
-
-def get_title(dstats):
- title_str = f"### Showing: {dstats.dset_name} - {dstats.dset_config} - {dstats.split_name} - {'-'.join(dstats.text_field)}"
- logs.info("showing header")
- return title_str
-
-
-def display_initial_UI():
- """Displays the header in the UI"""
- # Extract the selected arguments
- dataset_args = gr_utils.sidebar_selection(DATASET_NAME_TO_DICT)
- return dataset_args
-
-
-
-
-def show_column(dstats, display_list, show_perplexities, column_id=""):
- """
-    Function for displaying the elements in the gradio app.
- Args:
- dstats (class): The dataset_statistics.py DatasetStatisticsCacheClass
- display_list (list): List of tuples for (widget_name, widget_display_function)
- show_perplexities (Bool): Whether perplexities should be loaded and displayed for this dataset
- column_id (str): Which column of the dataset the analysis is done on [DEPRECATED for v1]
- """
-
- # start showing stuff
- gr_utils.expander_header(dstats, DATASET_NAME_TO_DICT)
- for widget_tuple in display_list:
- widget_type = widget_tuple[0]
- widget_fn = widget_tuple[1]
- logs.info("showing %s." % widget_type)
- try:
- widget_fn(dstats, column_id)
- except Exception as e:
- logs.warning("Jk jk jk. There was an issue with %s:" % widget_type)
- logs.exception(e)
- # TODO: Fix how this is a weird outlier.
- if show_perplexities:
- gr_utils.expander_text_perplexities(dstats, column_id)
- logs.info("Have finished displaying the widgets.")
-
-
-def create_demo(live: bool, pull_cache_from_hub: bool):
- with gr.Blocks() as demo:
- state = gr.State()
- with gr.Row():
- with gr.Column(scale=1):
- dataset_args = display_initial_UI()
- get_load_prepare_list_fn, widget_list = get_widgets()
- # # TODO: Make this less of a weird outlier.
- # Doesn't do anything right now
- show_perplexities = gr.Checkbox(label="Show text perplexities")
- with gr.Column(scale=4):
- gr.Markdown("# Data Measurements Tool")
- title = gr.Markdown()
- for widget in widget_list:
- widget.render()
-        # when the UI updates, parse the newly selected text field and pass it to the update function
- def update_ui(dataset: str, config: str, split: str, feature: str):
- feature = ast.literal_eval(feature)
- label_field, label_names = gr_utils.get_label_names(dataset, config, DATASET_NAME_TO_DICT)
- dstats = dmt_cls(dset_name=dataset, dset_config=config, split_name=split, text_field=feature,
- label_field=label_field, label_names=label_names, use_cache=True)
- load_prepare_list = get_load_prepare_list_fn(dstats)
- dstats = load_or_prepare_widgets(dstats, load_prepare_list, show_perplexities=False,
- live=live, pull_cache_from_hub=pull_cache_from_hub)
- output = {title: get_title(dstats), state: dstats}
- for widget in widget_list:
- output.update(widget.update(dstats))
- return output
-
- def update_dataset(dataset: str):
- new_values = gr_utils.update_dataset(dataset, DATASET_NAME_TO_DICT)
- config = new_values[0][1]
- feature = new_values[1][1]
- split = new_values[2][1]
- new_dropdown = {
- dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[1][0], value=feature),
- dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[2][0], value=split),
- }
- return new_dropdown
-
- def update_config(dataset: str, config: str):
- new_values = gr_utils.update_config(dataset, config, DATASET_NAME_TO_DICT)
-
- feature = new_values[0][1]
- split = new_values[1][1]
- new_dropdown = {
- dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[0][0], value=feature),
- dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[1][0], value=split)
- }
- return new_dropdown
-
- measurements = [comp for output in widget_list for comp in output.output_components]
- demo.load(update_ui,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"], dataset_args["split_name"], dataset_args["text_field"]],
- outputs=[title, state] + measurements)
- print(dataset_args["text_field"])
- for widget in widget_list:
- widget.add_events(state)
-
- dataset_args["dset_name"].change(update_dataset,
- inputs=[dataset_args["dset_name"]],
- outputs=[dataset_args["dset_config"],
- dataset_args["split_name"], dataset_args["text_field"],
- title, state] + measurements)
-
- dataset_args["dset_config"].change(update_config,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"]],
- outputs=[dataset_args["split_name"], dataset_args["text_field"],
- title, state] + measurements)
-
- dataset_args["calculate_btn"].click(update_ui,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"],
- dataset_args["split_name"], dataset_args["text_field"]],
- outputs=[title, state] + measurements)
- return demo
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--live", default=False, required=False, action="store_true", help="Flag to specify that this is not running live.")
- parser.add_argument(
- "--pull_cache_from_hub", default=False, required=False, action="store_true", help="Flag to specify whether to look in the hub for measurements caches. If you are using this option, you must have HUB_CACHE_ORGANIZATION= and HF_TOKEN= on separate lines in a file named .env at the root of this repo.")
- arguments = parser.parse_args()
- live = arguments.live
- pull_cache_from_hub = arguments.pull_cache_from_hub
-
- # Create and initialize the demo
- dataset_args = display_initial_UI()
- demo = create_demo(live, pull_cache_from_hub)
- print("this is the cureenrt TEXT:")
- print(dataset_args["text_field"])
-
- demo.launch()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html b/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html
deleted file mode 100644
index 221664ad7b1306dc83bc68b640ae9f2927e46f47..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html
+++ /dev/null
@@ -1,248 +0,0 @@
-
- Super Godot Galaxy
-
diff --git a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py
deleted file mode 100644
index c5fa76039ff98c18d3c14b5f4a8f73ffe644de11..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import multilingual_translation_latent_depth # noqa
-from .loss import latent_depth # noqa
-from .models import latent_multilingual_transformer # noqa
-from .modules import latent_layers # noqa
diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py
deleted file mode 100644
index c5853ac187e6e3ae522b0ef1aabefc7b188f7083..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# python3.7
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Utility functions to setup customized operators.
-
-Please refer to https://github.com/NVlabs/stylegan3
-"""
-
-# pylint: disable=line-too-long
-# pylint: disable=multiple-statements
-# pylint: disable=missing-function-docstring
-# pylint: disable=useless-suppression
-# pylint: disable=inconsistent-quotes
-
-import glob
-import hashlib
-import importlib
-import os
-import re
-import shutil
-import uuid
-
-import torch
-import torch.utils.cpp_extension
-
-#----------------------------------------------------------------------------
-# Global options.
-
-verbosity = 'none' # Verbosity level: 'none', 'brief', 'full'
-
-#----------------------------------------------------------------------------
-# Internal helper funcs.
-
-def _find_compiler_bindir():
- patterns = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin',
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-def _find_compiler_bindir_posix():
- patterns = [
- '/usr/local/cuda/bin'
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-#----------------------------------------------------------------------------
-
-def _get_mangled_gpu_name():
- name = torch.cuda.get_device_name().lower()
- out = []
- for c in name:
- if re.match('[a-z0-9_-]+', c):
- out.append(c)
- else:
- out.append('-')
- return ''.join(out)
-
-#----------------------------------------------------------------------------
-# Main entry point for compiling and loading C++/CUDA plugins.
-
-_cached_plugins = dict()
-
-def get_plugin(module_name, sources, headers=None, source_dir=None, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
- if headers is None:
- headers = []
- if source_dir is not None:
- sources = [os.path.join(source_dir, fname) for fname in sources]
- headers = [os.path.join(source_dir, fname) for fname in headers]
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
- verbose_build = (verbosity == 'full')
-
- # Compile and load.
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- elif os.name == 'posix':
- compiler_bindir = _find_compiler_bindir_posix()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find NVCC installation on this computer. Check _find_compiler_bindir_posix() in "{__file__}".')
-            os.environ['PATH'] += os.pathsep + compiler_bindir  # use the platform path separator (':' on POSIX)
-
- # Some containers set TORCH_CUDA_ARCH_LIST to a list that can either
- # break the build or unnecessarily restrict what's available to nvcc.
- # Unset it to let nvcc decide based on what's available on the
- # machine.
- os.environ['TORCH_CUDA_ARCH_LIST'] = ''
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- #
- # EDIT: We now do it regardless of TORCH_EXTENSIONS_DIR, in order to work
- # around the *.cu dependency bug in ninja config.
- #
- all_source_files = sorted(sources + headers)
- all_source_dirs = set(os.path.dirname(fname) for fname in all_source_files)
- if len(all_source_dirs) == 1: # and ('TORCH_EXTENSIONS_DIR' in os.environ):
-
- # Compute combined hash digest for all source files.
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
-
- # Select cached build directory name.
- source_digest = hash_md5.hexdigest()
- build_top_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access
- cached_build_dir = os.path.join(build_top_dir, f'{source_digest}-{_get_mangled_gpu_name()}')
-
- if not os.path.isdir(cached_build_dir):
- tmpdir = f'{build_top_dir}/srctmp-{uuid.uuid4().hex}'
- os.makedirs(tmpdir)
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(tmpdir, os.path.basename(src)))
- try:
- os.replace(tmpdir, cached_build_dir) # atomic
- except OSError:
- # source directory already exists, delete tmpdir and its contents.
- shutil.rmtree(tmpdir)
- if not os.path.isdir(cached_build_dir): raise
-
- # Compile.
- cached_sources = [os.path.join(cached_build_dir, os.path.basename(fname)) for fname in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir,
- verbose=verbose_build, sources=cached_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
-
- # Load.
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache dict.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
-
-#----------------------------------------------------------------------------
-
-# pylint: enable=line-too-long
-# pylint: enable=multiple-statements
-# pylint: enable=missing-function-docstring
-# pylint: enable=useless-suppression
-# pylint: enable=inconsistent-quotes
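For reference, a hedged usage sketch of `get_plugin()` defined above: the module name and the source/header filenames are hypothetical stand-ins for a real CUDA extension, and `extra_cuda_cflags` is simply forwarded to `torch.utils.cpp_extension.load` via `**build_kwargs`.

```python
# Assumes the file above is importable as `custom_ops`; all filenames are hypothetical.
import os
import custom_ops

custom_ops.verbosity = 'brief'  # print one-line status messages while building

plugin = custom_ops.get_plugin(
    module_name='my_bias_act_plugin',
    sources=['bias_act.cpp', 'bias_act.cu'],   # compiled by torch.utils.cpp_extension.load
    headers=['bias_act.h'],                    # hashed into the cached build directory name
    source_dir=os.path.dirname(__file__),      # sources/headers resolved relative to this dir
    extra_cuda_cflags=['--use_fast_math'],     # forwarded via **build_kwargs
)
# Subsequent calls with the same module_name return the cached module immediately.
```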
diff --git a/spaces/Ikaros521/moe-tts/transforms.py b/spaces/Ikaros521/moe-tts/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/moe-tts/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
-    # Return the index of the bin each input falls into. Note that the last
-    # bin edge is widened by eps in place so inputs equal to the right
-    # boundary still land in the final bin.
-    bin_locations[..., -1] += eps
-    return torch.sum(
-        inputs[..., None] >= bin_locations,
-        dim=-1
-    ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
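To make the expected tensor layout concrete, here is a small usage sketch of `piecewise_rational_quadratic_transform()` on the `'linear'` tails path, assuming the file above is importable as `transforms`. The batch size and bin count are arbitrary; the one fixed relationship is that the unnormalized derivatives carry `num_bins - 1` values, which the linear-tails branch pads out to `num_bins + 1`.

```python
import torch
from transforms import piecewise_rational_quadratic_transform  # the module above

batch, num_bins = 4, 10
inputs = torch.rand(batch) * 2 - 1                           # values inside tail_bound = 1.0
unnormalized_widths = torch.randn(batch, num_bins)           # softmaxed into bin widths
unnormalized_heights = torch.randn(batch, num_bins)          # softmaxed into bin heights
unnormalized_derivatives = torch.randn(batch, num_bins - 1)  # padded to num_bins + 1 for 'linear' tails

outputs, logabsdet = piecewise_rational_quadratic_transform(
    inputs,
    unnormalized_widths,
    unnormalized_heights,
    unnormalized_derivatives,
    inverse=False,
    tails='linear',
    tail_bound=1.0,
)
assert outputs.shape == inputs.shape and logabsdet.shape == inputs.shape
```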
diff --git a/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp b/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp
deleted file mode 100644
index 9d433f4b1acf01019344e66ce9eea59e7ed7d299..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp
+++ /dev/null
@@ -1,1133 +0,0 @@
-#ifndef CMPNCT_GPT2BPE
-#define CMPNCT_GPT2BPE
-
-#include
-#include
-#include
-#include
-#include
-#include