Acronis Backup 12.5.1 Build 14240 Crack: A Reliable and Flexible Solution for Data Protection
-
Acronis Backup 12.5.1 Build 14240 Crack is powerful and versatile software that provides comprehensive data protection for any environment, including physical, virtual, cloud, mobile, and applications. With Acronis Backup 12.5.1 Build 14240 Crack, you can easily back up and restore your data, manage your backup policies, monitor your backup activities, and recover your data in minutes.
Acronis Backup 12.5.1 Build 14240 Crack is the latest update of Acronis Backup 12.5, which was released in August 2019. This update introduces several new features and enhancements, such as:
-
-
The enhanced backup option Performance and backup window (formerly Performance) enables you to set one of three levels of backup performance (high, low, prohibited) for every hour within a week. The high and low levels are configurable in terms of process priority and output speed[^1^].
-
The new option Enable backup validation enables you to automatically validate your backups after they are created or according to a schedule. You can also specify the number of backups to keep validated[^2^].
-
The new option Enable ransomware protection enables you to protect your backups from ransomware attacks by detecting and blocking unauthorized encryption attempts[^2^].
-
The new option Enable deduplication enables you to reduce the storage space required for your backups by eliminating duplicate data blocks[^2^]. (A conceptual sketch of how deduplication, compression, and encryption fit together follows this list.)
-
The new option Enable encryption enables you to encrypt your backups with AES-256 algorithm to ensure data security and privacy[^2^].
-
The new option Enable compression enables you to compress your backups to save storage space and bandwidth[^2^].
-
The new option Enable notifications enables you to receive email notifications about the status of your backup operations[^2^].
-
The new option Enable reports enables you to generate and view detailed reports about your backup activities and performance[^2^].
-
-
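Deduplication, compression, and AES-256 encryption as listed above are standard backup-engine techniques rather than anything unique to this product. The following is a rough, hypothetical Python sketch of the general idea only, not Acronis's implementation: a file is split into fixed-size blocks, already-seen blocks are skipped by hash, and unique blocks are compressed and encrypted with AES-256-GCM (the `cryptography` package is assumed to be installed).

```python
import hashlib
import os
import zlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size blocks; real products tune or vary this


def backup_file(path, store, key):
    """Split a file into blocks and store each unique block compressed and encrypted.

    `store` is a plain dict {sha256_hex: nonce + ciphertext} standing in for a backup
    repository, and `key` is a 32-byte AES-256 key. Returns the ordered block manifest
    needed to restore the file.
    """
    aes = AESGCM(key)
    manifest = []
    with open(path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            manifest.append(digest)
            if digest not in store:              # deduplication: identical blocks stored once
                compressed = zlib.compress(block)    # compression
                nonce = os.urandom(12)
                store[digest] = nonce + aes.encrypt(nonce, compressed, None)  # AES-256-GCM
    return manifest


def restore_file(manifest, store, key, out_path):
    """Rebuild a file by decrypting and decompressing its blocks in manifest order."""
    aes = AESGCM(key)
    with open(out_path, "wb") as out:
        for digest in manifest:
            blob = store[digest]
            nonce, ciphertext = blob[:12], blob[12:]
            out.write(zlib.decompress(aes.decrypt(nonce, ciphertext, None)))


if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    store = {}
    manifest = backup_file("example.bin", store, key)   # hypothetical input file
    restore_file(manifest, store, key, "restored.bin")
```

Real backup products layer variable-size chunking, integrity verification, and metadata catalogs on top of this basic pipeline.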
Acronis Backup 12.5.1 Build 14240 Crack supports a wide range of operating systems, platforms, and applications, such as Windows, Linux, Mac OS X, VMware, Hyper-V, Citrix XenServer, Oracle VM Server, Microsoft Exchange Server, Microsoft SQL Server, Microsoft SharePoint Server, Microsoft Active Directory, Microsoft Office 365, Google G Suite, Amazon EC2, Azure VMs, iOS, Android, and more[^2^].
-
Acronis Backup 12.5.1 Build 14240 Crack is a reliable and flexible solution for data protection that can meet the needs of any business size and complexity. With Acronis Backup 12.5.1 Build 14240 Crack, you can ensure the availability and security of your data while saving time and money.
-
Acronis Backup 12.5.1 Build 14240 Crack: What Customers Say
-
Acronis Backup 12.5.1 Build 14240 Crack is not only powerful and versatile data protection software, but also a highly rated and recommended solution among customers who have used it. According to TrustRadius, a platform for verified user reviews, Acronis Backup 12.5 has an average rating of 7.7 out of 10 based on 136 reviews and ratings[^3^]. Here are some of the pros and cons that customers have shared about Acronis Backup 12.5.1 Build 14240 Crack:
-
Pros
-
-
Acronis Backup 12.5.1 Build 14240 Crack offers excellent backup speeds, which can save time and resources for backup operations.
-
Acronis Backup 12.5.1 Build 14240 Crack supports a wide range of platforms, including physical, virtual, cloud, mobile, and applications, which can provide comprehensive data protection for any environment.
-
Acronis Backup 12.5.1 Build 14240 Crack is easy to deploy, manage, and use, with a web-based console that provides a customizable dashboard and drag-and-drop widgets.
-
Acronis Backup 12.5.1 Build 14240 Crack provides valuable ransomware protection with Acronis Active Protection, which can detect and block unauthorized encryption attempts and restore affected files.
-
Acronis Backup 12.5.1 Build 14240 Crack offers flexible backup options, such as universal recovery, recovery verification, instant recovery, incremental backup identification, deduplication, encryption, compression, notifications, and reports[^2^].
-
-
Cons
-
-
Acronis Backup 12.5.1 Build 14240 Crack may have some issues with recovery time objectives (RTO), which may lag slightly behind the stated speeds.
-
Acronis Backup 12.5.1 Build 14240 Crack may not be able to recover data for a single instance installation[^3^].
-
Acronis Backup 12.5.1 Build 14240 Crack may be pricier than other solutions for small amounts of data to backup[^3^].
-
-
Overall, customers are satisfied with Acronis Backup 12.5.1 Build 14240 Crack and its features, performance, reliability, and support. Many customers have praised Acronis Backup 12.5.1 Build 14240 Crack as a solid solution for data protection that can meet the needs of any business size and complexity[^3^].
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md
deleted file mode 100644
index bc39300b516826bd4d0454abc9f164d77957010f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
How to Download and Install Navisworks Exporter for Revit
-
Navisworks Exporter for Revit is a plug-in that allows you to export Revit models as NWC files that can be opened and viewed in Navisworks. NWC files are optimized for performance and collaboration, and can be used for clash detection, coordination, and simulation.
On the Autodesk Navisworks downloads page, scroll down to the section entitled Navisworks NWC Export Utility and click on the link that matches your Revit version and operating system.
-
Save the file to your computer and run the installer. Follow the instructions on the screen to complete the installation.
-
Restart Revit if it was running during the installation.
-
To export a Revit model as an NWC file, click Add-Ins > External Tools > Autodesk Navisworks. In the Export Scene As dialog box, click the Autodesk Navisworks Settings button. Adjust the settings for your export and click OK. Then choose a location and a name for your NWC file and click Save.
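If you would rather script this export than click through the Add-Ins menu, the Revit API exposes the same exporter. The snippet below is only a rough pyRevit-style sketch under several assumptions: `__revit__` is the pyRevit-provided UIApplication, and the NavisworksExportOptions/NavisworksExportScope names, the ExportLinks property, and the output path are from memory and should be checked against the Revit API documentation for your Revit version.

```python
# Rough pyRevit / Revit Python Shell sketch -- verify names against the Revit API docs.
import clr
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import NavisworksExportOptions, NavisworksExportScope

doc = __revit__.ActiveUIDocument.Document  # active document (pyRevit convention)

options = NavisworksExportOptions()
options.ExportScope = NavisworksExportScope.Model  # export the whole model, not just a view
options.ExportLinks = True                         # include linked models (assumed property)

# Writes C:\exports\MyModel.nwc (hypothetical output folder and file name)
doc.Export(r"C:\exports", "MyModel", options)
```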
-
-
Congratulations! You have successfully downloaded and installed Navisworks Exporter for Revit and exported your first NWC file.
-
NWC files are a great way to share and collaborate on Revit models with other stakeholders. You can use Navisworks to open and view NWC files, as well as combine them with other NWC files from different disciplines and sources. You can also use Navisworks to perform various tasks on the NWC files, such as:
-
-
-
Clash detection: You can check for interferences and conflicts between different elements and objects in the NWC files, and generate reports and markups to resolve them.
-
Coordination: You can align and synchronize the NWC files with a common coordinate system and time frame, and create federated models that show the whole project.
-
Simulation: You can create animations and walkthroughs of the NWC files, and simulate the construction sequence and schedule of the project.
-
-
To view an NWC file in Navisworks, you need to have Navisworks installed on your computer. You can download a free trial version of Navisworks from this page: Navisworks Free Trial. Once you have Navisworks installed, you can open an NWC file by clicking File > Open and browsing to the location of the file. You can also drag and drop the file into the Navisworks window.
-
You can adjust the settings for future exports of NWC files from Revit by using the Options Editor in Navisworks. To access the Options Editor, click File > Options. Expand the File Exporters node and click the Revit page. Here you can change various options for your export, such as:
-
-
Export mode: You can choose between exporting the entire project or only selected elements.
-
Export properties: You can choose which properties to include in the NWC file, such as categories, materials, phases, etc.
-
Export geometry: You can choose how to export the geometry of the Revit model, such as using tessellation or solids.
-
Export links: You can choose how to export linked Revit models or CAD files, such as embedding them or using relative paths.
-
-
You can also save your export settings as a profile and load it later for convenience.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md
deleted file mode 100644
index 8eeab23ab59c8744d64c133f4aea8c5313a95cc4..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
When you choose a cell in Battery, you can create a new Hit or Hit Mix that includes various samples. Battery provides an offline sampler that makes it possible to import the audio samples of your choice. There is also a host of processing options and advanced effects to use with the samples in the collection.
Battery is available as a free trial version, though not all features are included in it. In the trial version, you can load the samples that are already installed and download sounds from the online sampler. The trial also lets you preview the recorded drums, effects, and EQ details and provides full access to the extensive online sampler. However, additional modules, multi-track editing, importing of audio clips, and creating custom kits are not available in the trial. To use the trial version, you must register for a free Native Instruments account, which has its own limitations, and the trial cannot be downloaded directly to the desktop. Battery 3 offers 16 new percussion instruments drawn from the most popular electronic percussion instruments. There are presets for everything from traditional acoustic drum kits to entire electronic drum kits, including modern elements such as hi-hats, rides, toms, cymbals, an A/D core, and much more. These kits are built on a specially designed rack with thousands of samples tuned for real-time performance, helping the user create realistic drum sounds. Production is straightforward, with space for mixing real-time samples and virtual racks for building your own kits. Battery also enables the user to get started quickly and easily.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md b/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md
deleted file mode 100644
index 541796ca90648f615492cc7f2ef1ec6808ed7ab8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
How to Download Acronis True Image 2018
-
If you are looking for reliable and easy-to-use backup software that can protect your data and system from any disaster, you might want to consider Acronis True Image 2018. This software is one of the best on the market, offering a comprehensive set of features and tools that can help you create, manage, and restore backups of your files, disks, partitions, or entire machines. In this article, we will show you how to download Acronis True Image 2018, how to install and activate it, how to use its main functions, and how to get help and support if you need it.
-
What is Acronis True Image 2018 and why you need it
-
Acronis True Image 2018 is a personal cyber protection solution that delivers easy-to-use, efficient, and secure backup and recovery of your data and system. It can help you prevent data loss due to hardware failure, malware infection, accidental deletion, theft, or natural disaster. It can also help you migrate your data to a new device, clone your disk to a new drive, archive your files to save space, or verify the authenticity of your data with blockchain technology.
Acronis True Image 2018 offers a rich set of features that can meet your backup needs. Some of the main features are:
-
-
Disk and partition backup: You can create an exact copy of your entire disk or partition, including the operating system, applications, settings, and data. This is useful for restoring your system in case of a crash or replacing your disk with a new one.
-
File backup: You can back up individual files or folders to local, network, or cloud storage. You can also choose the backup type (full, incremental, or differential) and the backup frequency (daily, weekly, monthly, or on event). (A short sketch after this list illustrates how the three backup types differ.)
-
Backup to cloud: You can store your backups in Acronis Cloud, a secure online storage that offers unlimited space for your data. You can access your backups from any device or location, as well as sync them across multiple devices.
-
Recovery: You can restore your data or system from any backup source (local, network, cloud) to any destination (same or different device). You can also restore individual files or folders from disk or file backups.
-
Cloning: You can clone your disk to another disk of the same or different size. This is useful for upgrading your disk to a larger or faster one without reinstalling the operating system or applications.
-
Archiving: You can archive your files that you rarely use or need to a local drive or cloud storage. This can help you free up space on your disk and optimize its performance.
-
Active protection: You can protect your data from ransomware attacks with Acronis Active Protection. This feature monitors your system for suspicious activity and blocks any unauthorized encryption attempts. It also allows you to recover any affected files from a backup.
-
Notary: You can verify the integrity and authenticity of your data with Acronis Notary. This feature uses blockchain technology to create a unique digital fingerprint for your data and store it in a public ledger. You can then use this fingerprint to prove that your data has not been altered or tampered with.
-
-
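For readers unfamiliar with the backup types mentioned in the file backup option above, the difference is only the reference point: a full backup copies everything, a differential backup copies what changed since the last full backup, and an incremental backup copies what changed since the last backup of any kind. The Python sketch below is a simplified, hypothetical illustration of that selection logic using file modification times; real products track changes at block level instead.

```python
import os
import time


def files_to_back_up(root, backup_type, last_full_time, last_backup_time):
    """Return the files a backup run would copy for the chosen backup type."""
    if backup_type == "full":
        cutoff = 0                    # copy everything
    elif backup_type == "differential":
        cutoff = last_full_time       # changed since the last full backup
    elif backup_type == "incremental":
        cutoff = last_backup_time     # changed since the last backup of any kind
    else:
        raise ValueError(f"unknown backup type: {backup_type}")

    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                selected.append(path)
    return selected


if __name__ == "__main__":
    now = time.time()
    last_full, last_backup = now - 7 * 86400, now - 86400   # a week ago / yesterday
    changed = files_to_back_up("/data", "incremental", last_full, last_backup)  # hypothetical path
    print(f"{len(changed)} files would go into this incremental backup")
```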
System requirements for Acronis True Image 2018
-
To use Acronis True Image 2018, you need to have a device that meets the following minimum system requirements:
-
| Operating system | Hardware |
| --- | --- |
| Windows 7 SP1 or later (32-bit and 64-bit) | 1 GHz processor or faster |
| macOS 10.11 or later | 2 GB RAM or more |
| iOS 10.0 or later | 1.5 GB free disk space or more |
| Android 4.1 or later | A high-speed internet connection for cloud backup and recovery |
-
How to purchase and activate Acronis True Image 2018
-
To use Acronis True Image 2018, you need to purchase a subscription plan and activate the software with a license key. Here is how you can do that:
-
Pricing and subscription plans
-
Acronis True Image 2018 offers three licensing plans that vary in terms of features, cloud storage, and number of devices. You can choose the plan that suits your needs and budget. The plans are:
-
-
Standard: This is a one-time purchase plan that gives you access to the basic features of Acronis True Image 2018, such as disk and file backup, recovery, cloning, and archiving. It does not include cloud backup, active protection, notary, or any other advanced features. It also does not include any updates or upgrades. You can use this plan on one device only. The price of this plan is $49.99.
-
Advanced: This is a yearly subscription plan that gives you access to all the features of Acronis True Image 2018, including cloud backup, active protection, notary, and more. It also includes free updates and upgrades. You can use this plan on up to three devices. The price of this plan is $49.99 per year.
-
Premium: This is a yearly subscription plan that gives you access to all the features of Acronis True Image 2018, including cloud backup, active protection, notary, and more. It also includes free updates and upgrades. You can use this plan on up to five devices. The price of this plan is $99.99 per year.
-
-
Activation and licensing process
-
To activate Acronis True Image 2018, you need to have a license key that corresponds to your subscription plan. You can get the license key in one of the following ways:
-
-
Purchase online: You can purchase Acronis True Image 2018 online from the official website or from an authorized reseller. You will receive an email with the license key and a download link after completing the payment.
-
Purchase offline: You can purchase Acronis True Image 2018 offline from a retail store or a distributor. You will receive a box with the installation media and the license key inside.
-
Trial version: You can try Acronis True Image 2018 for free for 30 days by downloading the trial version from the official website. You will receive a trial license key by email after registering for the trial.
-
-
To activate Acronis True Image 2018, you need to enter the license key in the software interface after installing it on your device. You can also activate it online by logging in to your Acronis account and entering the license key there.
-
How to download and install Acronis True Image 2018
-
To download and install Acronis True Image 2018, you need to have a valid license key and an internet connection. Here is how you can do that:
-
Download link and installation file
-
You can download Acronis True Image 2018 from the official website or from the email that you received after purchasing or registering for the trial. The download link will direct you to the appropriate version of the software for your operating system (Windows, macOS, iOS, or Android). The installation file is a .exe file for Windows, a .dmg file for macOS, an .ipa file for iOS, and an .apk file for Android. The file size is about 500 MB for Windows and macOS, and about 100 MB for iOS and Android. You can save the file to your device or run it directly from the browser.
-
-
Installation steps and options
-
To install Acronis True Image 2018, you need to run the installation file and follow the instructions on the screen. The installation process is similar for all operating systems, but there may be some differences in the options and settings. Here are the general steps and options for installing Acronis True Image 2018:
-
-
Accept the license agreement: You need to read and accept the terms and conditions of the license agreement before proceeding with the installation.
-
Choose the installation type: You can choose between a typical installation or a custom installation. The typical installation will install the software with the default settings and options, while the custom installation will allow you to change some of them, such as the installation location, the components to install, and the language.
-
Enter the license key: You need to enter the license key that you received after purchasing or registering for the trial. The license key will activate the software and determine the features and subscription plan that you can use.
-
Sign in to your Acronis account: You need to sign in to your Acronis account or create one if you don't have one. Your Acronis account will allow you to manage your subscription, access your cloud backups, sync your data across devices, and get help and support.
-
Complete the installation: The installation will take a few minutes to complete. You may need to restart your device after the installation is finished.
-
-
How to use Acronis True Image 2018
-
After installing and activating Acronis True Image 2018, you can start using it to backup and protect your data and system. The software has a user-friendly interface that allows you to access its main functions and settings. Here is how you can use Acronis True Image 2018:
-
Backup and recovery options
-
To create a backup of your data or system, you need to select the source (the data or disk that you want to back up) and the destination (the location where you want to store the backup). You can also choose the backup type, frequency, encryption, notification, and other options. To restore your data or system from a backup, you need to select the backup source (the location where the backup is stored) and the recovery destination (the location where you want to restore the data or disk). You can also choose the recovery mode, options, and verification.
-
Cloning and archiving options
-
To clone your disk to another disk, you need to select the source disk (the disk that you want to clone) and the destination disk (the disk where you want to copy the data). You can also choose the cloning mode (automatic or manual) and options (such as resizing partitions or excluding files). To archive your files to another location, you need to select the source files (the files that you want to archive) and the destination location (the local drive or cloud storage where you want to store the archived files). You can also choose the archiving options (such as compression, encryption, or scheduling).
-
Active protection and notary options
-
To protect your data from ransomware attacks, you need to enable Acronis Active Protection in the software settings. This feature will monitor your system for suspicious activity and block any unauthorized encryption attempts. It will also allow you to recover any affected files from a backup. To verify the integrity and authenticity of your data, you need to use Acronis Notary in the software interface. This feature will create a unique digital fingerprint for your data and store it in a public ledger. You can then use this fingerprint to prove that your data has not been altered or tampered with.
-
How to get help and support for Acronis True Image 2018
-
If you have any questions or issues with Acronis True Image 2018, you can get help and support from various sources. Some of the main sources are:
-
Documentation and tutorials
-
You can find the user guide, the quick start guide, the FAQ, and the video tutorials for Acronis True Image 2018 on the official website. These resources will provide you with detailed information and instructions on how to use the software and its features.
-
Knowledge base and community forum
-
You can search for answers and solutions to common problems and errors in the knowledge base and the community forum on the official website. These resources will provide you with articles, tips, tricks, and advice from Acronis experts and other users.
-
Technical support and initial setup service
-
You can contact the technical support team by phone, email, or chat if you need assistance with installation, activation, configuration, or troubleshooting. The technical support team is available 24/7 and can help you resolve any issues or errors. You can also purchase the initial setup service if you want an Acronis technician to remotely install and configure the software for you.
-
Conclusion and FAQs
-
Acronis True Image 2018 is a powerful backup software that can protect your data and system from any disaster. It offers a comprehensive set of features and tools that can help you create, manage, and restore backups of your files, disks, partitions, or entire machines. It also offers cloud backup, active protection, notary, and other advanced features that can enhance your data security and integrity. To use Acronis True Image 2018, you need to purchase a subscription plan, activate the software with a license key, download and install the software on your device, and start using its main functions. You can also get help and support from various sources if you need it.
-
Here are some FAQs that you might have about Acronis True Image 2018:
-
-
Q: How can I update or upgrade Acronis True Image 2018?
-
A: If you have an active subscription plan (Advanced or Premium), you can update or upgrade Acronis True Image 2018 for free. You will receive notifications when a new version or update is available. You can also check for updates manually in the software settings. If you have a one-time purchase plan (Standard), you cannot update or upgrade Acronis True Image 2018 for free. You will need to purchase a new license key for the latest version.
-
Q: How can I cancel or renew my subscription plan?
-
A: If you have an active subscription plan (Advanced or Premium), you can cancel or renew it at any time. You can manage your subscription plan in your Acronis account online. You can also change your payment method, billing information, or subscription type there.
-
Q: How can I backup or restore my mobile device?
-
A: If you have an iOS or Android device, you can backup or restore it with Acronis True Image 2018. You need to download and install the Acronis Mobile app on your device and sign in with your Acronis account. You can then backup your contacts, photos, videos, messages, calendars, and other data to Acronis Cloud or another device. You can also restore your data from a backup to the same or a different device.
-
Q: How can I access my cloud backups?
-
A: If you have backed up your data to Acronis Cloud, you can access it from any device or location. You need to sign in to your Acronis account online or use the Acronis Mobile app on your device. You can then view, download, delete, or share your cloud backups.
-
Q: How can I contact Acronis technical support?
-
A: If you need assistance with Acronis True Image 2018, you can contact Acronis technical support by phone, email, or chat. You can find the contact information on the official website. You will need to provide your license key, product version, operating system, error message, and other details that can help them solve your problem.
-
-
I hope this article has helped you learn how to download Acronis True Image 2018 and use it to backup and protect your data and system. If you have any feedback or suggestions, please let me know in the comments below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md b/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md
deleted file mode 100644
index 3ddb6a40ef6a1e838b594043ffaaf0877b69d2e5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
How to Download I.O.I's Very Very Very Song and Enjoy Its Catchy Melody
-
If you are a fan of K-pop, you might have heard of I.O.I, a girl group project that was formed through a survival reality show called Produce 101. The group consisted of 11 members who were selected from different agencies and debuted in 2016. They released two mini-albums and several singles before disbanding in 2017.
-
One of their most popular songs is Very Very Very, which was released as the title track of their second mini-album Miss Me? in October 2016. The song was composed by Park Jin-young, the founder of JYP Entertainment, and has a catchy melody and lyrics that express a girl's feelings for a boy. The song topped various music charts in South Korea and won several music awards.
If you love this song and want to listen to it anytime and anywhere, you might want to download it and enjoy it offline. Downloading the song can save you data and battery, as well as allow you to play it on different devices without an internet connection. In this article, we will show you how to download I.O.I's Very Very Very song from different platforms, and how to enjoy it offline.
-
How to Download the Song from Different Platforms
-
There are many platforms where you can stream or download I.O.I's Very Very Very song, such as YouTube, Spotify, Apple Music, etc. However, not all of them offer free downloads or easy access. Here are some ways you can download the song from these platforms:
-
YouTube
-
YouTube is one of the most popular platforms where you can watch I.O.I's Very Very Very music video and listen to their song. However, if you want to download the song from YouTube, you have two options:
-
-
Use YouTube Premium or YouTube Music. These are subscription services that allow you to download videos and songs from YouTube and play them offline. You can sign up for a free trial or pay a monthly fee to use these services. To download I.O.I's Very Very Very song from YouTube Premium or YouTube Music, you need to:
-
-
Open the YouTube app on your device and search for I.O.I's Very Very Very music video or song.
-
Tap on the download icon below the video or song and choose the quality you want.
-
Wait for the download to finish and then go to your library or downloads section to find your downloaded video or song.
-
-
Use third-party tools or websites. These are free or paid tools or websites that allow you to download videos and songs from YouTube by copying and pasting their URLs. However, these tools or websites may not be safe, legal, or reliable. They may also have limited features, quality, or speed. To download I.O.I's Very Very Very song from third-party tools or websites, you need to:
-
-
Open the YouTube app or website on your device and search for I.O.I's Very Very Very music video or song.
-
Copy the URL of the video or song from the address bar or the share option.
-
Open a third-party tool or website that can download YouTube videos or songs, such as Y2mate, 4K Video Downloader, MP3Juices, etc.
-
Paste the URL of the video or song into the tool or website and choose the format and quality you want.
-
Click on the download button and wait for the download to finish.
-
Find your downloaded file in your device's storage or downloads folder.
-
-
-
Spotify
-
Spotify is another popular platform where you can stream or download I.O.I's Very Very Very song, as well as other songs from their albums and playlists. However, if you want to download the song from Spotify, you also have two options:
-
-
Use Spotify Premium. This is a subscription service that allows you to download songs from Spotify and play them offline. You can sign up for a free trial or pay a monthly fee to use this service. To download I.O.I's Very Very Very song from Spotify Premium, you need to:
-
-
Open the Spotify app on your device and search for I.O.I's Very Very Very song or their Miss Me? album.
-
Tap on the heart icon next to the song or the album to add it to your library.
-
Go to your library and tap on the download toggle next to the song or the album.
-
Wait for the download to finish and then go to your library or downloads section to find your downloaded song or album.
-
-
Use third-party tools or websites. These are free or paid tools or websites that allow you to download songs from Spotify by copying and pasting their URLs. However, these tools or websites may not be safe, legal, or reliable. They may also have limited features, quality, or speed. To download I.O.I's Very Very Very song from third-party tools or websites, you need to:
-
-
Open the Spotify app or website on your device and search for I.O.I's Very Very Very song or their Miss Me? album.
-
Copy the URL of the song or the album from the address bar or the share option.
-
Open a third-party tool or website that can download Spotify songs, such as Sidify, TuneFab, AudFree, etc.
-
Paste the URL of the song or the album into the tool or website and choose the format and quality you want.
-
Click on the download button and wait for the download to finish.
-
Find your downloaded file in your device's storage or downloads folder.
-
-
Apple Music
-
Apple Music is another popular platform where you can stream or download I.O.I's Very Very Very song, as well as other songs from their albums and playlists. However, if you want to download the song from Apple Music, you also have two options:
-
-
-
Use Apple Music subscription. This is a subscription service that allows you to download songs from Apple Music and play them offline. You can sign up for a free trial or pay a monthly fee to use this service. To download I.O.I's Very Very Very song from Apple Music, you need to:
-
-
Open the Apple Music app on your device and search for I.O.I's Very Very Very song or their Miss Me? album.
-
Tap on the plus icon next to the song or the album to add it to your library.
-
Go to your library and tap on the cloud icon next to the song or the album.
-
Wait for the download to finish and then go to your library or downloads section to find your downloaded song or album.
-
-
Use third-party tools or websites. These are free or paid tools or websites that allow you to download songs from Apple Music by copying and pasting their URLs. However, these tools or websites may not be safe, legal, or reliable. They may also have limited features, quality, or speed. To download I.O.I's Very Very Very song from third-party tools or websites, you need to:
-
-
Open the Apple Music app or website on your device and search for I.O.I's Very Very Very song or their Miss Me? album.
-
Copy the URL of the song or the album from the address bar or the share option.
-
Open a third-party tool or website that can download Apple Music songs, such as NoteBurner, TunesKit, DRmare, etc.
-
Paste the URL of the song or the album into the tool or website and choose the format and quality you want.
-
Click on the download button and wait for the download to finish.
-
Find your downloaded file in your device's storage or downloads folder.
-
-
-
How to Enjoy the Song Offline
-
Now that you have downloaded I.O.I's Very Very Very song from your preferred platform, you can enjoy it offline anytime and anywhere. Here are some ways you can enjoy the song offline:
-
Transfer the Song to Your Devices
-
If you want to listen to the song on different devices, such as your phone, tablet, laptop, etc., you need to transfer the song from your original device to your other devices. There are several ways you can do this:
-
-
Use USB cables. You can connect your devices with USB cables and copy and paste the song file from one device to another. This is a simple and fast way to transfer files, but it may require different types of cables for different devices.
-
Use Bluetooth. You can pair your devices with Bluetooth and send and receive the song file wirelessly. This is a convenient and cordless way to transfer files, but it may take longer time and consume more battery.
-
Use cloud services. You can upload your song file to a cloud service, such as Google Drive, Dropbox, iCloud, etc., and then download it to your other devices. This is a secure and accessible way to transfer files, but it may require internet connection and storage space.
-
-
Play the Song with Your Favorite Music Player
-
If you want to listen to the song with your favorite music player, such as VLC, Winamp, iTunes, etc., you need to open the song file with your music player and enjoy its features and settings. Here are some tips you can follow:
-
-
Choose a music player that suits your preferences and needs. There are many music players available for different devices and platforms, each with its own advantages and disadvantages. You can compare their features, functions, compatibility, interface, etc., and choose one that meets your expectations.
-
Adjust the settings and features of the music player to enhance your listening experience. You can customize various aspects of your music player, such as volume, equalizer, playback mode, playlist, etc., to suit your mood and taste. You can also explore other features of your music player, such as lyrics display, visualizer, sound effects, etc., to make your listening more fun and enjoyable.
-
-
Sing Along with the Lyrics and Learn Some Korean Words
-
If you want to sing along with I.O.I's Very Very Very song and learn some Korean words from it, you need to find the lyrics of the song online or offline. You can use the following table to compare the sources of the lyrics and their features:

| Source | Features |
| ------ | -------- |
| [Color Coded Lyrics](^1^) | Provides the lyrics in Korean, Romanization, and English translation. Also provides the color codes for each member's parts and some background information about the song. |
| [Genius](^2^) | Provides the lyrics in Korean and English translation. Also provides some annotations, explanations, and trivia about the song. |
| [AZLyrics](^3^) | Provides the lyrics in English translation only. |

You can choose the source that suits your preference and needs, and then follow these steps to sing along with the lyrics and learn some Korean words:

- Open the source of the lyrics on your device and search for I.O.I's Very Very Very song.
- Play the song with your music player and follow the lyrics on your screen.
- Try to sing along with the song and pronounce the Korean words correctly. You can also use the Romanization or the English translation to help you understand the meaning of the words.
- Pay attention to some common or useful Korean words and phrases from the song, such as 너무 (very), 좋아하다 (to like), 말해줘 (tell me), 자꾸 (keep), 떠오르다 (to come to mind), 조심하다 (to be careful), etc. You can also use a dictionary or a translator to look up more words or phrases that interest you.
- Repeat the steps until you can sing along with the song confidently and learn some Korean words fluently.
Conclusion
-
In this article, we have shown you how to download I.O.I's Very Very Very song from different platforms, such as YouTube, Spotify, Apple Music, etc., and how to enjoy it offline, such as transferring it to your devices, playing it with your favorite music player, singing along with the lyrics, and learning some Korean words. We hope you have found this article helpful and informative, and that you have enjoyed listening to I.O.I's Very Very Very song.
-
I.O.I was a talented and charming girl group that left a lasting impression on many fans with their songs and performances. Although they have disbanded, their music lives on and can still bring joy and happiness to many listeners. If you are one of them, we encourage you to download and enjoy their Very Very Very song offline, as well as their other songs from their albums and playlists.
-
Thank you for reading this article. If you have any questions or feedback, please feel free to leave them in the comments section below. We would love to hear from you.
-
FAQs
-
Here are some frequently asked questions about I.O.I's Very Very Very song and how to download and enjoy it offline:
-
-
Q: When was I.O.I's Very Very Very song released?
-A: I.O.I's Very Very Very song was released on October 17, 2016 as the title track of their second mini-album Miss Me?
-
Q: Who composed I.O.I's Very Very Very song?
-A: I.O.I's Very Very Very song was composed by Park Jin-young, the founder of JYP Entertainment, who also produced other songs for I.O.I.
-
Q: How many members were in I.O.I?
-A: I.O.I had 11 members who were selected from different agencies through a survival reality show called Produce 101. They were Nayoung, Chungha, Sejeong, Chaeyeon, Kyulkyung, Sohye, Yeonjung, Yoojung, Mina, Doyeon, and Somi.
-
Q: Why did I.O.I disband?
-A: I.O.I disbanded in 2017 because they were a project group that had a limited contract period. The members returned to their original agencies and pursued their individual careers.
-
Q: Where can I find more songs by I.O.I?
-A: You can find more songs by I.O.I on various platforms, such as YouTube, Spotify, Apple Music, etc. You can also check out their albums and playlists, such as Chrysalis, Miss Me?, Whatta Man, etc.
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/components/markdown.tsx b/spaces/2023Liu2023/bingo/src/components/markdown.tsx
deleted file mode 100644
index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/markdown.tsx
+++ /dev/null
@@ -1,9 +0,0 @@
-import { FC, memo } from 'react'
-import ReactMarkdown, { Options } from 'react-markdown'
-
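-// Memoized ReactMarkdown wrapper: re-renders only when the markdown source (children) or className changes.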
-export const MemoizedReactMarkdown: FC<Options> = memo(
-  ReactMarkdown,
-  (prevProps, nextProps) =>
-    prevProps.children === nextProps.children &&
-    prevProps.className === nextProps.className
-)
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py
deleted file mode 100644
index 39ed74f29f7526d5149e4f0079a3681a3bac2582..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py
+++ /dev/null
@@ -1,265 +0,0 @@
-import os
-
-import torch
-import numpy as np
-from modules.hifigan.hifigan import HifiGanGenerator
-from vocoders.hifigan import HifiGAN
-from inference.svs.opencpop.map import cpop_pinyin2ph_func
-
-from utils import load_ckpt
-from utils.hparams import set_hparams, hparams
-from utils.text_encoder import TokenTextEncoder
-from pypinyin import pinyin, lazy_pinyin, Style
-import librosa
-import glob
-import re
-
-
-class BaseSVSInfer:
- def __init__(self, hparams, device=None):
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.hparams = hparams
- self.device = device
-
- phone_list = ["AP", "SP", "a", "ai", "an", "ang", "ao", "b", "c", "ch", "d", "e", "ei", "en", "eng", "er", "f", "g",
- "h", "i", "ia", "ian", "iang", "iao", "ie", "in", "ing", "iong", "iu", "j", "k", "l", "m", "n", "o",
- "ong", "ou", "p", "q", "r", "s", "sh", "t", "u", "ua", "uai", "uan", "uang", "ui", "un", "uo", "v",
- "van", "ve", "vn", "w", "x", "y", "z", "zh"]
- self.ph_encoder = TokenTextEncoder(None, vocab_list=phone_list, replace_oov=',')
- self.pinyin2phs = cpop_pinyin2ph_func()
- self.spk_map = {'opencpop': 0}
-
- self.model = self.build_model()
- self.model.eval()
- self.model.to(self.device)
- self.vocoder = self.build_vocoder()
- self.vocoder.eval()
- self.vocoder.to(self.device)
-
- def build_model(self):
- raise NotImplementedError
-
- def forward_model(self, inp):
- raise NotImplementedError
-
- def build_vocoder(self):
- base_dir = hparams['vocoder_ckpt']
- config_path = f'{base_dir}/config.yaml'
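- # Pick the HifiGAN checkpoint with the highest training-step number in the vocoder folder.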
- ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
- lambda x: int(re.findall(rf'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
- print('| load HifiGAN: ', ckpt)
- ckpt_dict = torch.load(ckpt, map_location="cpu")
- config = set_hparams(config_path, global_hparams=False)
- state = ckpt_dict["state_dict"]["model_gen"]
- vocoder = HifiGanGenerator(config)
- vocoder.load_state_dict(state, strict=True)
- vocoder.remove_weight_norm()
- vocoder = vocoder.eval().to(self.device)
- return vocoder
-
- def run_vocoder(self, c, **kwargs):
- c = c.transpose(2, 1) # [B, 80, T]
- f0 = kwargs.get('f0') # [B, T]
- if f0 is not None and hparams.get('use_nsf'):
- # f0 = torch.FloatTensor(f0).to(self.device)
- y = self.vocoder(c, f0).view(-1)
- else:
- y = self.vocoder(c).view(-1)
- # [T]
- return y[None]
-
- def preprocess_word_level_input(self, inp):
- # Pypinyin can't solve polyphonic words
- text_raw = inp['text'].replace('最长', '最常').replace('长睫毛', '常睫毛') \
- .replace('那么长', '那么常').replace('多长', '多常') \
- .replace('很长', '很常') # We hope someone could provide a better g2p module for us by opening pull requests.
-
- # lyric
- pinyins = lazy_pinyin(text_raw, strict=False)
- ph_per_word_lst = [self.pinyin2phs[pinyin.strip()] for pinyin in pinyins if pinyin.strip() in self.pinyin2phs]
-
- # Note
- note_per_word_lst = [x.strip() for x in inp['notes'].split('|') if x.strip() != '']
- mididur_per_word_lst = [x.strip() for x in inp['notes_duration'].split('|') if x.strip() != '']
-
- if len(note_per_word_lst) == len(ph_per_word_lst) == len(mididur_per_word_lst):
- print('Pass word-notes check.')
- else:
- print('The number of words doesn\'t match the number of notes\' windows. ',
- 'You should split the note(s) for each word by | mark.')
- print(ph_per_word_lst, note_per_word_lst, mididur_per_word_lst)
- print(len(ph_per_word_lst), len(note_per_word_lst), len(mididur_per_word_lst))
- return None
-
- note_lst = []
- ph_lst = []
- midi_dur_lst = []
- is_slur = []
- for idx, ph_per_word in enumerate(ph_per_word_lst):
- # for phs in one word:
- # single ph like ['ai'] or multiple phs like ['n', 'i']
- ph_in_this_word = ph_per_word.split()
-
- # for notes in one word:
- # single note like ['D4'] or multiple notes like ['D4', 'E4'] which means a 'slur' here.
- note_in_this_word = note_per_word_lst[idx].split()
- midi_dur_in_this_word = mididur_per_word_lst[idx].split()
- # process for the model input
- # Step 1.
- # Deal with note of 'not slur' case or the first note of 'slur' case
- # j ie
- # F#4/Gb4 F#4/Gb4
- # 0 0
- for ph in ph_in_this_word:
- ph_lst.append(ph)
- note_lst.append(note_in_this_word[0])
- midi_dur_lst.append(midi_dur_in_this_word[0])
- is_slur.append(0)
- # step 2.
- # Deal with the 2nd, 3rd... notes of 'slur' case
- # j ie ie
- # F#4/Gb4 F#4/Gb4 C#4/Db4
- # 0 0 1
- if len(note_in_this_word) > 1: # is_slur = True, we should repeat the YUNMU to match the 2nd, 3rd... notes.
- for idx in range(1, len(note_in_this_word)):
- ph_lst.append(ph_in_this_word[-1])
- note_lst.append(note_in_this_word[idx])
- midi_dur_lst.append(midi_dur_in_this_word[idx])
- is_slur.append(1)
- ph_seq = ' '.join(ph_lst)
-
- if len(ph_lst) == len(note_lst) == len(midi_dur_lst):
- print(len(ph_lst), len(note_lst), len(midi_dur_lst))
- print('Pass word-notes check.')
- else:
- print('The number of words doesn\'t match the number of notes\' windows. ',
- 'You should split the note(s) for each word by | mark.')
- return None
- return ph_seq, note_lst, midi_dur_lst, is_slur
-
- def preprocess_phoneme_level_input(self, inp):
- ph_seq = inp['ph_seq']
- note_lst = inp['note_seq'].split()
- midi_dur_lst = inp['note_dur_seq'].split()
- is_slur = [float(x) for x in inp['is_slur_seq'].split()]
- print(len(note_lst), len(ph_seq.split()), len(midi_dur_lst))
- if len(note_lst) == len(ph_seq.split()) == len(midi_dur_lst):
- print('Pass word-notes check.')
- else:
- print('The number of words doesn\'t match the number of notes\' windows. ',
- 'You should split the note(s) for each word by | mark.')
- return None
- return ph_seq, note_lst, midi_dur_lst, is_slur
-
- def preprocess_input(self, inp, input_type='word'):
- """
-
- :param inp: {'text': str, 'item_name': (str, optional), 'spk_name': (str, optional)}
- :return:
- """
-
- item_name = inp.get('item_name', '')
- spk_name = inp.get('spk_name', 'opencpop')
-
- # single spk
- spk_id = self.spk_map[spk_name]
-
- # get ph seq, note lst, midi dur lst, is slur lst.
- if input_type == 'word':
- ret = self.preprocess_word_level_input(inp)
- elif input_type == 'phoneme': # like transcriptions.txt in Opencpop dataset.
- ret = self.preprocess_phoneme_level_input(inp)
- else:
- print('Invalid input type.')
- return None
-
- if ret:
- ph_seq, note_lst, midi_dur_lst, is_slur = ret
- else:
- print('==========> Preprocess_word_level or phone_level input wrong.')
- return None
-
- # convert note lst to midi id; convert note dur lst to midi duration
- try:
- midis = [librosa.note_to_midi(x.split("/")[0]) if x != 'rest' else 0
- for x in note_lst]
- midi_dur_lst = [float(x) for x in midi_dur_lst]
- except Exception as e:
- print(e)
- print('Invalid Input Type.')
- return None
-
- ph_token = self.ph_encoder.encode(ph_seq)
- item = {'item_name': item_name, 'text': inp['text'], 'ph': ph_seq, 'spk_id': spk_id,
- 'ph_token': ph_token, 'pitch_midi': np.asarray(midis), 'midi_dur': np.asarray(midi_dur_lst),
- 'is_slur': np.asarray(is_slur), }
- item['ph_len'] = len(item['ph_token'])
- return item
-
- def input_to_batch(self, item):
- item_names = [item['item_name']]
- text = [item['text']]
- ph = [item['ph']]
- txt_tokens = torch.LongTensor(item['ph_token'])[None, :].to(self.device)
- txt_lengths = torch.LongTensor([txt_tokens.shape[1]]).to(self.device)
- spk_ids = torch.LongTensor(item['spk_id'])[None, :].to(self.device)
-
- pitch_midi = torch.LongTensor(item['pitch_midi'])[None, :hparams['max_frames']].to(self.device)
- midi_dur = torch.FloatTensor(item['midi_dur'])[None, :hparams['max_frames']].to(self.device)
- is_slur = torch.LongTensor(item['is_slur'])[None, :hparams['max_frames']].to(self.device)
-
- batch = {
- 'item_name': item_names,
- 'text': text,
- 'ph': ph,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'spk_ids': spk_ids,
- 'pitch_midi': pitch_midi,
- 'midi_dur': midi_dur,
- 'is_slur': is_slur
- }
- return batch
-
- def postprocess_output(self, output):
- return output
-
- def infer_once(self, inp):
- inp = self.preprocess_input(inp, input_type=inp['input_type'] if inp.get('input_type') else 'word')
- output = self.forward_model(inp)
- output = self.postprocess_output(output)
- return output
-
- @classmethod
- def example_run(cls, inp):
- from utils.audio import save_wav
- set_hparams(print_hparams=False)
- infer_ins = cls(hparams)
- out = infer_ins.infer_once(inp)
- os.makedirs('infer_out', exist_ok=True)
- save_wav(out, f'infer_out/example_out.wav', hparams['audio_sample_rate'])
-
-
-# if __name__ == '__main__':
- # debug
- # a = BaseSVSInfer(hparams)
- # a.preprocess_input({'text': '你 说 你 不 SP 懂 为 何 在 这 时 牵 手 AP',
- # 'notes': 'D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | rest | D#4/Eb4 | D4 | D4 | D4 | D#4/Eb4 | F4 | D#4/Eb4 | D4 | rest',
- # 'notes_duration': '0.113740 | 0.329060 | 0.287950 | 0.133480 | 0.150900 | 0.484730 | 0.242010 | 0.180820 | 0.343570 | 0.152050 | 0.266720 | 0.280310 | 0.633300 | 0.444590'
- # })
-
- # b = {
- # 'text': '小酒窝长睫毛AP是你最美的记号',
- # 'notes': 'C#4/Db4 | F#4/Gb4 | G#4/Ab4 | A#4/Bb4 F#4/Gb4 | F#4/Gb4 C#4/Db4 | C#4/Db4 | rest | C#4/Db4 | A#4/Bb4 | G#4/Ab4 | A#4/Bb4 | G#4/Ab4 | F4 | C#4/Db4',
- # 'notes_duration': '0.407140 | 0.376190 | 0.242180 | 0.509550 0.183420 | 0.315400 0.235020 | 0.361660 | 0.223070 | 0.377270 | 0.340550 | 0.299620 | 0.344510 | 0.283770 | 0.323390 | 0.360340'
- # }
- # c = {
- # 'text': '小酒窝长睫毛AP是你最美的记号',
- # 'ph_seq': 'x iao j iu w o ch ang ang j ie ie m ao AP sh i n i z ui m ei d e j i h ao',
- # 'note_seq': 'C#4/Db4 C#4/Db4 F#4/Gb4 F#4/Gb4 G#4/Ab4 G#4/Ab4 A#4/Bb4 A#4/Bb4 F#4/Gb4 F#4/Gb4 F#4/Gb4 C#4/Db4 C#4/Db4 C#4/Db4 rest C#4/Db4 C#4/Db4 A#4/Bb4 A#4/Bb4 G#4/Ab4 G#4/Ab4 A#4/Bb4 A#4/Bb4 G#4/Ab4 G#4/Ab4 F4 F4 C#4/Db4 C#4/Db4',
- # 'note_dur_seq': '0.407140 0.407140 0.376190 0.376190 0.242180 0.242180 0.509550 0.509550 0.183420 0.315400 0.315400 0.235020 0.361660 0.361660 0.223070 0.377270 0.377270 0.340550 0.340550 0.299620 0.299620 0.344510 0.344510 0.283770 0.283770 0.323390 0.323390 0.360340 0.360340',
- # 'is_slur_seq': '0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
- # } # input like Opencpop dataset.
- # a.preprocess_input(b)
- # a.preprocess_input(c, input_type='phoneme')
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py
deleted file mode 100644
index a6db1a5f6a963dee1736aa7ad4af2310b43b3a51..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py
+++ /dev/null
@@ -1,67 +0,0 @@
-from __future__ import annotations
-import asyncio
-from colorama import Fore
-
-from typing import TYPE_CHECKING, List
-
-from . import decision_maker_registry
-from .base import BaseDecisionMaker
-from agentverse.logging import logger
-
-from agentverse.message import Message
-
-if TYPE_CHECKING:
- from agentverse.agents.base import BaseAgent
- from agentverse.message import CriticMessage
-
-
-@decision_maker_registry.register("brainstorming")
-class BrainstormingDecisionMaker(BaseDecisionMaker):
- """
- Much like the horizontal decision maker, but with some twists:
- (1) Solver acts as a summarizer, summarizing the discussion of this turn
-    (2) After summarizing, all the agents' memories are cleared and replaced with
-        the summary (to avoid exceeding the model's maximum context length too quickly)
- """
-
- name: str = "brainstorming"
-
- async def astep(
- self,
- agents: List[BaseAgent],
- task_description: str,
- previous_plan: str = "No solution yet.",
- advice: str = "No advice yet.",
- *args,
- **kwargs,
- ) -> List[str]:
- if advice != "No advice yet.":
- self.broadcast_messages(
- agents, [Message(content=advice, sender="Evaluator")]
- )
- for agent in agents[1:]:
- review: CriticMessage = await agent.astep(
- previous_plan, advice, task_description
- )
- if review.content != "":
- self.broadcast_messages(agents, [review])
-
- logger.info("", "Reviews:", Fore.YELLOW)
- logger.info(
- "",
- f"[{review.sender}]: {review.content}",
- Fore.YELLOW,
- )
-
- result = agents[0].step(previous_plan, advice, task_description)
- for agent in agents:
- agent.memory.reset()
- self.broadcast_messages(
- agents,
- [
- Message(
- content=result.content, sender="Summary From Previous Discussion"
- )
- ],
- )
- return [result]
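Note: the BrainstormingDecisionMaker above follows a summarize-then-reset pattern: the critics broadcast reviews, the first agent (the solver) produces a summary, and every agent's memory is replaced by that summary so the context stays bounded. The sketch below is a framework-free illustration of that pattern; ToyAgent and brainstorm_round are made-up names, not AgentVerse APIs.

# Minimal sketch of the summarize-then-reset pattern used by the
# brainstorming decision maker above. ToyAgent and brainstorm_round are
# illustrative stand-ins, not AgentVerse APIs.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToyAgent:
    name: str
    memory: List[str] = field(default_factory=list)

    def speak(self, topic: str) -> str:
        # A real agent would call an LLM here; we just echo a canned idea.
        return f"{self.name}'s idea about {topic}"


def brainstorm_round(agents: List[ToyAgent], topic: str) -> str:
    # 1) Every critic contributes, and each contribution is broadcast to all.
    reviews = []
    for agent in agents[1:]:
        review = agent.speak(topic)
        reviews.append(review)
        for a in agents:
            a.memory.append(review)

    # 2) The first agent acts as the solver/summarizer.
    summary = f"Summary of {len(reviews)} reviews on {topic}"

    # 3) Replace every memory with the summary so context stays bounded.
    for a in agents:
        a.memory = [summary]
    return summary


if __name__ == "__main__":
    team = [ToyAgent("solver"), ToyAgent("critic_1"), ToyAgent("critic_2")]
    print(brainstorm_round(team, "reduce cloud costs"))
    print(team[1].memory)  # ['Summary of 2 reviews on reduce cloud costs']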
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js
deleted file mode 100644
index deb49535fdbb2ea02abf872ffb42a229c64c6eb3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js
+++ /dev/null
@@ -1,112 +0,0 @@
-import AddChild from '../basesizer/utils/AddChild.js';
-import GetBoundsConfig from '../utils/GetBoundsConfig.js';
-import ALIGNMODE from '../utils/AlignConst.js';
-
-const IsPlainObject = Phaser.Utils.Objects.IsPlainObject;
-const GetValue = Phaser.Utils.Objects.GetValue;
-const ALIGN_CENTER = Phaser.Display.Align.CENTER;
-
-
-var GetEmptyCellIndex = function (columnIndex, rowIndex, cells, columnCount, rowCount) {
- if ((typeof (columnIndex) === 'number') || (typeof (rowIndex) === 'number')) {
- if (columnIndex === undefined) {
- var idx;
- for (var i = 0; i < columnCount; i++) {
- idx = (rowIndex * columnCount) + i;
- if (!cells[idx]) {
- return idx;
- }
- }
- } else if (rowIndex === undefined) {
- var idx;
- for (var i = 0; i < rowCount; i++) {
- idx = (i * columnCount) + columnIndex;
- if (!cells[idx]) {
- return idx;
- }
- }
- } else {
- var idx = (rowIndex * columnCount) + columnIndex;
- if (!cells[idx]) {
- return idx;
- }
- }
-
- } else if (rowIndex === true) {
- var idx;
- for (var i = 0; i < columnCount; i++) {
- for (var j = 0; j < rowCount; j++) {
- idx = (j * columnCount) + i;
- if (!cells[idx]) {
- return idx;
- }
- }
- }
- } else {
- for (var i = 0, cnt = cells.length; i < cnt; i++) {
- if (!cells[i]) {
- return i;
- }
- }
- }
- return null;
-}
-
-var Add = function (gameObject, columnIndex, rowIndex, align, paddingConfig, expand, childKey) {
- AddChild.call(this, gameObject);
- if (IsPlainObject(columnIndex)) {
- var config = columnIndex;
- columnIndex = GetValue(config, 'column', undefined);
- rowIndex = GetValue(config, 'row', undefined);
- align = GetValue(config, 'align', ALIGN_CENTER);
- paddingConfig = GetValue(config, 'padding', 0);
- expand = GetValue(config, 'expand', false);
- childKey = GetValue(config, 'key', undefined);
- }
-
- // Get insert index
- var itemIndex = GetEmptyCellIndex(columnIndex, rowIndex, this.sizerChildren, this.columnCount, this.rowCount);
- if (itemIndex === null) {
- // Specific index mode
- if ((typeof (columnIndex) === 'number') && (typeof (rowIndex) === 'number')) {
- return this;
- }
-
- if ((rowIndex === true) || (typeof (rowIndex) === 'number')) {
- this.addEmptyColumn();
- } else {
- this.addEmptyRow();
- }
-
- // Get insert index again
- itemIndex = GetEmptyCellIndex(columnIndex, rowIndex, this.sizerChildren, this.columnCount, this.rowCount);
- }
-
- if (typeof (align) === 'string') {
- align = ALIGNMODE[align];
- }
- if (align === undefined) {
- align = ALIGN_CENTER;
- }
- if (paddingConfig === undefined) {
- paddingConfig = 0;
- }
- if (expand === undefined) {
- expand = true;
- }
-
- var config = this.getSizerConfig(gameObject);
- config.align = align;
- config.padding = GetBoundsConfig(paddingConfig);
- config.expand = expand;
- this.sizerChildren[itemIndex] = gameObject;
-
- if (childKey !== undefined) {
- this.addChildrenMap(childKey, gameObject)
- }
- return this;
-}
-
-export default {
- add: Add
-}
\ No newline at end of file
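Note: GetEmptyCellIndex above searches a flattened, row-major cell array for the first free slot, optionally constrained to a given column, a given row, or a column-major scan. The Python transcription below mirrors that lookup logic for readability (Python is used for consistency with the rest of this dump); it is an illustration, not part of the plugin.

# Python transcription of the empty-cell lookup from AddChildMethods.js above.
# Cells are stored row-major: index = row * column_count + column, and an
# empty slot is represented here by None.
def get_empty_cell_index(column, row, cells, column_count, row_count):
    """Return the index of the first empty cell, or None if there is none.

    column and row may be ints, row may also be True (column-major scan),
    and both may be None (plain storage-order scan).
    """
    col_is_num = isinstance(column, int) and not isinstance(column, bool)
    row_is_num = isinstance(row, int) and not isinstance(row, bool)

    if col_is_num or row_is_num:
        if not col_is_num:                   # fixed row: find a free column
            for c in range(column_count):
                idx = row * column_count + c
                if cells[idx] is None:
                    return idx
        elif not row_is_num:                 # fixed column: find a free row
            for r in range(row_count):
                idx = r * column_count + column
                if cells[idx] is None:
                    return idx
        else:                                # both fixed: that exact cell or nothing
            idx = row * column_count + column
            if cells[idx] is None:
                return idx
    elif row is True:                        # column-major scan
        for c in range(column_count):
            for r in range(row_count):
                idx = r * column_count + c
                if cells[idx] is None:
                    return idx
    else:                                    # storage-order scan
        for i, cell in enumerate(cells):
            if cell is None:
                return i
    return None


if __name__ == "__main__":
    grid = ["a", None, "b", None, None, None]            # 2 rows x 3 columns
    print(get_empty_cell_index(None, None, grid, 3, 2))  # 1
    print(get_empty_cell_index(1, None, grid, 3, 2))     # 1 (first free cell in column 1)
    print(get_empty_cell_index(None, 1, grid, 3, 2))     # 3 (first free cell in row 1)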
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js
deleted file mode 100644
index e182045fa63b8a22d7195f48a411d1ed3abb2478..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import LineProgressCanvas from './LineProgressCanvas.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('lineProgressCanvas', function (x, y, width, height, barColor, value, config) {
- var gameObject = new LineProgressCanvas(this.scene, x, y, width, height, barColor, value, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.LineProgressCanvas', LineProgressCanvas);
-
-export default LineProgressCanvas;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js
deleted file mode 100644
index c891bac21de5bd16ec694c5f6abb9eaaf3280751..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js
+++ /dev/null
@@ -1,18 +0,0 @@
-import GetOrientationMode from '../../utils/GetOrientationMode.js';
-var ParseEaseConfig = function (menu, easeConfig) {
- if (typeof (easeConfig) === 'number') {
- easeConfig = {
- duration: easeConfig
- };
- }
-
- if (easeConfig.hasOwnProperty('orientation') && (easeConfig.orientation !== undefined)) {
- easeConfig.sameOrientation = GetOrientationMode(easeConfig.orientation) === menu.orientation;
- } else {
- easeConfig.sameOrientation = true;
- }
- easeConfig.destroy = false;
- return easeConfig;
-}
-
-export default ParseEaseConfig;
\ No newline at end of file
diff --git a/spaces/Akshat-1812/Dog-Vision/README.md b/spaces/Akshat-1812/Dog-Vision/README.md
deleted file mode 100644
index 633232a1cf41801385786f8af13211cfa49dae52..0000000000000000000000000000000000000000
--- a/spaces/Akshat-1812/Dog-Vision/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dog Vision
-emoji: 📉
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py
deleted file mode 100644
index 4175089a54a8484d51e6c879c1a99c4e4d961d15..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from saicinpainting.training.visualizers.base import BaseVisualizer
-
-
-class NoopVisualizer(BaseVisualizer):
- def __init__(self, *args, **kwargs):
- pass
-
- def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None):
- pass
diff --git a/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py b/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py
deleted file mode 100644
index af37444df6200a4202a9675bcda7d6f9e82be170..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import glob
-import os
-import random
-import re
-import string
-
-import gradio as gr
-
-import openai
-from icrawler import ImageDownloader
-from icrawler.builtin import GoogleImageCrawler, BingImageCrawler
-from uuid import uuid4
-from pptx import Presentation
-
-bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in
- range(16))
-
-
-def refresh_bad_coding_practice():
- global bad_coding_practice
- bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits)
- for _ in range(16))
- return
-
-
-class PrefixNameDownloader(ImageDownloader):
-
- def get_filename(self, task, default_ext):
- filename = super(PrefixNameDownloader, self).get_filename(
- task, default_ext)
- print(bad_coding_practice)
- return 'prefix_' + bad_coding_practice + filename
-
-
-def generate_ppt(file, topic, slide_length, api_key):
- print(file.name)
-
- root = Presentation(file.name)
-
- openai.api_key = api_key
-
- message = f"""
- Create content for a slideshow presentation.
- The content's topic is {topic}.
- The slideshow is {slide_length} slides long.
- The content is written in the language of the content I give you above.
-
-
- You are allowed to use the following slide types:
-
- Slide types:
- Title Slide - (Title, Subtitle)
- Content Slide - (Title, Content)
- Image Slide - (Title, Content, Image)
- Thanks Slide - (Title)
-
- Put this tag before the Title Slide: [L_TS]
- Put this tag before the Content Slide: [L_CS]
- Put this tag before the Image Slide: [L_IS]
- Put this tag before the Thanks Slide: [L_THS]
-
- Put "[SLIDEBREAK]" after each slide
-
- For example:
- [L_TS]
- [TITLE]Mental Health[/TITLE]
-
- [SLIDEBREAK]
-
- [L_CS]
- [TITLE]Mental Health Definition[/TITLE]
- [CONTENT]
- 1. Definition: A person’s condition with regard to their psychological and emotional well-being
- 2. Can impact one's physical health
- 3. Stigmatized too often.
- [/CONTENT]
-
- [SLIDEBREAK]
-
- Put this tag before the Title: [TITLE]
- Put this tag after the Title: [/TITLE]
-    Put this tag before the Subtitle: [SUBTITLE]
- Put this tag after the Subtitle: [/SUBTITLE]
- Put this tag before the Content: [CONTENT]
- Put this tag after the Content: [/CONTENT]
- Put this tag before the Image: [IMAGE]
- Put this tag after the Image: [/IMAGE]
-
- Elaborate on the Content, provide as much information as possible.
- You put a [/CONTENT] at the end of the Content.
- Do not reply as if you are talking about the slideshow itself. (ex. "Include pictures here about...")
- Do not include any special characters (?, !, ., :, ) in the Title.
- Do not include any additional information in your response and stick to the format."""
-
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "user", "content": message}
- ]
- )
-
- # """ Ref for slide types:
- # 0 -> title and subtitle
- # 1 -> title and content
- # 2 -> section header
- # 3 -> two content
- # 4 -> Comparison
- # 5 -> Title only
- # 6 -> Blank
- # 7 -> Content with caption
- # 8 -> Pic with caption
- # """
-
- def delete_all_slides():
- for i in range(len(root.slides) - 1, -1, -1):
- r_id = root.slides._sldIdLst[i].rId
- root.part.drop_rel(r_id)
- del root.slides._sldIdLst[i]
-
- def create_title_slide(title, subtitle):
- layout = root.slide_layouts[0]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[1].text = subtitle
-
- def create_section_header_slide(title):
- layout = root.slide_layouts[2]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
-
- def create_title_and_content_slide(title, content):
- layout = root.slide_layouts[1]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[1].text = content
-
- def create_title_and_content_and_image_slide(title, content, image_query):
- layout = root.slide_layouts[8]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[2].text = content
- refresh_bad_coding_practice()
-        image_crawler = GoogleImageCrawler(downloader_cls=PrefixNameDownloader, storage={'root_dir': os.getcwd()})
-        image_crawler.crawl(keyword=image_query, max_num=1)
- dir_path = os.path.dirname(os.path.realpath(__file__))
- file_name = glob.glob(f"prefix_{bad_coding_practice}*")
- print(file_name)
- img_path = os.path.join(dir_path, file_name[0])
- slide.shapes.add_picture(img_path, slide.placeholders[1].left, slide.placeholders[1].top,
- slide.placeholders[1].width, slide.placeholders[1].height)
-
- def find_text_in_between_tags(text, start_tag, end_tag):
- start_pos = text.find(start_tag)
- end_pos = text.find(end_tag)
- result = []
- while start_pos > -1 and end_pos > -1:
- text_between_tags = text[start_pos + len(start_tag):end_pos]
- result.append(text_between_tags)
- start_pos = text.find(start_tag, end_pos + len(end_tag))
- end_pos = text.find(end_tag, start_pos)
- res1 = "".join(result)
- res2 = re.sub(r"\[IMAGE\].*?\[/IMAGE\]", '', res1)
- if len(result) > 0:
- return res2
- else:
- return ""
-
- def search_for_slide_type(text):
- tags = ["[L_TS]", "[L_CS]", "[L_IS]", "[L_THS]"]
- found_text = next((s for s in tags if s in text), None)
- return found_text
-
- def parse_response(reply):
- list_of_slides = reply.split("[SLIDEBREAK]")
- for slide in list_of_slides:
- slide_type = search_for_slide_type(slide)
- if slide_type == "[L_TS]":
- create_title_slide(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"),
- find_text_in_between_tags(str(slide), "[SUBTITLE]", "[/SUBTITLE]"))
- elif slide_type == "[L_CS]":
- create_title_and_content_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")),
- "".join(find_text_in_between_tags(str(slide), "[CONTENT]",
- "[/CONTENT]")))
- elif slide_type == "[L_IS]":
- create_title_and_content_and_image_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]",
- "[/TITLE]")),
- "".join(find_text_in_between_tags(str(slide), "[CONTENT]",
- "[/CONTENT]")),
- "".join(find_text_in_between_tags(str(slide), "[IMAGE]",
- "[/IMAGE]")))
- elif slide_type == "[L_THS]":
- create_section_header_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")))
-
- def find_title():
- return root.slides[0].shapes.title.text
-
- delete_all_slides()
-
- print(response)
-
- parse_response(response['choices'][0]['message']['content'])
-
- name_ = str(uuid4()).replace('-', '')
-
- root.save(f"./{name_}.pptx")
-
- print("done")
-
- dir_path = "./"
- prefix = "prefix_"
-
- for file_name in os.listdir(dir_path):
- if file_name.startswith(prefix):
- file_path = os.path.join(dir_path, file_name)
- if os.path.isfile(file_path):
- os.remove(file_path)
-
- return f"./{name_}.pptx"
-
-
-with gr.Blocks(title="ChatGPT PPT框架生成") as demo:
- gr.Markdown("""
-    ChatGPT PPT框架生成
-    """)
- with gr.Row():
- with gr.Column():
- openai_token = gr.Textbox(label="OpenAI API Key")
- topic = gr.Textbox(label="PPT的主题或内容")
- length = gr.Slider(minimum=1, maximum=20, value=6, label="生成的PPT页数", step=1)
- theme = gr.File(value="./theme.pptx", file_types=['pptx', 'ppt'], label="PPT模版")
- output_file = gr.File(interactive=False)
-
- topic.submit(
- fn=generate_ppt,
- inputs=[theme, topic, length, openai_token],
- outputs=[output_file]
- )
-
- submit = gr.Button("生成")
- submit.click(
- fn=generate_ppt,
- inputs=[theme, topic, length, openai_token],
- outputs=[output_file]
- )
-
-if __name__ == "__main__":
- demo.launch()
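Note: the deleted app builds a prompt that asks the model to emit slides separated by [SLIDEBREAK] and wrapped in [TITLE]/[SUBTITLE]/[CONTENT] tags, then recovers each field with substring searches. The standalone sketch below shows the same parsing step with a regular expression; the sample reply is invented and the helper names are not taken from app.py.

# Standalone sketch of the tag-based reply parsing used by the PPT app above.
# The sample reply is made up; the tag names match the prompt in app.py.
import re

SAMPLE_REPLY = """[L_TS]
[TITLE]Mental Health[/TITLE]
[SUBTITLE]Why it matters[/SUBTITLE]
[SLIDEBREAK]
[L_CS]
[TITLE]Definition[/TITLE]
[CONTENT]1. Psychological well-being
2. Affects physical health[/CONTENT]
[SLIDEBREAK]"""


def extract(tag: str, text: str) -> str:
    """Return the text between [TAG] and [/TAG], or '' if absent."""
    match = re.search(rf"\[{tag}\](.*?)\[/{tag}\]", text, flags=re.DOTALL)
    return match.group(1).strip() if match else ""


def parse_reply(reply: str):
    slides = []
    for chunk in reply.split("[SLIDEBREAK]"):
        layout = next((t for t in ("[L_TS]", "[L_CS]", "[L_IS]", "[L_THS]") if t in chunk), None)
        if layout is None:
            continue
        slides.append({
            "layout": layout,
            "title": extract("TITLE", chunk),
            "subtitle": extract("SUBTITLE", chunk),
            "content": extract("CONTENT", chunk),
        })
    return slides


if __name__ == "__main__":
    for slide in parse_reply(SAMPLE_REPLY):
        print(slide["layout"], "-", slide["title"])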
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py
deleted file mode 100644
index 46ae777cc97af41a531cba4e5d1ff31f2efcb468..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "mbf"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 0.1
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 2e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py
deleted file mode 100644
index 85fa2f5d73a896e09d7b1f72202d0a100eaca821..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py
+++ /dev/null
@@ -1,167 +0,0 @@
-_base_ = '../_base_/default_runtime.py'
-
-# model settings
-model = dict(
- type='RetinaNet',
- pretrained='open-mmlab://detectron2/resnet101_caffe',
- backbone=dict(
- type='ResNet',
- depth=101,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs=True,
- num_outs=5),
- bbox_head=dict(
- type='GARetinaHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- approx_anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=4,
- scales_per_octave=3,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- square_anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- scales=[4],
- strides=[8, 16, 32, 64, 128]),
- anchor_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loc_filter_thr=0.01,
- loss_loc=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)))
-# training and testing settings
-train_cfg = dict(
- ga_assigner=dict(
- type='ApproxMaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0.4,
- ignore_iof_thr=-1),
- ga_sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.0,
- ignore_iof_thr=-1),
- allowed_border=-1,
- pos_weight=-1,
- center_ratio=0.2,
- ignore_ratio=0.5,
- debug=False)
-test_cfg = dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 480), (1333, 960)],
- keep_ratio=True,
- multiscale_mode='range'),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-evaluation = dict(interval=1, metric='bbox')
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=1.0 / 3,
- step=[16, 22])
-checkpoint_config = dict(interval=1)
-# yapf:disable
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- # dict(type='TensorboardLoggerHook')
- ])
-# yapf:enable
-# runtime settings
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py
deleted file mode 100644
index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='ASPPHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dilations=(1, 12, 24, 36),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 1056ad4d1e2a4f956d12f6daf506620fab27dd17..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3plus_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
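Note: configs like the one above stay short because they only list the keys they override; everything else is merged in from the files named in _base_. The snippet below is a simplified stand-in for that merge behaviour (it is not mmcv's actual Config implementation), just to make the inherit-and-override idea easy to see.

# Simplified stand-in for the _base_ merge behaviour of mmcv-style configs.
# A child config (like the VOC variant above) only lists the keys it
# overrides; everything else is inherited from its base files.
def merge_config(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)   # recurse into nested dicts
        else:
            merged[key] = value                              # plain override
    return merged


base = {
    "model": {"decode_head": {"num_classes": 19, "channels": 512}},
    "optimizer": {"type": "SGD", "lr": 0.01},
}
voc_override = {"model": {"decode_head": {"num_classes": 21}}}

cfg = merge_config(base, voc_override)
print(cfg["model"]["decode_head"])   # {'num_classes': 21, 'channels': 512}
print(cfg["optimizer"]["lr"])        # 0.01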
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py
deleted file mode 100644
index eaf569d4d76af2e498c039899c01f9960b1158d9..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py'
-# fp16 settings
-optimizer_config = dict(type='Fp16OptimizerHook', loss_scale=512.)
-# fp16 placeholder
-fp16 = dict()
diff --git a/spaces/Apex-X/GODROOP/roop/processors/__init__.py b/spaces/Apex-X/GODROOP/roop/processors/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Apex-X/ROOPOK/roop/metadata.py b/spaces/Apex-X/ROOPOK/roop/metadata.py
deleted file mode 100644
index aea9e16d897ede57f566ccc773d0d2ee17905dfb..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/ROOPOK/roop/metadata.py
+++ /dev/null
@@ -1,2 +0,0 @@
-name = 'roop'
-version = '1.3.2'
diff --git a/spaces/ArcanAlt/arcanDream/README.md b/spaces/ArcanAlt/arcanDream/README.md
deleted file mode 100644
index d82ee8d9d75a65ba4810f04d0f9cf2c771b44f36..0000000000000000000000000000000000000000
--- a/spaces/ArcanAlt/arcanDream/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ArcanDream
-emoji: 💻
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py
deleted file mode 100644
index 72aa5bfd4b60d8e6ef6ed0cf2ae4f763d12195cc..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright 2016 Étienne Bersac
-# Copyright 2016 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import time
-import typing
-
-if typing.TYPE_CHECKING:
- import threading
-
-
-def sleep(seconds: float) -> None:
- """
- Sleep strategy that delays execution for a given number of seconds.
-
- This is the default strategy, and may be mocked out for unit testing.
- """
- time.sleep(seconds)
-
-
-class sleep_using_event:
- """Sleep strategy that waits on an event to be set."""
-
- def __init__(self, event: "threading.Event") -> None:
- self.event = event
-
- def __call__(self, timeout: typing.Optional[float]) -> None:
- # NOTE(harlowja): this may *not* actually wait for timeout
-        # seconds if the event is set (i.e. it may return early).
- self.event.wait(timeout=timeout)
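Note: the nap module above defines pluggable sleep strategies: a plain time.sleep wrapper that tests can mock out, and an event-based variant that can be interrupted before the timeout elapses. The sketch below shows the dependency-injection idea in isolation; wait_then_call and FakeSleep are illustrative names, not tenacity APIs.

# Sketch of the pluggable sleep-strategy idea from tenacity's nap module above.
# wait_then_call and FakeSleep are illustrative names, not tenacity APIs.
import time
from typing import Callable, List


def wait_then_call(action: Callable[[], str],
                   delay: float,
                   sleep: Callable[[float], None] = time.sleep) -> str:
    """Sleep via the injected strategy, then run the action."""
    sleep(delay)
    return action()


class FakeSleep:
    """Test double that records requested delays instead of sleeping."""

    def __init__(self) -> None:
        self.calls: List[float] = []

    def __call__(self, seconds: float) -> None:
        self.calls.append(seconds)


if __name__ == "__main__":
    fake = FakeSleep()
    result = wait_then_call(lambda: "done", delay=5.0, sleep=fake)
    print(result)        # done (returned immediately)
    print(fake.calls)    # [5.0] -- the 5 s pause was recorded, not actually slept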
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py
deleted file mode 100644
index ea38bef1f661e62d577b3c2207386d901d851c72..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .more import * # noqa
-from .recipes import * # noqa
-
-__version__ = '8.12.0'
diff --git a/spaces/Audio-AGI/AudioSep/utils.py b/spaces/Audio-AGI/AudioSep/utils.py
deleted file mode 100644
index abfb28500aa2c7f7cf395a869245d4c2061f9ca5..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/utils.py
+++ /dev/null
@@ -1,384 +0,0 @@
-import os
-import datetime
-import json
-import logging
-import librosa
-import pickle
-from typing import Dict
-import numpy as np
-import torch
-import torch.nn as nn
-import yaml
-from models.audiosep import AudioSep, get_model_class
-
-
-def ignore_warnings():
- import warnings
- # Ignore UserWarning from torch.meshgrid
- warnings.filterwarnings('ignore', category=UserWarning, module='torch.functional')
-
- # Refined regex pattern to capture variations in the warning message
- pattern = r"Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: \['lm_head\..*'\].*"
- warnings.filterwarnings('ignore', message=pattern)
-
-
-
-def create_logging(log_dir, filemode):
- os.makedirs(log_dir, exist_ok=True)
- i1 = 0
-
- while os.path.isfile(os.path.join(log_dir, "{:04d}.log".format(i1))):
- i1 += 1
-
- log_path = os.path.join(log_dir, "{:04d}.log".format(i1))
- logging.basicConfig(
- level=logging.DEBUG,
- format="%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s",
- datefmt="%a, %d %b %Y %H:%M:%S",
- filename=log_path,
- filemode=filemode,
- )
-
- # Print to console
- console = logging.StreamHandler()
- console.setLevel(logging.INFO)
- formatter = logging.Formatter("%(name)-12s: %(levelname)-8s %(message)s")
- console.setFormatter(formatter)
- logging.getLogger("").addHandler(console)
-
- return logging
-
-
-def float32_to_int16(x: float) -> int:
- x = np.clip(x, a_min=-1, a_max=1)
- return (x * 32767.0).astype(np.int16)
-
-
-def int16_to_float32(x: int) -> float:
- return (x / 32767.0).astype(np.float32)
-
-
-def parse_yaml(config_yaml: str) -> Dict:
- r"""Parse yaml file.
-
- Args:
- config_yaml (str): config yaml path
-
- Returns:
- yaml_dict (Dict): parsed yaml file
- """
-
- with open(config_yaml, "r") as fr:
- return yaml.load(fr, Loader=yaml.FullLoader)
-
-
-def get_audioset632_id_to_lb(ontology_path: str) -> Dict:
- r"""Get AudioSet 632 classes ID to label mapping."""
-
- audioset632_id_to_lb = {}
-
- with open(ontology_path) as f:
- data_list = json.load(f)
-
- for e in data_list:
- audioset632_id_to_lb[e["id"]] = e["name"]
-
- return audioset632_id_to_lb
-
-
-def load_pretrained_panns(
- model_type: str,
- checkpoint_path: str,
- freeze: bool
-) -> nn.Module:
- r"""Load pretrained pretrained audio neural networks (PANNs).
-
- Args:
- model_type: str, e.g., "Cnn14"
- checkpoint_path, str, e.g., "Cnn14_mAP=0.431.pth"
- freeze: bool
-
- Returns:
- model: nn.Module
- """
-
- if model_type == "Cnn14":
- Model = Cnn14
-
- elif model_type == "Cnn14_DecisionLevelMax":
- Model = Cnn14_DecisionLevelMax
-
- else:
- raise NotImplementedError
-
- model = Model(sample_rate=32000, window_size=1024, hop_size=320,
- mel_bins=64, fmin=50, fmax=14000, classes_num=527)
-
- if checkpoint_path:
- checkpoint = torch.load(checkpoint_path, map_location="cpu")
- model.load_state_dict(checkpoint["model"])
-
- if freeze:
- for param in model.parameters():
- param.requires_grad = False
-
- return model
-
-
-def energy(x):
- return torch.mean(x ** 2)
-
-
-def magnitude_to_db(x):
- eps = 1e-10
- return 20. * np.log10(max(x, eps))
-
-
-def db_to_magnitude(x):
- return 10. ** (x / 20)
-
-
-def ids_to_hots(ids, classes_num, device):
- hots = torch.zeros(classes_num).to(device)
- for id in ids:
- hots[id] = 1
- return hots
-
-
-def calculate_sdr(
- ref: np.ndarray,
- est: np.ndarray,
- eps=1e-10
-) -> float:
- r"""Calculate SDR between reference and estimation.
-
- Args:
- ref (np.ndarray), reference signal
- est (np.ndarray), estimated signal
- """
- reference = ref
- noise = est - reference
-
-
- numerator = np.clip(a=np.mean(reference ** 2), a_min=eps, a_max=None)
-
- denominator = np.clip(a=np.mean(noise ** 2), a_min=eps, a_max=None)
-
- sdr = 10. * np.log10(numerator / denominator)
-
- return sdr
-
-
-def calculate_sisdr(ref, est):
- r"""Calculate SDR between reference and estimation.
-
- Args:
- ref (np.ndarray), reference signal
- est (np.ndarray), estimated signal
- """
-
- eps = np.finfo(ref.dtype).eps
-
- reference = ref.copy()
- estimate = est.copy()
-
- reference = reference.reshape(reference.size, 1)
- estimate = estimate.reshape(estimate.size, 1)
-
- Rss = np.dot(reference.T, reference)
- # get the scaling factor for clean sources
- a = (eps + np.dot(reference.T, estimate)) / (Rss + eps)
-
- e_true = a * reference
- e_res = estimate - e_true
-
- Sss = (e_true**2).sum()
- Snn = (e_res**2).sum()
-
- sisdr = 10 * np.log10((eps+ Sss)/(eps + Snn))
-
- return sisdr
-
-
-class StatisticsContainer(object):
- def __init__(self, statistics_path):
- self.statistics_path = statistics_path
-
- self.backup_statistics_path = "{}_{}.pkl".format(
- os.path.splitext(self.statistics_path)[0],
- datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"),
- )
-
- self.statistics_dict = {"balanced_train": [], "test": []}
-
- def append(self, steps, statistics, split, flush=True):
- statistics["steps"] = steps
- self.statistics_dict[split].append(statistics)
-
- if flush:
- self.flush()
-
- def flush(self):
- pickle.dump(self.statistics_dict, open(self.statistics_path, "wb"))
- pickle.dump(self.statistics_dict, open(self.backup_statistics_path, "wb"))
- logging.info(" Dump statistics to {}".format(self.statistics_path))
- logging.info(" Dump statistics to {}".format(self.backup_statistics_path))
-
-
-def get_mean_sdr_from_dict(sdris_dict):
- mean_sdr = np.nanmean(list(sdris_dict.values()))
- return mean_sdr
-
-
-def remove_silence(audio: np.ndarray, sample_rate: int) -> np.ndarray:
- r"""Remove silent frames."""
- window_size = int(sample_rate * 0.1)
- threshold = 0.02
-
- frames = librosa.util.frame(x=audio, frame_length=window_size, hop_length=window_size).T
- # shape: (frames_num, window_size)
-
- new_frames = get_active_frames(frames, threshold)
- # shape: (new_frames_num, window_size)
-
- new_audio = new_frames.flatten()
- # shape: (new_audio_samples,)
-
- return new_audio
-
-
-def get_active_frames(frames: np.ndarray, threshold: float) -> np.ndarray:
- r"""Get active frames."""
-
- energy = np.max(np.abs(frames), axis=-1)
- # shape: (frames_num,)
-
- active_indexes = np.where(energy > threshold)[0]
- # shape: (new_frames_num,)
-
- new_frames = frames[active_indexes]
-    # shape: (new_frames_num, window_size)
-
- return new_frames
-
-
-def repeat_to_length(audio: np.ndarray, segment_samples: int) -> np.ndarray:
- r"""Repeat audio to length."""
-
- repeats_num = (segment_samples // audio.shape[-1]) + 1
- audio = np.tile(audio, repeats_num)[0 : segment_samples]
-
- return audio
-
-def calculate_segmentwise_sdr(ref, est, hop_samples, return_sdr_list=False):
- min_len = min(ref.shape[-1], est.shape[-1])
- pointer = 0
- sdrs = []
- while pointer + hop_samples < min_len:
- sdr = calculate_sdr(
- ref=ref[:, pointer : pointer + hop_samples],
- est=est[:, pointer : pointer + hop_samples],
- )
- sdrs.append(sdr)
- pointer += hop_samples
-
- sdr = np.nanmedian(sdrs)
-
- if return_sdr_list:
- return sdr, sdrs
- else:
- return sdr
-
-
-def loudness(data, input_loudness, target_loudness):
- """ Loudness normalize a signal.
-
- Normalize an input signal to a user loudness in dB LKFS.
-
- Params
- -------
- data : torch.Tensor
- Input multichannel audio data.
- input_loudness : float
- Loudness of the input in dB LUFS.
- target_loudness : float
- Target loudness of the output in dB LUFS.
-
- Returns
- -------
- output : torch.Tensor
- Loudness normalized output data.
- """
-
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = target_loudness - input_loudness
- gain = torch.pow(10.0, delta_loudness / 20.0)
-
- output = gain * data
-
- # check for potentially clipped samples
- # if torch.max(torch.abs(output)) >= 1.0:
- # warnings.warn("Possible clipped samples in output.")
-
- return output
-
-
-def load_ss_model(
- configs: Dict,
- checkpoint_path: str,
- query_encoder: nn.Module
-) -> nn.Module:
- r"""Load trained universal source separation model.
-
- Args:
- configs (Dict)
- checkpoint_path (str): path of the checkpoint to load
-        query_encoder (nn.Module): query encoder used by the separation model
-
- Returns:
- pl_model: pl.LightningModule
- """
-
- ss_model_type = configs["model"]["model_type"]
- input_channels = configs["model"]["input_channels"]
- output_channels = configs["model"]["output_channels"]
- condition_size = configs["model"]["condition_size"]
-
- # Initialize separation model
- SsModel = get_model_class(model_type=ss_model_type)
-
- ss_model = SsModel(
- input_channels=input_channels,
- output_channels=output_channels,
- condition_size=condition_size,
- )
-
- # Load PyTorch Lightning model
- pl_model = AudioSep.load_from_checkpoint(
- checkpoint_path=checkpoint_path,
- strict=False,
- ss_model=ss_model,
- waveform_mixer=None,
- query_encoder=query_encoder,
- loss_function=None,
- optimizer_type=None,
- learning_rate=None,
- lr_lambda_func=None,
- map_location=torch.device('cpu'),
- )
-
- return pl_model
-
-
-def parse_yaml(config_yaml: str) -> Dict:
- r"""Parse yaml file.
-
- Args:
- config_yaml (str): config yaml path
-
- Returns:
- yaml_dict (Dict): parsed yaml file
- """
-
- with open(config_yaml, "r") as fr:
- return yaml.load(fr, Loader=yaml.FullLoader)
\ No newline at end of file
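Note: calculate_sdr above boils down to 10 * log10(signal power / error power) with an epsilon guard. The self-contained check below re-implements that formula on a toy signal so the numbers can be verified by hand; it only reuses NumPy and does not import the deleted module.

# Tiny numeric check of the SDR formula used in utils.py above:
# SDR = 10 * log10( mean(ref**2) / mean((est - ref)**2) ).
import numpy as np


def sdr(ref: np.ndarray, est: np.ndarray, eps: float = 1e-10) -> float:
    signal_power = np.clip(np.mean(ref ** 2), eps, None)
    noise_power = np.clip(np.mean((est - ref) ** 2), eps, None)
    return 10.0 * np.log10(signal_power / noise_power)


ref = np.array([1.0, -1.0, 1.0, -1.0])          # mean signal power = 1.0
est = ref + np.array([0.1, 0.1, 0.1, 0.1])      # mean error power = 0.01

print(round(sdr(ref, est), 2))   # 20.0  -> 10 * log10(1.0 / 0.01)
print(round(sdr(ref, ref), 2))   # 100.0 -> error power clipped to eps (perfect estimate)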
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py
deleted file mode 100644
index e3360a74864e0c00ed92ffbc8531c8d36e8be379..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-
-from detectron2 import model_zoo
-from detectron2.config import instantiate
-from detectron2.modeling import FPN, GeneralizedRCNN
-
-logger = logging.getLogger(__name__)
-
-
-class TestModelZoo(unittest.TestCase):
- def test_get_returns_model(self):
- model = model_zoo.get("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml", trained=False)
- self.assertIsInstance(model, GeneralizedRCNN)
- self.assertIsInstance(model.backbone, FPN)
-
- def test_get_invalid_model(self):
- self.assertRaises(RuntimeError, model_zoo.get, "Invalid/config.yaml")
-
- def test_get_url(self):
- url = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml")
- self.assertEqual(
- url,
- "https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn/138602908/model_final_01ca85.pkl", # noqa
- )
- url2 = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.py")
- self.assertEqual(url, url2)
-
- def _build_lazy_model(self, name):
- cfg = model_zoo.get_config("common/models/" + name)
- instantiate(cfg.model)
-
- def test_mask_rcnn_fpn(self):
- self._build_lazy_model("mask_rcnn_fpn.py")
-
- def test_mask_rcnn_c4(self):
- self._build_lazy_model("mask_rcnn_c4.py")
-
- def test_panoptic_fpn(self):
- self._build_lazy_model("panoptic_fpn.py")
-
- def test_schedule(self):
- cfg = model_zoo.get_config("common/coco_schedule.py")
- for _, v in cfg.items():
- instantiate(v)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Bart92/RVC_HF/infer_batch_rvc.py b/spaces/Bart92/RVC_HF/infer_batch_rvc.py
deleted file mode 100644
index 15c862a3d6bf815fa68003cc7054b694cae50c2a..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer_batch_rvc.py
+++ /dev/null
@@ -1,215 +0,0 @@
-"""
-v1
-runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "E:\codes\py39\RVC-beta\output" "E:\codes\py39\test-20230416b\weights\mi-test.pth" 0.66 cuda:0 True 3 0 1 0.33
-v2
-runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\test-20230416b\logs\mi-test-v2\aadded_IVF677_Flat_nprobe_1_v2.index" harvest "E:\codes\py39\RVC-beta\output_v2" "E:\codes\py39\test-20230416b\weights\mi-test-v2.pth" 0.66 cuda:0 True 3 0 1 0.33
-"""
-import os, sys, pdb, torch
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import sys
-import torch
-import tqdm as tq
-from multiprocessing import cpu_count
-
-
-class Config:
- def __init__(self, device, is_half):
- self.device = device
- self.is_half = is_half
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("16系/10系显卡和P40强制单精度")
- self.is_half = False
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as f:
- strr = f.read().replace("true", "false")
- with open(f"configs/{config_file}", "w") as f:
- f.write(strr)
- with open("infer/modules/train/preprocess.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("infer/modules/train/preprocess.py", "w") as f:
- f.write(strr)
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("infer/modules/train/preprocess.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("infer/modules/train/preprocess.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
- print("没有发现支持的N卡, 使用MPS进行推理")
- self.device = "mps"
- else:
- print("没有发现支持的N卡, 使用CPU进行推理")
- self.device = "cpu"
- self.is_half = True
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
-            # settings for 6 GB of GPU memory
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
-            # settings for 5 GB of GPU memory
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
-        if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
-
-
-f0up_key = sys.argv[1]
-input_path = sys.argv[2]
-index_path = sys.argv[3]
-f0method = sys.argv[4] # harvest or pm
-opt_path = sys.argv[5]
-model_path = sys.argv[6]
-index_rate = float(sys.argv[7])
-device = sys.argv[8]
-is_half = sys.argv[9].lower() != "false"
-filter_radius = int(sys.argv[10])
-resample_sr = int(sys.argv[11])
-rms_mix_rate = float(sys.argv[12])
-protect = float(sys.argv[13])
-print(sys.argv)
-config = Config(device, is_half)
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from infer.modules.vc.modules import VC
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from infer.lib.audio import load_audio
-from fairseq import checkpoint_utils
-from scipy.io import wavfile
-
-hubert_model = None
-
-
-def load_hubert():
- global hubert_model
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-
-def vc_single(sid, input_audio, f0_up_key, f0_file, f0_method, file_index, index_rate):
- global tgt_sr, net_g, vc, hubert_model, version
- if input_audio is None:
- return "You need to upload an audio", None
- f0_up_key = int(f0_up_key)
- audio = load_audio(input_audio, 16000)
- times = [0, 0, 0]
-    if hubert_model is None:
- load_hubert()
- if_f0 = cpt.get("f0", 1)
- # audio_opt=vc.pipeline(hubert_model,net_g,sid,audio,times,f0_up_key,f0_method,file_index,file_big_npy,index_rate,if_f0,f0_file=f0_file)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- sid,
- audio,
- input_audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=f0_file,
- )
- print(times)
- return audio_opt
-
-
-def get_vc(model_path):
- global n_spk, tgt_sr, net_g, vc, cpt, device, is_half, version
- print("loading pth %s" % model_path)
- cpt = torch.load(model_path, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1: #
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
-    print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the weights do not load cleanly, oddly enough
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- # return {"visible": True,"maximum": n_spk, "__type__": "update"}
-
-
-get_vc(model_path)
-audios = os.listdir(input_path)
-for file in tq.tqdm(audios):
- if file.endswith(".wav"):
- file_path = input_path + "/" + file
- wav_opt = vc_single(
- 0, file_path, f0up_key, None, f0method, index_path, index_rate
- )
- out_path = opt_path + "/" + file
- wavfile.write(out_path, tgt_sr, wav_opt)
diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py
deleted file mode 100644
index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from tqdm import tqdm
-
-from . import spec_utils
-
-
-class VocalRemoverValidationSet(torch.utils.data.Dataset):
- def __init__(self, patch_list):
- self.patch_list = patch_list
-
- def __len__(self):
- return len(self.patch_list)
-
- def __getitem__(self, idx):
- path = self.patch_list[idx]
- data = np.load(path)
-
- X, y = data["X"], data["y"]
-
- X_mag = np.abs(X)
- y_mag = np.abs(y)
-
- return X_mag, y_mag
-
-
-def make_pair(mix_dir, inst_dir):
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
-
- X_list = sorted(
- [
- os.path.join(mix_dir, fname)
- for fname in os.listdir(mix_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
- y_list = sorted(
- [
- os.path.join(inst_dir, fname)
- for fname in os.listdir(inst_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
-
- filelist = list(zip(X_list, y_list))
-
- return filelist
-
-
-def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
- if split_mode == "random":
- filelist = make_pair(
- os.path.join(dataset_dir, "mixtures"),
- os.path.join(dataset_dir, "instruments"),
- )
-
- random.shuffle(filelist)
-
- if len(val_filelist) == 0:
- val_size = int(len(filelist) * val_rate)
- train_filelist = filelist[:-val_size]
- val_filelist = filelist[-val_size:]
- else:
- train_filelist = [
- pair for pair in filelist if list(pair) not in val_filelist
- ]
- elif split_mode == "subdirs":
- if len(val_filelist) != 0:
- raise ValueError(
- "The `val_filelist` option is not available in `subdirs` mode"
- )
-
- train_filelist = make_pair(
- os.path.join(dataset_dir, "training/mixtures"),
- os.path.join(dataset_dir, "training/instruments"),
- )
-
- val_filelist = make_pair(
- os.path.join(dataset_dir, "validation/mixtures"),
- os.path.join(dataset_dir, "validation/instruments"),
- )
-
- return train_filelist, val_filelist
-
-
-def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
- perm = np.random.permutation(len(X))
- for i, idx in enumerate(tqdm(perm)):
- if np.random.uniform() < reduction_rate:
- y[idx] = spec_utils.reduce_vocal_aggressively(
- X[idx], y[idx], reduction_mask
- )
-
- if np.random.uniform() < 0.5:
- # swap channel
- X[idx] = X[idx, ::-1]
- y[idx] = y[idx, ::-1]
- if np.random.uniform() < 0.02:
- # mono
- X[idx] = X[idx].mean(axis=0, keepdims=True)
- y[idx] = y[idx].mean(axis=0, keepdims=True)
- if np.random.uniform() < 0.02:
- # inst
- X[idx] = y[idx]
-
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
- lam = np.random.beta(mixup_alpha, mixup_alpha)
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
-
- return X, y
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
- len_dataset = patches * len(filelist)
-
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
- ends = starts + cropsize
- for j in range(patches):
- idx = i * patches + j
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
-
- return X_dataset, y_dataset
-
-
-def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
- patch_list = []
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
- cropsize, sr, hop_length, n_fft, offset
- )
- os.makedirs(patch_dir, exist_ok=True)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- basename = os.path.splitext(os.path.basename(X_path))[0]
-
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
- for j in range(len_dataset):
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
- start = j * roi_size
- if not os.path.exists(outpath):
- np.savez(
- outpath,
- X=X_pad[:, :, start : start + cropsize],
- y=y_pad[:, :, start : start + cropsize],
- )
- patch_list.append(outpath)
-
- return VocalRemoverValidationSet(patch_list)
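Note: make_padding above works out how much to pad a spectrogram so it splits exactly into cropsize-wide windows that advance by roi_size = cropsize - 2 * offset frames. The worked example below repeats that arithmetic with concrete numbers; it copies the three-line formula rather than importing the deleted module.

# Worked example of the make_padding arithmetic from dataset.py above:
# pad the spectrogram so it splits exactly into cropsize windows that
# advance by roi_size = cropsize - 2 * offset frames.
import math


def make_padding(width: int, cropsize: int, offset: int):
    left = offset
    roi_size = cropsize - left * 2
    if roi_size == 0:
        roi_size = cropsize
    right = roi_size - (width % roi_size) + left
    return left, right, roi_size


width, cropsize, offset = 1000, 256, 32
left, right, roi_size = make_padding(width, cropsize, offset)
n_crops = math.ceil(width / roi_size)

print(left, right, roi_size)                  # 32 184 192
print(n_crops)                                # 6 crops
print(left + width + right)                   # 1216 padded frames
print((n_crops - 1) * roi_size + cropsize)    # 1216 -> the last crop fits exactly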
diff --git a/spaces/Belshia/shia/README.md b/spaces/Belshia/shia/README.md
deleted file mode 100644
index d648421b8ee540f3bcef13291fa6200bf34345cb..0000000000000000000000000000000000000000
--- a/spaces/Belshia/shia/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Shia
-emoji: 🌍
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py
deleted file mode 100644
index b723056a756af22aaf1a4709c5122bea9fb279ee..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# coding: utf-8
-# file generated by setuptools_scm
-# don't change, don't track in version control
-version = '2.8.2'
-version_tuple = (2, 8, 2)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py
deleted file mode 100644
index ca4ec341184adb3d30f3cd825b49a81b87d29b08..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py
+++ /dev/null
@@ -1,537 +0,0 @@
-from __future__ import absolute_import
-
-import collections
-import functools
-import logging
-
-from ._collections import RecentlyUsedContainer
-from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, port_by_scheme
-from .exceptions import (
- LocationValueError,
- MaxRetryError,
- ProxySchemeUnknown,
- ProxySchemeUnsupported,
- URLSchemeUnknown,
-)
-from .packages import six
-from .packages.six.moves.urllib.parse import urljoin
-from .request import RequestMethods
-from .util.proxy import connection_requires_http_tunnel
-from .util.retry import Retry
-from .util.url import parse_url
-
-__all__ = ["PoolManager", "ProxyManager", "proxy_from_url"]
-
-
-log = logging.getLogger(__name__)
-
-SSL_KEYWORDS = (
- "key_file",
- "cert_file",
- "cert_reqs",
- "ca_certs",
- "ssl_version",
- "ca_cert_dir",
- "ssl_context",
- "key_password",
- "server_hostname",
-)
-
-# All known keyword arguments that could be provided to the pool manager, its
-# pools, or the underlying connections. This is used to construct a pool key.
-_key_fields = (
- "key_scheme", # str
- "key_host", # str
- "key_port", # int
- "key_timeout", # int or float or Timeout
- "key_retries", # int or Retry
- "key_strict", # bool
- "key_block", # bool
- "key_source_address", # str
- "key_key_file", # str
- "key_key_password", # str
- "key_cert_file", # str
- "key_cert_reqs", # str
- "key_ca_certs", # str
- "key_ssl_version", # str
- "key_ca_cert_dir", # str
- "key_ssl_context", # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext
- "key_maxsize", # int
- "key_headers", # dict
- "key__proxy", # parsed proxy url
- "key__proxy_headers", # dict
- "key__proxy_config", # class
- "key_socket_options", # list of (level (int), optname (int), value (int or str)) tuples
- "key__socks_options", # dict
- "key_assert_hostname", # bool or string
- "key_assert_fingerprint", # str
- "key_server_hostname", # str
-)
-
-#: The namedtuple class used to construct keys for the connection pool.
-#: All custom key schemes should include the fields in this key at a minimum.
-PoolKey = collections.namedtuple("PoolKey", _key_fields)
-
-_proxy_config_fields = ("ssl_context", "use_forwarding_for_https")
-ProxyConfig = collections.namedtuple("ProxyConfig", _proxy_config_fields)
-
-
-def _default_key_normalizer(key_class, request_context):
- """
- Create a pool key out of a request context dictionary.
-
- According to RFC 3986, both the scheme and host are case-insensitive.
- Therefore, this function normalizes both before constructing the pool
- key for an HTTPS request. If you wish to change this behaviour, provide
- alternate callables to ``key_fn_by_scheme``.
-
- :param key_class:
- The class to use when constructing the key. This should be a namedtuple
- with the ``scheme`` and ``host`` keys at a minimum.
- :type key_class: namedtuple
- :param request_context:
-        A dictionary-like object that contains the context for a request.
- :type request_context: dict
-
- :return: A namedtuple that can be used as a connection pool key.
- :rtype: PoolKey
- """
- # Since we mutate the dictionary, make a copy first
- context = request_context.copy()
- context["scheme"] = context["scheme"].lower()
- context["host"] = context["host"].lower()
-
- # These are both dictionaries and need to be transformed into frozensets
- for key in ("headers", "_proxy_headers", "_socks_options"):
- if key in context and context[key] is not None:
- context[key] = frozenset(context[key].items())
-
- # The socket_options key may be a list and needs to be transformed into a
- # tuple.
- socket_opts = context.get("socket_options")
- if socket_opts is not None:
- context["socket_options"] = tuple(socket_opts)
-
- # Map the kwargs to the names in the namedtuple - this is necessary since
- # namedtuples can't have fields starting with '_'.
- for key in list(context.keys()):
- context["key_" + key] = context.pop(key)
-
- # Default to ``None`` for keys missing from the context
- for field in key_class._fields:
- if field not in context:
- context[field] = None
-
- return key_class(**context)
-
-
-#: A dictionary that maps a scheme to a callable that creates a pool key.
-#: This can be used to alter the way pool keys are constructed, if desired.
-#: Each PoolManager makes a copy of this dictionary so they can be configured
-#: globally here, or individually on the instance.
-key_fn_by_scheme = {
- "http": functools.partial(_default_key_normalizer, PoolKey),
- "https": functools.partial(_default_key_normalizer, PoolKey),
-}
-
-pool_classes_by_scheme = {"http": HTTPConnectionPool, "https": HTTPSConnectionPool}
-
-
-class PoolManager(RequestMethods):
- """
- Allows for arbitrary requests while transparently keeping track of
- necessary connection pools for you.
-
- :param num_pools:
- Number of connection pools to cache before discarding the least
- recently used pool.
-
- :param headers:
- Headers to include with all requests, unless other headers are given
- explicitly.
-
- :param \\**connection_pool_kw:
- Additional parameters are used to create fresh
- :class:`urllib3.connectionpool.ConnectionPool` instances.
-
- Example::
-
- >>> manager = PoolManager(num_pools=2)
- >>> r = manager.request('GET', 'http://google.com/')
- >>> r = manager.request('GET', 'http://google.com/mail')
- >>> r = manager.request('GET', 'http://yahoo.com/')
- >>> len(manager.pools)
- 2
-
- """
-
- proxy = None
- proxy_config = None
-
- def __init__(self, num_pools=10, headers=None, **connection_pool_kw):
- RequestMethods.__init__(self, headers)
- self.connection_pool_kw = connection_pool_kw
- self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close())
-
- # Locally set the pool classes and keys so other PoolManagers can
- # override them.
- self.pool_classes_by_scheme = pool_classes_by_scheme
- self.key_fn_by_scheme = key_fn_by_scheme.copy()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.clear()
- # Return False to re-raise any potential exceptions
- return False
-
- def _new_pool(self, scheme, host, port, request_context=None):
- """
- Create a new :class:`urllib3.connectionpool.ConnectionPool` based on host, port, scheme, and
- any additional pool keyword arguments.
-
- If ``request_context`` is provided, it is passed as keyword arguments
- to the pool class used. This method is used to actually create the
- connection pools handed out by :meth:`connection_from_url` and
- companion methods. It is intended to be overridden for customization.
- """
- pool_cls = self.pool_classes_by_scheme[scheme]
- if request_context is None:
- request_context = self.connection_pool_kw.copy()
-
- # Although the context has everything necessary to create the pool,
- # this function has historically only used the scheme, host, and port
- # in the positional args. When an API change is acceptable these can
- # be removed.
- for key in ("scheme", "host", "port"):
- request_context.pop(key, None)
-
- if scheme == "http":
- for kw in SSL_KEYWORDS:
- request_context.pop(kw, None)
-
- return pool_cls(host, port, **request_context)
-
- def clear(self):
- """
- Empty our store of pools and direct them all to close.
-
- This will not affect in-flight connections, but they will not be
- re-used after completion.
- """
- self.pools.clear()
-
- def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the host, port, and scheme.
-
- If ``port`` isn't given, it will be derived from the ``scheme`` using
- ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is
- provided, it is merged with the instance's ``connection_pool_kw``
- variable and used to create the new connection pool, if one is
- needed.
- """
-
- if not host:
- raise LocationValueError("No host specified.")
-
- request_context = self._merge_pool_kwargs(pool_kwargs)
- request_context["scheme"] = scheme or "http"
- if not port:
- port = port_by_scheme.get(request_context["scheme"].lower(), 80)
- request_context["port"] = port
- request_context["host"] = host
-
- return self.connection_from_context(request_context)
-
- def connection_from_context(self, request_context):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the request context.
-
- ``request_context`` must at least contain the ``scheme`` key and its
- value must be a key in ``key_fn_by_scheme`` instance variable.
- """
- scheme = request_context["scheme"].lower()
- pool_key_constructor = self.key_fn_by_scheme.get(scheme)
- if not pool_key_constructor:
- raise URLSchemeUnknown(scheme)
- pool_key = pool_key_constructor(request_context)
-
- return self.connection_from_pool_key(pool_key, request_context=request_context)
-
- def connection_from_pool_key(self, pool_key, request_context=None):
- """
- Get a :class:`urllib3.connectionpool.ConnectionPool` based on the provided pool key.
-
- ``pool_key`` should be a namedtuple that only contains immutable
- objects. At a minimum it must have the ``scheme``, ``host``, and
- ``port`` fields.
- """
- with self.pools.lock:
- # If the scheme, host, or port doesn't match existing open
- # connections, open a new ConnectionPool.
- pool = self.pools.get(pool_key)
- if pool:
- return pool
-
- # Make a fresh ConnectionPool of the desired type
- scheme = request_context["scheme"]
- host = request_context["host"]
- port = request_context["port"]
- pool = self._new_pool(scheme, host, port, request_context=request_context)
- self.pools[pool_key] = pool
-
- return pool
-
- def connection_from_url(self, url, pool_kwargs=None):
- """
- Similar to :func:`urllib3.connectionpool.connection_from_url`.
-
- If ``pool_kwargs`` is not provided and a new pool needs to be
- constructed, ``self.connection_pool_kw`` is used to initialize
- the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs``
- is provided, it is used instead. Note that if a new pool does not
- need to be created for the request, the provided ``pool_kwargs`` are
- not used.
- """
- u = parse_url(url)
- return self.connection_from_host(
- u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs
- )
-
- def _merge_pool_kwargs(self, override):
- """
- Merge a dictionary of override values for self.connection_pool_kw.
-
- This does not modify self.connection_pool_kw and returns a new dict.
- Any keys in the override dictionary with a value of ``None`` are
- removed from the merged dictionary.
- """
- base_pool_kwargs = self.connection_pool_kw.copy()
- if override:
- for key, value in override.items():
- if value is None:
- try:
- del base_pool_kwargs[key]
- except KeyError:
- pass
- else:
- base_pool_kwargs[key] = value
- return base_pool_kwargs
-
- def _proxy_requires_url_absolute_form(self, parsed_url):
- """
- Indicates if the proxy requires the complete destination URL in the
- request. Normally this is only needed when not using an HTTP CONNECT
- tunnel.
- """
- if self.proxy is None:
- return False
-
- return not connection_requires_http_tunnel(
- self.proxy, self.proxy_config, parsed_url.scheme
- )
-
- def _validate_proxy_scheme_url_selection(self, url_scheme):
- """
- Validates that we're not attempting to do TLS in TLS connections on
- Python 2 or with unsupported SSL implementations.
- """
- if self.proxy is None or url_scheme != "https":
- return
-
- if self.proxy.scheme != "https":
- return
-
- if six.PY2 and not self.proxy_config.use_forwarding_for_https:
- raise ProxySchemeUnsupported(
- "Contacting HTTPS destinations through HTTPS proxies "
- "'via CONNECT tunnels' is not supported in Python 2"
- )
-
- def urlopen(self, method, url, redirect=True, **kw):
- """
- Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
- with custom cross-host redirect logic and only sends the request-uri
- portion of the ``url``.
-
- The given ``url`` parameter must be absolute, such that an appropriate
- :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
- """
- u = parse_url(url)
- self._validate_proxy_scheme_url_selection(u.scheme)
-
- conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
-
- kw["assert_same_host"] = False
- kw["redirect"] = False
-
- if "headers" not in kw:
- kw["headers"] = self.headers.copy()
-
- if self._proxy_requires_url_absolute_form(u):
- response = conn.urlopen(method, url, **kw)
- else:
- response = conn.urlopen(method, u.request_uri, **kw)
-
- redirect_location = redirect and response.get_redirect_location()
- if not redirect_location:
- return response
-
- # Support relative URLs for redirecting.
- redirect_location = urljoin(url, redirect_location)
-
- # RFC 7231, Section 6.4.4
- if response.status == 303:
- method = "GET"
-
- retries = kw.get("retries")
- if not isinstance(retries, Retry):
- retries = Retry.from_int(retries, redirect=redirect)
-
- # Strip headers marked as unsafe to forward to the redirected location.
- # Check remove_headers_on_redirect to avoid a potential network call within
- # conn.is_same_host() which may use socket.gethostbyname() in the future.
- if retries.remove_headers_on_redirect and not conn.is_same_host(
- redirect_location
- ):
- headers = list(six.iterkeys(kw["headers"]))
- for header in headers:
- if header.lower() in retries.remove_headers_on_redirect:
- kw["headers"].pop(header, None)
-
- try:
- retries = retries.increment(method, url, response=response, _pool=conn)
- except MaxRetryError:
- if retries.raise_on_redirect:
- response.drain_conn()
- raise
- return response
-
- kw["retries"] = retries
- kw["redirect"] = redirect
-
- log.info("Redirecting %s -> %s", url, redirect_location)
-
- response.drain_conn()
- return self.urlopen(method, redirect_location, **kw)
-
-
-class ProxyManager(PoolManager):
- """
- Behaves just like :class:`PoolManager`, but sends all requests through
- the defined proxy, using the CONNECT method for HTTPS URLs.
-
- :param proxy_url:
- The URL of the proxy to be used.
-
- :param proxy_headers:
- A dictionary containing headers that will be sent to the proxy. For
- plain HTTP they are sent with each request, while in the
- HTTPS/CONNECT case they are sent only once. Could be used for proxy
- authentication.
-
- :param proxy_ssl_context:
- The proxy SSL context is used to establish the TLS connection to the
- proxy when using HTTPS proxies.
-
- :param use_forwarding_for_https:
- (Defaults to False) If set to True will forward requests to the HTTPS
- proxy to be made on behalf of the client instead of creating a TLS
- tunnel via the CONNECT method. **Enabling this flag means that request
- and response headers and content will be visible from the HTTPS proxy**
- whereas tunneling keeps request and response headers and content
- private. IP address, target hostname, SNI, and port are always visible
- to an HTTPS proxy even when this flag is disabled.
-
- Example:
- >>> proxy = urllib3.ProxyManager('http://localhost:3128/')
- >>> r1 = proxy.request('GET', 'http://google.com/')
- >>> r2 = proxy.request('GET', 'http://httpbin.org/')
- >>> len(proxy.pools)
- 1
- >>> r3 = proxy.request('GET', 'https://httpbin.org/')
- >>> r4 = proxy.request('GET', 'https://twitter.com/')
- >>> len(proxy.pools)
- 3
-
- """
-
- def __init__(
- self,
- proxy_url,
- num_pools=10,
- headers=None,
- proxy_headers=None,
- proxy_ssl_context=None,
- use_forwarding_for_https=False,
- **connection_pool_kw
- ):
-
- if isinstance(proxy_url, HTTPConnectionPool):
- proxy_url = "%s://%s:%i" % (
- proxy_url.scheme,
- proxy_url.host,
- proxy_url.port,
- )
- proxy = parse_url(proxy_url)
-
- if proxy.scheme not in ("http", "https"):
- raise ProxySchemeUnknown(proxy.scheme)
-
- if not proxy.port:
- port = port_by_scheme.get(proxy.scheme, 80)
- proxy = proxy._replace(port=port)
-
- self.proxy = proxy
- self.proxy_headers = proxy_headers or {}
- self.proxy_ssl_context = proxy_ssl_context
- self.proxy_config = ProxyConfig(proxy_ssl_context, use_forwarding_for_https)
-
- connection_pool_kw["_proxy"] = self.proxy
- connection_pool_kw["_proxy_headers"] = self.proxy_headers
- connection_pool_kw["_proxy_config"] = self.proxy_config
-
- super(ProxyManager, self).__init__(num_pools, headers, **connection_pool_kw)
-
- def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None):
- if scheme == "https":
- return super(ProxyManager, self).connection_from_host(
- host, port, scheme, pool_kwargs=pool_kwargs
- )
-
- return super(ProxyManager, self).connection_from_host(
- self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs
- )
-
- def _set_proxy_headers(self, url, headers=None):
- """
- Sets headers needed by proxies: specifically, the Accept and Host
- headers. Only sets headers not provided by the user.
- """
- headers_ = {"Accept": "*/*"}
-
- netloc = parse_url(url).netloc
- if netloc:
- headers_["Host"] = netloc
-
- if headers:
- headers_.update(headers)
- return headers_
-
- def urlopen(self, method, url, redirect=True, **kw):
- "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute."
- u = parse_url(url)
- if not connection_requires_http_tunnel(self.proxy, self.proxy_config, u.scheme):
- # For connections using HTTP CONNECT, httplib sets the necessary
- # headers on the CONNECT to the proxy. If we're not using CONNECT,
- # we'll definitely need to set 'Host' at the very least.
- headers = kw.get("headers", self.headers)
- kw["headers"] = self._set_proxy_headers(url, headers)
-
- return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw)
-
-
-def proxy_from_url(url, **kw):
- return ProxyManager(proxy_url=url, **kw)
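The docstrings above spell out the pool-keying contract: the scheme and host are lower-cased, the remaining connection arguments are folded into a PoolKey, and requests that normalize to the same key reuse the same connection pool, while ProxyManager collapses every plain-HTTP destination onto the proxy's own pool. A minimal sketch of that behaviour against a released urllib3 1.26.x install (the hosts are placeholder documentation domains, not anything taken from the file above):

import urllib3

# Cache at most two pools; the least recently used pool is disposed beyond that.
manager = urllib3.PoolManager(num_pools=2, headers={"User-Agent": "poolkey-demo"})

# Same scheme/host/port -> same normalized pool key -> one shared HTTPConnectionPool.
manager.request("GET", "http://example.com/")
manager.request("GET", "http://EXAMPLE.com/other")    # host is lower-cased in the key
print(len(manager.pools))                             # 1

# A different host normalizes to a new key, so a second pool is created and cached.
manager.request("GET", "http://example.org/")
print(len(manager.pools))                             # 2

manager.clear()                                       # dispose all cached pools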
diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py b/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py
deleted file mode 100644
index 5d4feb40e1bcba470690e888473d9b7623b4282d..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from openai.api_resources.abstract import (
- CreateableAPIResource,
- DeletableAPIResource,
- ListableAPIResource,
-)
-
-
-class CompletionConfig(
- CreateableAPIResource, ListableAPIResource, DeletableAPIResource
-):
- OBJECT_NAME = "experimental.completion_configs"
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h
deleted file mode 100644
index 6ab8578407e1cd90aeaba982780b966b4aee013e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator1,
-         typename ForwardIterator2,
-         typename BinaryPredicate>
- thrust::pair<ForwardIterator1,ForwardIterator2>
- unique_by_key(execution_policy<DerivedPolicy> &exec,
- ForwardIterator1 keys_first,
- ForwardIterator1 keys_last,
- ForwardIterator2 values_first,
- BinaryPredicate binary_pred);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename BinaryPredicate>
- thrust::pair<OutputIterator1,OutputIterator2>
- unique_by_key_copy(execution_policy<DerivedPolicy> &exec,
- InputIterator1 keys_first,
- InputIterator1 keys_last,
- InputIterator2 values_first,
- OutputIterator1 keys_output,
- OutputIterator2 values_output,
- BinaryPredicate binary_pred);
-
-
-} // end namespace detail
-} // end namespace tbb
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/tbb/detail/unique_by_key.inl>
-
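The two declarations above promise the usual unique_by_key semantics: within each run of consecutive keys that compare equal under binary_pred, only the first key/value pair is kept. Purely as an illustration of those semantics (not of the TBB backend itself), a short Python sketch:

def unique_by_key(keys, values, binary_pred=lambda a, b: a == b):
    # Keep the first (key, value) pair of every run of consecutive equal keys,
    # comparing each key against the last key that was kept (as std::unique does).
    out_keys, out_values = [], []
    for k, v in zip(keys, values):
        if out_keys and binary_pred(out_keys[-1], k):
            continue                      # same run: drop this pair
        out_keys.append(k)
        out_values.append(v)
    return out_keys, out_values

print(unique_by_key([1, 1, 2, 2, 2, 3], list("abcdef")))   # ([1, 2, 3], ['a', 'c', 'f'])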
diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py
deleted file mode 100644
index 21717b73146f2be5fa823e5bd8f4dd0b144d188c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import os
-
-from transformers import CLIPTokenizer
-from transformers import AutoTokenizer
-
-from .registry import lang_encoders
-from .registry import is_lang_encoder
-
-
-def build_lang_encoder(config_encoder, tokenizer, verbose, **kwargs):
- model_name = config_encoder['NAME']
-
- if not is_lang_encoder(model_name):
- raise ValueError(f'Unknown model: {model_name}')
-
- return lang_encoders(model_name)(config_encoder, tokenizer, verbose, **kwargs)
-
-
-def build_tokenizer(config_encoder):
- tokenizer = None
- os.environ['TOKENIZERS_PARALLELISM'] = 'true'
- if config_encoder['TOKENIZER'] == 'clip':
- pretrained_tokenizer = config_encoder.get(
- 'PRETRAINED_TOKENIZER', 'openai/clip-vit-base-patch32'
- )
- tokenizer = CLIPTokenizer.from_pretrained(pretrained_tokenizer)
- tokenizer.add_special_tokens({'cls_token': tokenizer.eos_token})
- else:
- tokenizer = AutoTokenizer.from_pretrained(config_encoder['TOKENIZER'])
-
- return tokenizer
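build_tokenizer above only reads the TOKENIZER key (and, for CLIP, PRETRAINED_TOKENIZER) from the encoder config, so a minimal call looks like the sketch below. The import path simply mirrors the file location in the diff header and assumes the Space's repo root is on PYTHONPATH; the config dict is a hypothetical minimal example, not one shipped with the project:

from model.text_encoder.build import build_tokenizer  # assumed import path, per the file location above

# Hypothetical minimal config: only the keys build_tokenizer() actually reads.
config_encoder = {
    "TOKENIZER": "clip",
    "PRETRAINED_TOKENIZER": "openai/clip-vit-base-patch32",  # same default as above
}

tokenizer = build_tokenizer(config_encoder)
print(tokenizer.cls_token)                     # the EOS token, re-registered as cls_token above
print(tokenizer("a photo of a cat")["input_ids"])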
diff --git a/spaces/CognitiveLabs/GPT-auto-webscraping/app.py b/spaces/CognitiveLabs/GPT-auto-webscraping/app.py
deleted file mode 100644
index a4d5d1583ac478bfc206a8c1b1bbcdc8edecd647..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/GPT-auto-webscraping/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from AssistantService import GPTAssistant
-from openai.error import AuthenticationError
-import streamlit as st
-from langsmith.run_helpers import traceable
-import configparser
-import os
-
-config = configparser.ConfigParser()
-config.read('config.ini')
-if 'DEFAULT' in config:
- assistant_api_key = config['DEFAULT'].get('API-KEY', '')
-
-os.environ["LANGCHAIN_TRACING_V2"]="true"
-os.environ["LANGCHAIN_ENDPOINT"]="https://api.smith.langchain.com"
-os.environ["LANGCHAIN_API_KEY"]=st.secrets["LANGCHAIN_API_KEY"]
-os.environ["LANGCHAIN_PROJECT"]=st.secrets["LANGCHAIN_PROJECT"]
-
-@traceable(run_type="tool")
-def start_session(session_started):
- st.session_state['session_started'] = session_started
- return session_started
-
-# change session_started to True
-if 'session_started' not in st.session_state:
- start_session(True)
-
-st.write("This app helps you to extract data from HTML code using web scraping. It uses *GPT-3.5-turbo-16k* to generate the code for you. \n *Contribute to this project on [GitHub](https://github.com/CognitiveLabs/GPT-auto-webscraping)*")
-
-with st.expander(label="Check out the video demo"):
- yt_video = st.video("https://www.youtube.com/watch?v=_zeCun4OlCc")
-
-info_text = """
-**Quick start** \n
-Fill the input with the HTML code.
-- Choose a repeating element on the page, like a product on a list.
-- Inspect the HTML code and copy the element.
-- After generating the "output format" and the code, paste the complete HTML code of the page in the last input to test it
-"""
-st.write(info_text)
-st.image("https://j.gifs.com/gpqvPl.gif", width=600)
-
-
-
-if assistant_api_key == '':
- assistant_api_key = st.secrets["API_KEY"]
- if assistant_api_key:
- gpt_assistant = GPTAssistant(assistant_api_key)
-else:
- gpt_assistant = GPTAssistant(assistant_api_key)
-
-# get the html content
-html_content = st.text_input("Paste the HTML tags of the item you want to extract:", max_chars=10000, help="example: <div>Product 1</div>, watch the video above")
-# check if html_content is an url, and show error if it is
-if html_content:
- if html_content.startswith("http"):
- st.write("Please paste the HTML code snippet, not the URL")
- html_content = None
-
-extract_button = st.button("Generate output format & code")
-
-
-if html_content and extract_button:
- try:
- st.write("1/2: Generating the output format...")
- output = gpt_assistant.chain_response_format(html_content)
- st.session_state['output_format'] = output
- except NameError:
- st.write("Complete the API key field")
- except AuthenticationError:
- st.write("Invalid API key")
-
-if 'output_format' in st.session_state:
- output_format = st.code(st.session_state['output_format'], language="json")
-
- try:
- st.write("2/2: Generating the code...")
- python_code = gpt_assistant.chain_code_generator(st.session_state['output_format'], html_content)
- st.session_state['code_generated'] = python_code
- st.session_state['code_generated_exec'] = python_code + "\nresult = extract_info(html_data)"
-
- except NameError:
- st.write("Complete the API key field")
- except AuthenticationError:
- st.write("Invalid API key")
-
-@traceable(run_type="tool")
-def test_the_code(code, full_content):
- exec(code, globals())
- if result:
- st.write("data extracted successfully")
- # show data in table
- st.table(result)
- else:
- st.write("error extracting data")
-
- return result or "error"
-
-
-if 'code_generated' in st.session_state:
- python_function_label = st.write("Here is your python function:")
- code_generated = st.code(st.session_state['code_generated'],language="python")
- full_content = st.text_input("Paste your complete HTML here:")
- test_code = st.button("Test the code")
- if full_content and test_code:
- html_data = full_content
- result = None
- test_the_code(st.session_state['code_generated_exec'], full_content=full_content)
\ No newline at end of file
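The test step in this app relies on a small exec() trick: the generated function source is concatenated with "result = extract_info(html_data)" and executed against the module's globals, so html_data (assigned just before the call) is visible to the snippet and result shows up as a global afterwards. A stripped-down sketch of that mechanism, with a stand-in for the LLM-generated code:

# Stand-in for the generated scraper; the app expects a function named extract_info().
generated_code = '''
def extract_info(html):
    # toy "parser": report the input length instead of doing real extraction
    return {"length": len(html)}
'''

html_data = "<div>Product 1</div>"                     # what app.py stores right before exec
code_to_run = generated_code + "\nresult = extract_info(html_data)"

namespace = globals()                                  # app.py uses the real module globals
exec(code_to_run, namespace)                           # defines extract_info and sets result

print(namespace["result"])                             # {'length': 20}

Because exec shares the real globals, the result name it assigns is immediately visible to the surrounding code, which is what test_the_code's `if result:` check relies on.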
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py
deleted file mode 100644
index 5db3b39e69a20717c7d840e537027ce0d833306c..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py
+++ /dev/null
@@ -1,269 +0,0 @@
-from __future__ import print_function
-
-
-try:
- import tensorflow as tf
- from tensorflow.python.ops import nn
- relu = nn.relu
- slim = tf.contrib.slim
- sigmoid = nn.sigmoid
- softmax = nn.softmax
-except:
- print("tensorflow is not installed, util.tf can not be used.")
-
-def is_gpu_available(cuda_only=True):
- """
- code from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/platform/test.py
- Returns whether TensorFlow can access a GPU.
- Args:
- cuda_only: limit the search to CUDA gpus.
- Returns:
- True iff a gpu device of the requested kind is available.
- """
- from tensorflow.python.client import device_lib as _device_lib
-
- if cuda_only:
- return any((x.device_type == 'GPU')
- for x in _device_lib.list_local_devices())
- else:
- return any((x.device_type == 'GPU' or x.device_type == 'SYCL')
- for x in _device_lib.list_local_devices())
-
-
-
-def get_available_gpus(num_gpus = None):
- """
- Modified on http://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow
- However, the original code will occupy all available gpu memory.
- The modified code needs a parameter: num_gpus. It does nothing but return the device handler names.
- It works well for single-machine training, but I don't know whether it will work well on a cluster.
- """
- if num_gpus == None:
- from tensorflow.python.client import device_lib as _device_lib
- local_device_protos = _device_lib.list_local_devices()
- return [x.name for x in local_device_protos if x.device_type == 'GPU']
- else:
- return ['/gpu:%d'%(idx) for idx in xrange(num_gpus)]
-
-def get_latest_ckpt(path):
-# tf.train.latest_checkpoint
- import util
- path = util.io.get_absolute_path(path)
- if util.io.is_dir(path):
- ckpt = tf.train.get_checkpoint_state(path)
- if ckpt is not None:
- ckpt_path = ckpt.model_checkpoint_path
- else:
- ckpt_path = None
- else:
- ckpt_path = path;
- return ckpt_path
-
-def get_all_ckpts(path):
- ckpt = tf.train.get_checkpoint_state(path)
- all_ckpts = ckpt.all_model_checkpoint_paths
- ckpts = [str(c) for c in all_ckpts]
- return ckpts
-
-def get_iter(ckpt):
- import util
- iter_ = int(util.str.find_all(ckpt, '.ckpt-\d+')[0].split('-')[-1])
- return iter_
-
-def get_init_fn(checkpoint_path, train_dir, ignore_missing_vars = False,
- checkpoint_exclude_scopes = None, model_name = None, checkpoint_model_scope = None):
- """
- code from github/SSD-tensorflow/tf_utils.py
- Returns a function run by the chief worker to warm-start the training.
- Note that the init_fn is only run when initializing the model during the very
- first global step.
-
- checkpoint_path: the checkpoint to be restored
- train_dir: the directory where checkpoints are stored during training.
- ignore_missing_vars: if False and there are variables in the model but not in the checkpoint, an error will be raised.
- checkpoint_model_scope and model_name: if the root scope of the checkpoint and of the model in the session differ
- (but the sub-scopes are all the same), specify both explicitly.
- checkpoint_exclude_scopes: variables to be excluded when restoring from checkpoint_path.
- Returns:
- An init function run by the supervisor.
- """
- import util
- if util.str.is_none_or_empty(checkpoint_path):
- return None
- # Warn the user if a checkpoint exists in the train_dir. Then ignore.
- if tf.train.latest_checkpoint(train_dir):
- tf.logging.info(
- 'Ignoring --checkpoint_path because a checkpoint already exists in %s'
- % train_dir)
- return None
-
- exclusions = []
- if checkpoint_exclude_scopes:
- exclusions = [scope.strip()
- for scope in checkpoint_exclude_scopes.split(',')]
-
- # TODO(sguada) variables.filter_variables()
- variables_to_restore = []
- for var in slim.get_model_variables():
- excluded = False
- for exclusion in exclusions:
- if var.op.name.startswith(exclusion):
- excluded = True
- break
- if not excluded:
- variables_to_restore.append(var)
- # Change model scope if necessary.
- if checkpoint_model_scope is not None:
- variables_to_restore = {checkpoint_model_scope + '/' + var.op.name : var for var in variables_to_restore}
- tf.logging.info('variables_to_restore: %r'%(variables_to_restore))
- checkpoint_path = get_latest_ckpt(checkpoint_path)
- tf.logging.info('Fine-tuning from %s. Ignoring missing vars: %s' % (checkpoint_path, ignore_missing_vars))
- print ('checkpoint_path', checkpoint_path)
- return slim.assign_from_checkpoint_fn(
- checkpoint_path,
- variables_to_restore,
- ignore_missing_vars=ignore_missing_vars)
-
-
-def get_variables_to_train(flags = None):
- """code from github/SSD-tensorflow/tf_utils.py
- Returns a list of variables to train.
-
- Returns:
- A list of variables to train by the optimizer.
- """
- if flags is None or flags.trainable_scopes is None:
- return tf.trainable_variables()
- else:
- scopes = [scope.strip() for scope in flags.trainable_scopes.split(',')]
-
- variables_to_train = []
- for scope in scopes:
- variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
- variables_to_train.extend(variables)
- return variables_to_train
-
-def Print(tensor, data, msg = '', file = None, mode = 'w'):
- from tensorflow.python.ops import control_flow_ops
- import util
- def np_print(*args):
- if util.str.contains(msg, '%'):
- message = msg%tuple(args)
- else:
- message = msg + ' %'*len(args)%tuple(args)
- if file is not None:
- file_path = util.io.get_absolute_path(file)
- print('writing message to file(%s):'%(file_path), message)
- with open(file_path, mode) as f:
- print(message, file = f)
- else:
- print(message)
- return control_flow_ops.with_dependencies([tf.py_func(np_print, data, [])], tensor)
-
-def get_variable_names_in_checkpoint(path, return_shapes = False, return_reader = False):
- """
- Args:
- path: the path to training directory containing checkpoints,
- or path to checkpoint
- Return:
- a list of variable names in the checkpoint
- """
- import util
- ckpt = get_latest_ckpt(path)
- ckpt_reader = tf.train.NewCheckpointReader(ckpt)
- ckpt_vars = ckpt_reader.get_variable_to_shape_map()
- names = [var for var in ckpt_vars]
- if return_shapes:
- return names, ckpt_vars
- def get(name):
- return ckpt_reader.get_tensor(name)
- if return_reader:
- return names, get
- return names
-
-
-
-def min_area_rect(xs, ys):
- import util
- rects = tf.py_func(util.img.min_area_rect, [xs, ys], xs.dtype)
- rects.set_shape([None, 5])
- return rects
-
-
-def gpu_config(config = None, allow_growth = None, gpu_memory_fraction = None):
- if config is None:
- config = tf.ConfigProto()
-
- if allow_growth is not None:
- config.gpu_options.allow_growth = allow_growth
-
- if gpu_memory_fraction is not None:
- config.gpu_options.per_process_gpu_memory_fraction = gpu_memory_fraction
-
- return config
-
-def wait_for_checkpoint(path):
- from tensorflow.contrib.training.python.training import evaluation
- return evaluation.checkpoints_iterator(path)
-
-def focal_loss(labels, logits, gamma = 2.0, alpha = 0.75, normalize = True):
- labels = tf.where(labels > 0, tf.ones_like(labels), tf.zeros_like(labels))
- labels = tf.cast(labels, tf.float32)
- probs = tf.sigmoid(logits)
- CE = tf.nn.sigmoid_cross_entropy_with_logits(labels = labels, logits = logits)
-
- alpha_t = tf.ones_like(logits) * alpha
- alpha_t = tf.where(labels > 0, alpha_t, 1.0 - alpha_t)
- probs_t = tf.where(labels > 0, probs, 1.0 - probs)
-
- focal_matrix = alpha_t * tf.pow((1.0 - probs_t), gamma)
- fl = focal_matrix * CE
-
- fl = tf.reduce_sum(fl)
- if normalize:
- #n_pos = tf.reduce_sum(labels)
- #fl = fl / tf.cast(n_pos, tf.float32)
- total_weights = tf.stop_gradient(tf.reduce_sum(focal_matrix))
- fl = fl / total_weights
- return fl
-
-
-def focal_loss_layer_initializer(sigma = 0.01, pi = 0.01):
- import numpy as np
- b0 = - np.log((1 - pi) / pi)
- return tf.random_normal_initializer(stddev = sigma), \
- tf.constant_initializer(b0)
-
-
-def sum_gradients(clone_grads, do_summary = False):
- averaged_grads = []
- for grad_and_vars in zip(*clone_grads):
- grads = []
- var = grad_and_vars[0][1]
- try:
- for g, v in grad_and_vars:
- assert v == var
- grads.append(g)
- grad = tf.add_n(grads, name = v.op.name + '_summed_gradients')
- except:
- import pdb
- pdb.set_trace()
-
- averaged_grads.append((grad, v))
-
- if do_summary:
- tf.summary.histogram("variables_and_gradients_" + grad.op.name, grad)
- tf.summary.histogram("variables_and_gradients_" + v.op.name, v)
- tf.summary.scalar("variables_and_gradients_" + grad.op.name+\
- '_mean/var_mean', tf.reduce_mean(grad)/tf.reduce_mean(var))
- tf.summary.scalar("variables_and_gradients_" + v.op.name+'_mean',tf.reduce_mean(var))
- return averaged_grads
-
-def get_update_op():
- """
- Extremely important for BatchNorm
- """
- update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
- if update_ops:
- return tf.group(*update_ops)
- return None
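focal_loss above is the standard alpha-balanced focal loss, FL = sum(alpha_t * (1 - p_t)^gamma * CE), here normalized by the summed focal weights rather than by the number of positives. A small NumPy sketch of the same arithmetic can be handy for sanity-checking the graph version (NumPy stands in for TensorFlow and is not part of the original module):

import numpy as np

def focal_loss_np(labels, logits, gamma=2.0, alpha=0.75, normalize=True):
    labels = (labels > 0).astype(np.float64)
    probs = 1.0 / (1.0 + np.exp(-logits))                    # sigmoid(logits)
    ce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    alpha_t = np.where(labels > 0, alpha, 1.0 - alpha)       # class-balancing weight
    probs_t = np.where(labels > 0, probs, 1.0 - probs)       # p_t
    focal_matrix = alpha_t * (1.0 - probs_t) ** gamma
    fl = np.sum(focal_matrix * ce)
    if normalize:
        fl /= np.sum(focal_matrix)                           # same role as total_weights above
    return fl

print(focal_loss_np(np.array([1.0, 0.0, 1.0]), np.array([2.0, -1.0, 0.5])))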
diff --git a/spaces/DHEIVER/ImageClassifierCataract/README.md b/spaces/DHEIVER/ImageClassifierCataract/README.md
deleted file mode 100644
index 72ae983b1124a5748a98053a6d48daf9e695ac55..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/ImageClassifierCataract/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ImageClassifierCataract
-emoji: 📊
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py
deleted file mode 100644
index ca9dc54b215f7977970658250f23e3be137f1b3e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import http.server
-import sys
-from typing import Mapping, Tuple
-
-from . import __version__
-from .http_exceptions import HttpProcessingError as HttpProcessingError
-from .http_parser import (
- HeadersParser as HeadersParser,
- HttpParser as HttpParser,
- HttpRequestParser as HttpRequestParser,
- HttpResponseParser as HttpResponseParser,
- RawRequestMessage as RawRequestMessage,
- RawResponseMessage as RawResponseMessage,
-)
-from .http_websocket import (
- WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE,
- WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE,
- WS_KEY as WS_KEY,
- WebSocketError as WebSocketError,
- WebSocketReader as WebSocketReader,
- WebSocketWriter as WebSocketWriter,
- WSCloseCode as WSCloseCode,
- WSMessage as WSMessage,
- WSMsgType as WSMsgType,
- ws_ext_gen as ws_ext_gen,
- ws_ext_parse as ws_ext_parse,
-)
-from .http_writer import (
- HttpVersion as HttpVersion,
- HttpVersion10 as HttpVersion10,
- HttpVersion11 as HttpVersion11,
- StreamWriter as StreamWriter,
-)
-
-__all__ = (
- "HttpProcessingError",
- "RESPONSES",
- "SERVER_SOFTWARE",
- # .http_writer
- "StreamWriter",
- "HttpVersion",
- "HttpVersion10",
- "HttpVersion11",
- # .http_parser
- "HeadersParser",
- "HttpParser",
- "HttpRequestParser",
- "HttpResponseParser",
- "RawRequestMessage",
- "RawResponseMessage",
- # .http_websocket
- "WS_CLOSED_MESSAGE",
- "WS_CLOSING_MESSAGE",
- "WS_KEY",
- "WebSocketReader",
- "WebSocketWriter",
- "ws_ext_gen",
- "ws_ext_parse",
- "WSMessage",
- "WebSocketError",
- "WSMsgType",
- "WSCloseCode",
-)
-
-
-SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format(
- sys.version_info, __version__
-)
-
-RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses
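Apart from the re-exports, the only values this module computes are the SERVER_SOFTWARE banner and the RESPONSES table borrowed from http.server. A quick sketch of what they evaluate to (the aiohttp version string is illustrative):

import http.server
import sys

# Same format string as above; "3.8.5" stands in for aiohttp.__version__.
server_software = "Python/{0[0]}.{0[1]} aiohttp/{1}".format(sys.version_info, "3.8.5")
print(server_software)        # e.g. "Python/3.11 aiohttp/3.8.5"

responses = http.server.BaseHTTPRequestHandler.responses
print(responses[404])         # ('Not Found', 'Nothing matches the given URI')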
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py
deleted file mode 100644
index 8901b123d2845bfaecc1a42f66be13fdf1ddd349..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-from einops.layers.torch import Rearrange
-
-
-class PatchFeatureExtractor(nn.Module):
- x_mean = torch.FloatTensor(np.array([0.485, 0.456, 0.406])[None, :, None, None])
- x_std = torch.FloatTensor(np.array([0.229, 0.224, 0.225])[None, :, None, None])
-
- def __init__(self, patch_num=256, input_shape=None):
- super(PatchFeatureExtractor, self).__init__()
-
- if input_shape is None:
- input_shape = [3, 512, 1024]
- self.patch_dim = 1024
- self.patch_num = patch_num
-
- img_channel = input_shape[0]
- img_h = input_shape[1]
- img_w = input_shape[2]
-
- p_h, p_w = img_h, img_w // self.patch_num
- p_dim = p_h * p_w * img_channel
-
- self.patch_embedding = nn.Sequential(
- Rearrange('b c h (p_n p_w) -> b p_n (h p_w c)', p_w=p_w),
- nn.Linear(p_dim, self.patch_dim)
- )
-
- self.x_mean.requires_grad = False
- self.x_std.requires_grad = False
-
- def _prepare_x(self, x):
- x = x.clone()
- if self.x_mean.device != x.device:
- self.x_mean = self.x_mean.to(x.device)
- self.x_std = self.x_std.to(x.device)
- x[:, :3] = (x[:, :3] - self.x_mean) / self.x_std
-
- return x
-
- def forward(self, x):
- # x [b 3 512 1024]
- x = self._prepare_x(x) # [b 3 512 1024]
- x = self.patch_embedding(x) # [b 256(patch_num) 1024(d)]
- x = x.permute(0, 2, 1) # [b 1024(d) 256(patch_num)]
- return x
-
-
-if __name__ == '__main__':
- from PIL import Image
- extractor = PatchFeatureExtractor()
- img = np.array(Image.open("../../src/demo.png")).transpose((2, 0, 1))
- input = torch.Tensor([img]) # 1 3 512 1024
- feature = extractor(input)
- print(feature.shape) # 1, 1024, 256
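The shape comments in forward() follow from straightforward patch arithmetic: a 3x512x1024 panorama cut into 256 vertical strips gives 512x4x3 = 6144 values per strip, which the Linear layer projects down to the 1024-dimensional patch_dim. A quick check of those numbers:

# Patch arithmetic for the default configuration above.
img_channel, img_h, img_w = 3, 512, 1024
patch_num, patch_dim = 256, 1024

p_h, p_w = img_h, img_w // patch_num     # 512, 4  (strip height and width)
p_dim = p_h * p_w * img_channel          # 6144 raw values per strip

print(p_w, p_dim)                        # 4 6144
# Rearrange: [b, 3, 512, 1024] -> [b, 256, 6144]; Linear(6144 -> 1024);
# permute(0, 2, 1) then gives [b, 1024, 256], matching the final comment in forward().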
diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/README.md b/spaces/DeepDrivePL/PaddleSeg-Matting/README.md
deleted file mode 100644
index 80f05c6854496d0c806297a00a77da5f480fec81..0000000000000000000000000000000000000000
--- a/spaces/DeepDrivePL/PaddleSeg-Matting/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: PaddleSeg Matting
-emoji: 📊
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py
deleted file mode 100644
index ceb6f2ce21720722d5d8c9ee4f7e015ad06a9647..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py
+++ /dev/null
@@ -1,558 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision
-from . import resnet, resnext
-try:
- from lib.nn import SynchronizedBatchNorm2d
-except ImportError:
- from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d
-
-
-class SegmentationModuleBase(nn.Module):
- def __init__(self):
- super(SegmentationModuleBase, self).__init__()
-
- def pixel_acc(self, pred, label):
- _, preds = torch.max(pred, dim=1)
- valid = (label >= 0).long()
- acc_sum = torch.sum(valid * (preds == label).long())
- pixel_sum = torch.sum(valid)
- acc = acc_sum.float() / (pixel_sum.float() + 1e-10)
- return acc
-
-
-class SegmentationModule(SegmentationModuleBase):
- def __init__(self, net_enc, net_dec, crit, deep_sup_scale=None):
- super(SegmentationModule, self).__init__()
- self.encoder = net_enc
- self.decoder = net_dec
- self.crit = crit
- self.deep_sup_scale = deep_sup_scale
-
- def forward(self, feed_dict, *, segSize=None):
- if segSize is None: # training
- if self.deep_sup_scale is not None: # use deep supervision technique
- (pred, pred_deepsup) = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True))
- else:
- pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True))
-
- loss = self.crit(pred, feed_dict['seg_label'])
- if self.deep_sup_scale is not None:
- loss_deepsup = self.crit(pred_deepsup, feed_dict['seg_label'])
- loss = loss + loss_deepsup * self.deep_sup_scale
-
- acc = self.pixel_acc(pred, feed_dict['seg_label'])
- return loss, acc
- else: # inference
- pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize)
- return pred
-
-
-def conv3x3(in_planes, out_planes, stride=1, has_bias=False):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=has_bias)
-
-
-def conv3x3_bn_relu(in_planes, out_planes, stride=1):
- return nn.Sequential(
- conv3x3(in_planes, out_planes, stride),
- SynchronizedBatchNorm2d(out_planes),
- nn.ReLU(inplace=True),
- )
-
-
-class ModelBuilder():
- # custom weights initialization
- def weights_init(self, m):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- nn.init.kaiming_normal_(m.weight.data)
- elif classname.find('BatchNorm') != -1:
- m.weight.data.fill_(1.)
- m.bias.data.fill_(1e-4)
- #elif classname.find('Linear') != -1:
- # m.weight.data.normal_(0.0, 0.0001)
-
- def build_encoder(self, arch='resnet50_dilated8', fc_dim=512, weights=''):
- pretrained = True if len(weights) == 0 else False
- if arch == 'resnet34':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnet34_dilated8':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=8)
- elif arch == 'resnet34_dilated16':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=16)
- elif arch == 'resnet50':
- orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnet50_dilated8':
- orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=8)
- elif arch == 'resnet50_dilated16':
- orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=16)
- elif arch == 'resnet101':
- orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnet101_dilated8':
- orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=8)
- elif arch == 'resnet101_dilated16':
- orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=16)
- elif arch == 'resnext101':
- orig_resnext = resnext.__dict__['resnext101'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnext) # we can still use class Resnet
- else:
- raise Exception('Architecture undefined!')
-
- # net_encoder.apply(self.weights_init)
- if len(weights) > 0:
- # print('Loading weights for net_encoder')
- net_encoder.load_state_dict(
- torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
- return net_encoder
-
- def build_decoder(self, arch='ppm_bilinear_deepsup',
- fc_dim=512, num_class=150,
- weights='', inference=False, use_softmax=False):
- if arch == 'c1_bilinear_deepsup':
- net_decoder = C1BilinearDeepSup(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax)
- elif arch == 'c1_bilinear':
- net_decoder = C1Bilinear(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax)
- elif arch == 'ppm_bilinear':
- net_decoder = PPMBilinear(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax)
- elif arch == 'ppm_bilinear_deepsup':
- net_decoder = PPMBilinearDeepsup(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax)
- elif arch == 'upernet_lite':
- net_decoder = UPerNet(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax,
- fpn_dim=256)
- elif arch == 'upernet':
- net_decoder = UPerNet(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax,
- fpn_dim=512)
- elif arch == 'upernet_tmp':
- net_decoder = UPerNetTmp(
- num_class=num_class,
- fc_dim=fc_dim,
- inference=inference,
- use_softmax=use_softmax,
- fpn_dim=512)
- else:
- raise Exception('Architecture undefined!')
-
- net_decoder.apply(self.weights_init)
- if len(weights) > 0:
- # print('Loading weights for net_decoder')
- net_decoder.load_state_dict(
- torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
- return net_decoder
-
-
-class Resnet(nn.Module):
- def __init__(self, orig_resnet):
- super(Resnet, self).__init__()
-
- # take pretrained resnet, except AvgPool and FC
- self.conv1 = orig_resnet.conv1
- self.bn1 = orig_resnet.bn1
- self.relu1 = orig_resnet.relu1
- self.conv2 = orig_resnet.conv2
- self.bn2 = orig_resnet.bn2
- self.relu2 = orig_resnet.relu2
- self.conv3 = orig_resnet.conv3
- self.bn3 = orig_resnet.bn3
- self.relu3 = orig_resnet.relu3
- self.maxpool = orig_resnet.maxpool
- self.layer1 = orig_resnet.layer1
- self.layer2 = orig_resnet.layer2
- self.layer3 = orig_resnet.layer3
- self.layer4 = orig_resnet.layer4
-
- def forward(self, x, return_feature_maps=False):
- conv_out = []
-
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x); conv_out.append(x);
- x = self.layer2(x); conv_out.append(x);
- x = self.layer3(x); conv_out.append(x);
- x = self.layer4(x); conv_out.append(x);
-
- if return_feature_maps:
- return conv_out
- return [x]
-
-
-class ResnetDilated(nn.Module):
- def __init__(self, orig_resnet, dilate_scale=8):
- super(ResnetDilated, self).__init__()
- from functools import partial
-
- if dilate_scale == 8:
- orig_resnet.layer3.apply(
- partial(self._nostride_dilate, dilate=2))
- orig_resnet.layer4.apply(
- partial(self._nostride_dilate, dilate=4))
- elif dilate_scale == 16:
- orig_resnet.layer4.apply(
- partial(self._nostride_dilate, dilate=2))
-
- # take pretrained resnet, except AvgPool and FC
- self.conv1 = orig_resnet.conv1
- self.bn1 = orig_resnet.bn1
- self.relu1 = orig_resnet.relu1
- self.conv2 = orig_resnet.conv2
- self.bn2 = orig_resnet.bn2
- self.relu2 = orig_resnet.relu2
- self.conv3 = orig_resnet.conv3
- self.bn3 = orig_resnet.bn3
- self.relu3 = orig_resnet.relu3
- self.maxpool = orig_resnet.maxpool
- self.layer1 = orig_resnet.layer1
- self.layer2 = orig_resnet.layer2
- self.layer3 = orig_resnet.layer3
- self.layer4 = orig_resnet.layer4
-
- def _nostride_dilate(self, m, dilate):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- # the convolution with stride
- if m.stride == (2, 2):
- m.stride = (1, 1)
- if m.kernel_size == (3, 3):
- m.dilation = (dilate//2, dilate//2)
- m.padding = (dilate//2, dilate//2)
- # other convolutions
- else:
- if m.kernel_size == (3, 3):
- m.dilation = (dilate, dilate)
- m.padding = (dilate, dilate)
-
- def forward(self, x, return_feature_maps=False):
- conv_out = []
-
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x); conv_out.append(x);
- x = self.layer2(x); conv_out.append(x);
- x = self.layer3(x); conv_out.append(x);
- x = self.layer4(x); conv_out.append(x);
-
- if return_feature_maps:
- return conv_out
- return [x]
-
-
-# last conv, bilinear upsample
-class C1BilinearDeepSup(nn.Module):
- def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False):
- super(C1BilinearDeepSup, self).__init__()
- self.use_softmax = use_softmax
- self.inference = inference
-
- self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1)
- self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1)
-
- # last conv
- self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
- self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
-
- def forward(self, conv_out, segSize=None):
- conv5 = conv_out[-1]
-
- x = self.cbr(conv5)
- x = self.conv_last(x)
-
- if self.inference or self.use_softmax: # is True during inference
- x = nn.functional.interpolate(
- x, size=segSize, mode='bilinear', align_corners=False)
- if self.use_softmax:
- x = nn.functional.softmax(x, dim=1)
- return x
-
- # deep sup
- conv4 = conv_out[-2]
- _ = self.cbr_deepsup(conv4)
- _ = self.conv_last_deepsup(_)
-
- x = nn.functional.log_softmax(x, dim=1)
- _ = nn.functional.log_softmax(_, dim=1)
-
- return (x, _)
-
-
-# last conv, bilinear upsample
-class C1Bilinear(nn.Module):
- def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False):
- super(C1Bilinear, self).__init__()
- self.use_softmax = use_softmax
- self.inference = inference
-
- self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1)
-
- # last conv
- self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
-
- def forward(self, conv_out, segSize=None):
- conv5 = conv_out[-1]
- x = self.cbr(conv5)
- x = self.conv_last(x)
-
- if self.inference or self.use_softmax: # is True during inference
- x = nn.functional.interpolate(
- x, size=segSize, mode='bilinear', align_corners=False)
- if self.use_softmax:
- x = nn.functional.softmax(x, dim=1)
- else:
- x = nn.functional.log_softmax(x, dim=1)
-
- return x
-
-
-# pyramid pooling, bilinear upsample
-class PPMBilinear(nn.Module):
- def __init__(self, num_class=150, fc_dim=4096,
- inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)):
- super(PPMBilinear, self).__init__()
- self.use_softmax = use_softmax
- self.inference = inference
-
- self.ppm = []
- for scale in pool_scales:
- self.ppm.append(nn.Sequential(
- nn.AdaptiveAvgPool2d(scale),
- nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True)
- ))
- self.ppm = nn.ModuleList(self.ppm)
-
- self.conv_last = nn.Sequential(
- nn.Conv2d(fc_dim+len(pool_scales)*512, 512,
- kernel_size=3, padding=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True),
- nn.Dropout2d(0.1),
- nn.Conv2d(512, num_class, kernel_size=1)
- )
-
- def forward(self, conv_out, segSize=None):
- conv5 = conv_out[-1]
-
- input_size = conv5.size()
- ppm_out = [conv5]
- for pool_scale in self.ppm:
- ppm_out.append(nn.functional.interpolate(
- pool_scale(conv5),
- (input_size[2], input_size[3]),
- mode='bilinear', align_corners=False))
- ppm_out = torch.cat(ppm_out, 1)
-
- x = self.conv_last(ppm_out)
-
- if self.inference or self.use_softmax: # is True during inference
- x = nn.functional.interpolate(
- x, size=segSize, mode='bilinear', align_corners=False)
- if self.use_softmax:
- x = nn.functional.softmax(x, dim=1)
- else:
- x = nn.functional.log_softmax(x, dim=1)
- return x
-
-
-# pyramid pooling, bilinear upsample
-class PPMBilinearDeepsup(nn.Module):
- def __init__(self, num_class=150, fc_dim=4096,
- inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)):
- super(PPMBilinearDeepsup, self).__init__()
- self.use_softmax = use_softmax
- self.inference = inference
-
- self.ppm = []
- for scale in pool_scales:
- self.ppm.append(nn.Sequential(
- nn.AdaptiveAvgPool2d(scale),
- nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True)
- ))
- self.ppm = nn.ModuleList(self.ppm)
- self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1)
-
- self.conv_last = nn.Sequential(
- nn.Conv2d(fc_dim+len(pool_scales)*512, 512,
- kernel_size=3, padding=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True),
- nn.Dropout2d(0.1),
- nn.Conv2d(512, num_class, kernel_size=1)
- )
- self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
- self.dropout_deepsup = nn.Dropout2d(0.1)
-
- def forward(self, conv_out, segSize=None):
- conv5 = conv_out[-1]
-
- input_size = conv5.size()
- ppm_out = [conv5]
- for pool_scale in self.ppm:
- ppm_out.append(nn.functional.interpolate(
- pool_scale(conv5),
- (input_size[2], input_size[3]),
- mode='bilinear', align_corners=False))
- ppm_out = torch.cat(ppm_out, 1)
-
- x = self.conv_last(ppm_out)
-
- if self.inference or self.use_softmax: # is True during inference
- x = nn.functional.interpolate(
- x, size=segSize, mode='bilinear', align_corners=False)
- if self.use_softmax:
- x = nn.functional.softmax(x, dim=1)
- return x
-
- # deep sup
- conv4 = conv_out[-2]
- _ = self.cbr_deepsup(conv4)
- _ = self.dropout_deepsup(_)
- _ = self.conv_last_deepsup(_)
-
- x = nn.functional.log_softmax(x, dim=1)
- _ = nn.functional.log_softmax(_, dim=1)
-
- return (x, _)
-
-
-# upernet
-class UPerNet(nn.Module):
- def __init__(self, num_class=150, fc_dim=4096,
- inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6),
- fpn_inplanes=(256,512,1024,2048), fpn_dim=256):
- super(UPerNet, self).__init__()
- self.use_softmax = use_softmax
- self.inference = inference
-
- # PPM Module
- self.ppm_pooling = []
- self.ppm_conv = []
-
- for scale in pool_scales:
- self.ppm_pooling.append(nn.AdaptiveAvgPool2d(scale))
- self.ppm_conv.append(nn.Sequential(
- nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True)
- ))
- self.ppm_pooling = nn.ModuleList(self.ppm_pooling)
- self.ppm_conv = nn.ModuleList(self.ppm_conv)
- self.ppm_last_conv = conv3x3_bn_relu(fc_dim + len(pool_scales)*512, fpn_dim, 1)
-
- # FPN Module
- self.fpn_in = []
- for fpn_inplane in fpn_inplanes[:-1]: # skip the top layer
- self.fpn_in.append(nn.Sequential(
- nn.Conv2d(fpn_inplane, fpn_dim, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(fpn_dim),
- nn.ReLU(inplace=True)
- ))
- self.fpn_in = nn.ModuleList(self.fpn_in)
-
- self.fpn_out = []
- for i in range(len(fpn_inplanes) - 1): # skip the top layer
- self.fpn_out.append(nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- ))
- self.fpn_out = nn.ModuleList(self.fpn_out)
-
- self.conv_last = nn.Sequential(
- conv3x3_bn_relu(len(fpn_inplanes) * fpn_dim, fpn_dim, 1),
- nn.Conv2d(fpn_dim, num_class, kernel_size=1)
- )
-
- def forward(self, conv_out, segSize=None):
- conv5 = conv_out[-1]
-
- input_size = conv5.size()
- ppm_out = [conv5]
- for pool_scale, pool_conv in zip(self.ppm_pooling, self.ppm_conv):
- ppm_out.append(pool_conv(nn.functional.interpolate(
- pool_scale(conv5),
- (input_size[2], input_size[3]),
- mode='bilinear', align_corners=False)))
- ppm_out = torch.cat(ppm_out, 1)
- f = self.ppm_last_conv(ppm_out)
-
- fpn_feature_list = [f]
- for i in reversed(range(len(conv_out) - 1)):
- conv_x = conv_out[i]
- conv_x = self.fpn_in[i](conv_x) # lateral branch
-
- f = nn.functional.interpolate(
- f, size=conv_x.size()[2:], mode='bilinear', align_corners=False) # top-down branch
- f = conv_x + f
-
- fpn_feature_list.append(self.fpn_out[i](f))
-
- fpn_feature_list.reverse() # [P2 - P5]
- output_size = fpn_feature_list[0].size()[2:]
- fusion_list = [fpn_feature_list[0]]
- for i in range(1, len(fpn_feature_list)):
- fusion_list.append(nn.functional.interpolate(
- fpn_feature_list[i],
- output_size,
- mode='bilinear', align_corners=False))
- fusion_out = torch.cat(fusion_list, 1)
- x = self.conv_last(fusion_out)
-
- if self.inference or self.use_softmax: # is True during inference
- x = nn.functional.interpolate(
- x, size=segSize, mode='bilinear', align_corners=False)
- if self.use_softmax:
- x = nn.functional.softmax(x, dim=1)
- return x
-
- x = nn.functional.log_softmax(x, dim=1)
-
- return x
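Tying the classes above together: ModelBuilder constructs an encoder/decoder pair and SegmentationModule wraps them, switching into inference mode when segSize is passed. The sketch below assumes the module (and its companion resnet/resnext files) is importable as netdissect.segmodel.models, that torchvision-pretrained ResNet weights can be downloaded, and uses a dummy tensor in place of a real image; the criterion is only there to satisfy the constructor and is unused at inference:

import torch
import torch.nn as nn
from netdissect.segmodel.models import ModelBuilder, SegmentationModule  # assumed path, per the diff header

builder = ModelBuilder()
net_encoder = builder.build_encoder(arch="resnet50_dilated8", fc_dim=2048, weights="")
net_decoder = builder.build_decoder(arch="ppm_bilinear_deepsup", fc_dim=2048,
                                    num_class=150, weights="",
                                    inference=True, use_softmax=True)

crit = nn.NLLLoss(ignore_index=-1)                   # unused when segSize is given
segmentation_module = SegmentationModule(net_encoder, net_decoder, crit)
segmentation_module.eval()

with torch.no_grad():
    img = torch.randn(1, 3, 256, 256)                # dummy batch in place of a real image
    scores = segmentation_module({"img_data": img}, segSize=(256, 256))
    _, pred = torch.max(scores, dim=1)               # per-pixel class indices

print(scores.shape, pred.shape)                      # [1, 150, 256, 256] and [1, 256, 256]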
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py
deleted file mode 100644
index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py
deleted file mode 100644
index d125d475b96e26c7862d16b5335798ee9defab44..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py
+++ /dev/null
@@ -1,28 +0,0 @@
-_base_ = './panoptic_fpn_r50_fpn_1x_predcls_psg.py'
-
-model = dict(backbone=dict(
- depth=101,
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
-
-# Log config
-project_name = 'openpsg'
-expt_name = 'motifs_panoptic_fpn_r101_fpn_1x_predcls_psg'
-work_dir = f'./work_dirs/{expt_name}'
-
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- # dict(type='TensorboardLoggerHook')
- dict(
- type='WandbLoggerHook',
- init_kwargs=dict(
- project=project_name,
- name=expt_name,
- # config=work_dir + "/cfg.yaml"
- ),
- ),
- ],
-)
-
-load_from = 'work_dirs/checkpoints/panoptic_fpn_r101_fpn_1x_coco_20210820_193950-ab9157a2.pth'
diff --git a/spaces/ECE1786-AG/ArtIstic-GENREator/app.py b/spaces/ECE1786-AG/ArtIstic-GENREator/app.py
deleted file mode 100644
index 59b128133abe1ffcef71db27df3792e64722b180..0000000000000000000000000000000000000000
--- a/spaces/ECE1786-AG/ArtIstic-GENREator/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import torch
-import gradio as gr
-from transformers import pipeline, T5ForConditionalGeneration, T5Tokenizer
-from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
-
-# generate lyrics
-lyrics_generator = pipeline("text-generation", "ECE1786-AG/lyrics-generator")
-
-# summarize lyrics
-model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline")
-tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline")
-
-# generate single cover
-scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler")
-pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = pipe.to(device)
-
-def generate_lyrics(genre, prompt):
- complete_prompt = " <{0}>\n{1}".format(genre, prompt)
- lyrics = lyrics_generator(complete_prompt, max_length=1024)
- lyrics = lyrics[0]['generated_text']
- lyrics = lyrics.split('\n', 1)[1] # remove first line from the generated lyrics
-
- return lyrics
-
-def summarize_lyrics(lyrics):
- text = "headline: " + lyrics
- encoding = tokenizer.encode_plus(text, return_tensors = "pt")
- input_ids = encoding["input_ids"]
- attention_masks = encoding["attention_mask"]
- beam_outputs = model.generate(
- input_ids = input_ids,
- attention_mask = attention_masks,
- max_length = 100,
- num_beams = 5,
- early_stopping = True,
- )
- result = tokenizer.decode(beam_outputs[0])
-    result = result.replace('<pad>', '')  # strip T5 special tokens
-    result = result.replace('</s>', '')
-
- return result
-
-def generate_cover(prompt, style, effect):
- prompt = summarize_lyrics(prompt) # call function summarize_lyrics to summarize lyrics
- prompt = prompt + ", " + effect + ", album cover, artistic, " + style
- print(prompt)
- image = pipe(prompt).images[0]
- return image
-
-demo = gr.Blocks()
-with demo:
- gr.HTML(
- """
-        <div style="text-align: center;">
-            <h1>ArtIstic GENREator</h1>
-            <p>Generate Inspirational Lyrics and Single Cover</p>
-        </div>
- """
- )
-
- with gr.Row():
-
- # Left column (lyrics generation)
- with gr.Column():
- with gr.Accordion("Step 1. Generate Lyrics"):
- gr.Markdown("Enter the starting text and select genre to generate lyrics")
- with gr.Row():
- input_start_text = gr.Textbox(placeholder='I am', label="Starting Text")
- input_lyrics_type = gr.Radio(choices=['pop', 'rap', 'country', 'rock', 'r&b'], value='pop', label="Lyrics Genre")
- button_gen_lyrics = gr.Button("Generate Lyrics", variant="primary")
- output_generated_lyrics = gr.Textbox(label="Generated Lyrics", lines=8)
-
- # Right column (single cover generation)
- with gr.Column():
- with gr.Accordion("Step 2. Generate Single Cover"):
- gr.Markdown("Cover will be generated based on style, effect and generated lyrics")
- with gr.Row():
- input_cover_style = gr.Dropdown(choices=['painted', 'abstract', 'minimalist', 'illustrated', 'photographic', 'vintage'], value='painted', label="Track Cover Style")
- input_cover_effect = gr.Radio(choices=['black and white', 'highly detailed', 'blurred'], value='highly detailed', label="Track Cover Effect")
- button_gen_cover = gr.Button("Generate Cover", variant="primary")
- output_generated_cover = gr.Image(label="Generated Cover")
-
- # Bind functions to buttons
- button_gen_lyrics.click(fn=generate_lyrics, inputs=[input_lyrics_type , input_start_text], outputs=output_generated_lyrics)
- button_gen_cover.click(fn=generate_cover, inputs=[output_generated_lyrics, input_cover_style, input_cover_effect], outputs=output_generated_cover)
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py
deleted file mode 100644
index 2201a58bee9b7808d386b3ef9ac2d1f9630e56ef..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,521 +0,0 @@
-import copy
-import math
-
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
-from torch.nn import functional as F
-from torch.nn.utils import remove_weight_norm, weight_norm
-
-from infer.lib.infer_pack import commons
-from infer.lib.infer_pack.commons import get_padding, init_weights
-from infer.lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
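-        # WaveNet-style stack: each layer applies a dilated conv, a gated
-        # tanh/sigmoid activation (optionally conditioned on g), and splits
-        # the result into a residual path and a skip contribution to output.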
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
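-        # Affine coupling: the first half of the channels predicts a shift (m)
-        # and log-scale (logs) for the second half; reverse=True applies the
-        # inverse transform for sampling.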
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index 06f2b79f5e5c6f2049bf8220c29ae20c3f82d524..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-import parselmouth
-
-from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Linearly interpolate unvoiced (zero-valued) F0 frames and return the
-        interpolated contour together with a voiced/unvoiced mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this assignment may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
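-        # Estimate F0 with Praat's autocorrelation pitch tracker (parselmouth),
-        # pad the contour to p_len frames, then fill unvoiced gaps.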
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
diff --git a/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py b/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py
deleted file mode 100644
index 30126aadff6922c192d949feb95a60ef7890bab7..0000000000000000000000000000000000000000
--- a/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import h5py
-import numpy as np
-from tqdm import tqdm
-import torch
-from knowledge import TextDB
-
-
-class ImageCropsIdx:
- def __init__(self, knowledge_idx, topk_w, topk_f, topk_n):
- topk = {"whole": topk_w, "five": topk_f, "nine": topk_n}
- self.topk = {k: v for k, v in topk.items() if v > 0}
-
- self.knowledge_idx, self.fdim, self.file_hash = self.load(knowledge_idx, self.topk)
-
- def load(self, knowledge_idx, topk):
- with h5py.File(knowledge_idx, "r") as f:
- fdim = f.attrs["fdim"]
- file_hash = f.attrs["file_hash"]
-
- knowledge_idx_ = {}
- for i in tqdm(range(len(f)), desc="Load sentence idx", dynamic_ncols=True, mininterval=1.0):
- knowledge_idx_[str(i)] = {"image_ids": f[f"{i}/image_ids"][:]}
- for k, v in topk.items():
- knowledge_idx_[str(i)][k] = {
- "index": f[f"{i}/{k}/index"][:, :, :v],
- "score": f[f"{i}/{k}/score"][:, :, :v],
- "query": f[f"{i}/{k}/query"][:]
- }
-
- knowledge_idx = {}
- for i in knowledge_idx_.keys():
- for j, id in enumerate(knowledge_idx_[i]["image_ids"]):
- knowledge_idx[id] = {}
- for k in topk.keys():
- knowledge_idx[id][k] = {
- "index": knowledge_idx_[i][k]["index"][j],
- "score": knowledge_idx_[i][k]["score"][j],
- "query": knowledge_idx_[i][k]["query"][j],
- }
-
- return knowledge_idx, fdim, file_hash
-
- def __getitem__(self, image_id):
- return self.knowledge_idx[image_id]
-
-
-class KnowAugImageCrops:
- def __init__(self, knowledge_db: TextDB, knowledge_idx: ImageCropsIdx, return_txt=False):
- self.knowledge_db = knowledge_db
- self.knowledge_idx = knowledge_idx
- assert knowledge_db.file_hash == knowledge_idx.file_hash
-
- self.ncrop = {"whole": 1, "five": 5, "nine": 9}
- self.topk = knowledge_idx.topk
- self.fdim = knowledge_idx.fdim
-
- self.return_txt = return_txt
-
- def __call__(self, image_id):
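-        # For each configured crop granularity (whole/five/nine), gather the
-        # top-k retrieved knowledge embeddings plus their query vectors, crop
-        # positions, and retrieval scores for this image.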
- ret = {}
- for k in self.topk.keys():
- ki = self.knowledge_idx[image_id][k]["index"].flatten()
- ke, kt = self.knowledge_db[ki]
- kq = self.knowledge_idx[image_id][k]["query"]
- kp = np.tile(np.arange(self.ncrop[k])[:, None], (1, self.topk[k])).flatten()
- ks = self.knowledge_idx[image_id][k]["score"].flatten()
-
- ke = torch.FloatTensor(ke)
- kq = torch.FloatTensor(kq)
- kp = torch.LongTensor(kp)
- ks = torch.FloatTensor(ks)
-
- ret[k] = {"embed": ke, "query": kq, "pos": kp, "score": ks}
- if self.return_txt:
- ret[k]["text"] = kt
-
- return ret
-
-
-class KnowAugImageCropsCombined:
- def __init__(
- self,
- knwl_aug_obj: KnowAugImageCrops,
- knwl_aug_attr: KnowAugImageCrops,
- knwl_aug_act: KnowAugImageCrops
- ):
- self.knwl_aug_obj = knwl_aug_obj
- self.knwl_aug_act = knwl_aug_act
- self.knwl_aug_attr = knwl_aug_attr
- self.fdim = knwl_aug_obj.fdim
-
- def __call__(self, image_id):
- knwl_obj = self.knwl_aug_obj(image_id)
- knwl_attr = self.knwl_aug_attr(image_id)
- knwl_act = self.knwl_aug_act(image_id)
-
- ret = {}
- for k in knwl_obj.keys():
- ret[k] = {
- "obj": knwl_obj[k],
- "attr": knwl_attr[k],
- "act": knwl_act[k]
- }
-
- return ret
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py
deleted file mode 100644
index b20a28c446071ed50dad3ce7977ae6c9b459fec3..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import torch
-import numpy as np
-
-import cliport.models as models
-import cliport.models.core.fusion as fusion
-from cliport.models.core.transport import Transport
-
-
-class TwoStreamTransportLangFusion(Transport):
- """Two Stream Transport (a.k.a Place) module"""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device):
- self.fusion_type = cfg['train']['trans_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device)
-
- def _build_nets(self):
- stream_one_fcn, stream_two_fcn = self.stream_fcn
- stream_one_model = models.names[stream_one_fcn]
- stream_two_model = models.names[stream_two_fcn]
-
- self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
- self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
- self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
- self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
- self.fusion_key = fusion.names[self.fusion_type](input_dim=self.kernel_dim)
- self.fusion_query = fusion.names[self.fusion_type](input_dim=self.kernel_dim)
-
- print(f"Transport FCN - Stream One: {stream_one_fcn}, Stream Two: {stream_two_fcn}, Stream Fusion: {self.fusion_type}")
-
- def transport2(self, in_tensor, crop, l):
- logits = self.fusion_key(self.key_stream_one(in_tensor), self.key_stream_two(in_tensor, l))
- kernel = self.fusion_query(self.query_stream_one(crop), self.query_stream_two(crop, l))
- return logits, kernel
-
- def forward(self, inp_img, p, lang_goal, softmax=True):
- """Forward pass."""
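-        # Pad the input image, take n_rotations rotated crops centred on the
-        # pick location p, then correlate language-fused crop (query) features
-        # against full-image (key) features to score placement poses.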
- if len(inp_img.shape) < 4:
- inp_img = inp_img[None]
-
- if type(inp_img) is not torch.Tensor:
- in_data = inp_img # .reshape(in_shape)
- in_tens = torch.from_numpy(in_data).to(dtype=torch.float, device=self.device) # [B W H 6]
- else:
- in_data = inp_img
- in_tens = in_data
-
- in_tensor = torch.nn.functional.pad(in_tens, tuple(self.padding[[2,1,0]].reshape(-1)), mode='constant')
- if type(p[0]) is not torch.Tensor:
- p = torch.FloatTensor(p)[None]
-
- in_tensors = []
- crops = []
-
- # this for loop is fast.
- for i in range(len(in_tensor)):
- in_tensor_i = in_tensor[[i]]
- # Rotation pivot.
- pv = p[i] + self.pad_size
-
- # Crop before network (default for Transporters CoRL 2020).
- hcrop = self.pad_size
- in_tensor_i = in_tensor_i.permute(0, 3, 1, 2)
-
- crop = [in_tensor_i] * self.n_rotations
- crop = self.rotator(crop, pivot=pv.float())
- crop = torch.cat(crop, dim=0)
- crop = crop[:, :, int(pv[0]-hcrop):int(pv[0]+hcrop), int(pv[1]-hcrop):int(pv[1]+hcrop)]
-
- in_tensors.append(in_tensor_i)
- crops.append(crop)
-
- logits, kernels = self.transport(torch.cat(in_tensors,dim=0), torch.cat(crops, dim=0), lang_goal) #crops.shape:(8, 36, 6, 64, 64)
- res = self.correlate(logits, kernels, softmax)
- return res
-
-class TwoStreamTransportLangFusionLat(TwoStreamTransportLangFusion):
- """Two Stream Transport (a.k.a Place) module with lateral connections"""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device):
-
- self.fusion_type = cfg['train']['trans_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device)
-
- def transport(self, in_tensor, crop, l):
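-        # Lateral variant: stream one also returns intermediate features that
-        # are fed into the language-conditioned stream two before fusion.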
- key_out_one, key_lat_one = self.key_stream_one(in_tensor)
- key_out_two = self.key_stream_two(in_tensor, key_lat_one, l)
- logits = self.fusion_key(key_out_one, key_out_two)
-
- query_out_one, query_lat_one = self.query_stream_one(crop)
- query_out_two = self.query_stream_two(crop, query_lat_one, l)
- kernel = self.fusion_query(query_out_one, query_out_two)
-
- return logits, kernel
-
-
-class TwoStreamTransportLangFusionLatReduce(TwoStreamTransportLangFusionLat):
- """Two Stream Transport (a.k.a Place) module with lateral connections"""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device):
-
- self.fusion_type = cfg['train']['trans_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device)
-
- del self.query_stream_one
- del self.query_stream_two
- # del self.key_stream_one
- # del self.key_stream_two
-
- stream_one_fcn = 'plain_resnet_reduce_lat'
- stream_one_model = models.names[stream_one_fcn]
- stream_two_fcn = 'clip_ling'
- stream_two_model = models.names[stream_two_fcn]
-
-
-
- # self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
- # self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
-
- self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
- self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
-
- def transport(self, in_tensor, crop, l):
- key_out_one, key_lat_one = self.key_stream_one(in_tensor)
- key_out_two = self.key_stream_two(in_tensor, key_lat_one, l)
- logits = self.fusion_key(key_out_one, key_out_two)
-
- query_out_one, query_lat_one = self.query_stream_one(crop)
- query_out_two = self.query_stream_two(crop, query_lat_one, l)
- kernel = self.fusion_query(query_out_one, query_out_two)
-
- return logits, kernel
-
-
-
-
-
-class TwoStreamTransportLangFusionLatReduceOneStream(TwoStreamTransportLangFusionLatReduce):
- """Two Stream Transport (a.k.a Place) module with lateral connections"""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device):
-
- self.fusion_type = cfg['train']['trans_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device)
-
- del self.query_stream_one
- del self.query_stream_two
-
-
-
- def transport(self, in_tensor, crop, l):
- key_out_one, key_lat_one = self.key_stream_one(in_tensor)
- key_out_two = self.key_stream_two(in_tensor, key_lat_one, l)
- logits = self.fusion_key(key_out_one, key_out_two)
-
- query_out_one, query_lat_one = self.key_stream_one(crop)
- query_out_two = self.key_stream_two(crop, query_lat_one, l)
- kernel = self.fusion_query(query_out_one, query_out_two)
-
- return logits, kernel
-
-
-
-
-class TwoStreamTransportLangFusionLatPretrained18(TwoStreamTransportLangFusionLat):
- """Two Stream Transport (a.k.a Place) module with lateral connections"""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device):
-
- self.fusion_type = cfg['train']['trans_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device)
-
- del self.query_stream_one
- del self.query_stream_two
- # del self.key_stream_one
- # del self.key_stream_two
- stream_one_fcn = 'pretrained_resnet18'
- stream_one_model = models.names[stream_one_fcn]
- stream_two_fcn = 'clip_ling'
- stream_two_model = models.names[stream_two_fcn]
-
- # self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
- # self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess)
-
- self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
- self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess)
-
- def transport(self, in_tensor, crop, l):
- key_out_one, key_lat_one = self.key_stream_one(in_tensor)
- key_out_two = self.key_stream_two(in_tensor, key_lat_one, l)
- logits = self.fusion_key(key_out_one, key_out_two)
-
- query_out_one, query_lat_one = self.query_stream_one(crop)
- query_out_two = self.query_stream_two(crop, query_lat_one, l)
- kernel = self.fusion_query(query_out_one, query_out_two)
-
- return logits, kernel
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/utils/__init__.py b/spaces/Gen-Sim/Gen-Sim/cliport/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GeorgeOrville/bingo/postcss.config.js b/spaces/GeorgeOrville/bingo/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/Gradio-Blocks/HairCLIP/model.py b/spaces/Gradio-Blocks/HairCLIP/model.py
deleted file mode 100644
index a16120b23a7a88c0c63fd9c74fe89fa8867b16eb..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/HairCLIP/model.py
+++ /dev/null
@@ -1,160 +0,0 @@
-from __future__ import annotations
-
-import argparse
-import os
-import pathlib
-import subprocess
-import sys
-from typing import Callable, Union
-
-import dlib
-import huggingface_hub
-import numpy as np
-import PIL.Image
-import torch
-import torch.nn as nn
-import torchvision.transforms as T
-
-if os.getenv('SYSTEM') == 'spaces' and not torch.cuda.is_available():
- with open('patch.e4e') as f:
- subprocess.run('patch -p1'.split(), cwd='encoder4editing', stdin=f)
- with open('patch.hairclip') as f:
- subprocess.run('patch -p1'.split(), cwd='HairCLIP', stdin=f)
-
-app_dir = pathlib.Path(__file__).parent
-
-e4e_dir = app_dir / 'encoder4editing'
-sys.path.insert(0, e4e_dir.as_posix())
-
-from models.psp import pSp
-from utils.alignment import align_face
-
-hairclip_dir = app_dir / 'HairCLIP'
-mapper_dir = hairclip_dir / 'mapper'
-sys.path.insert(0, hairclip_dir.as_posix())
-sys.path.insert(0, mapper_dir.as_posix())
-
-from mapper.datasets.latents_dataset_inference import LatentsDatasetInference
-from mapper.hairclip_mapper import HairCLIPMapper
-
-
-class Model:
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.landmark_model = self._create_dlib_landmark_model()
- self.e4e = self._load_e4e()
- self.hairclip = self._load_hairclip()
- self.transform = self._create_transform()
-
- @staticmethod
- def _create_dlib_landmark_model():
- path = huggingface_hub.hf_hub_download(
- 'public-data/dlib_face_landmark_model',
- 'shape_predictor_68_face_landmarks.dat')
- return dlib.shape_predictor(path)
-
- def _load_e4e(self) -> nn.Module:
- ckpt_path = huggingface_hub.hf_hub_download('public-data/e4e',
- 'e4e_ffhq_encode.pt')
- ckpt = torch.load(ckpt_path, map_location='cpu')
- opts = ckpt['opts']
- opts['device'] = self.device.type
- opts['checkpoint_path'] = ckpt_path
- opts = argparse.Namespace(**opts)
- model = pSp(opts)
- model.to(self.device)
- model.eval()
- return model
-
- def _load_hairclip(self) -> nn.Module:
- ckpt_path = huggingface_hub.hf_hub_download('public-data/HairCLIP',
- 'hairclip.pt')
- ckpt = torch.load(ckpt_path, map_location='cpu')
- opts = ckpt['opts']
- opts['device'] = self.device.type
- opts['checkpoint_path'] = ckpt_path
- opts['editing_type'] = 'both'
- opts['input_type'] = 'text'
- opts['hairstyle_description'] = 'HairCLIP/mapper/hairstyle_list.txt'
- opts['color_description'] = 'red'
- opts = argparse.Namespace(**opts)
- model = HairCLIPMapper(opts)
- model.to(self.device)
- model.eval()
- return model
-
- @staticmethod
- def _create_transform() -> Callable:
- transform = T.Compose([
- T.Resize(256),
- T.CenterCrop(256),
- T.ToTensor(),
- T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
- ])
- return transform
-
- def detect_and_align_face(self, image: str) -> PIL.Image.Image:
- image = align_face(filepath=image, predictor=self.landmark_model)
- return image
-
- @staticmethod
- def denormalize(tensor: torch.Tensor) -> torch.Tensor:
- return torch.clamp((tensor + 1) / 2 * 255, 0, 255).to(torch.uint8)
-
- def postprocess(self, tensor: torch.Tensor) -> np.ndarray:
- tensor = self.denormalize(tensor)
- return tensor.cpu().numpy().transpose(1, 2, 0)
-
- @torch.inference_mode()
- def reconstruct_face(
- self, image: PIL.Image.Image) -> tuple[np.ndarray, torch.Tensor]:
- input_data = self.transform(image).unsqueeze(0).to(self.device)
- reconstructed_images, latents = self.e4e(input_data,
- randomize_noise=False,
- return_latents=True)
- reconstructed = torch.clamp(reconstructed_images[0].detach(), -1, 1)
- reconstructed = self.postprocess(reconstructed)
- return reconstructed, latents[0]
-
- @torch.inference_mode()
- def generate(self, editing_type: str, hairstyle_index: int,
- color_description: str, latent: torch.Tensor) -> np.ndarray:
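-        # The HairCLIP mapper predicts a latent offset from the text inputs;
-        # the decoder then synthesizes the edited face from the shifted
-        # latent w_hat.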
- opts = self.hairclip.opts
- opts.editing_type = editing_type
- opts.color_description = color_description
-
- if editing_type == 'color':
- hairstyle_index = 0
-
- device = torch.device(opts.device)
-
- dataset = LatentsDatasetInference(latents=latent.unsqueeze(0).cpu(),
- opts=opts)
- w, hairstyle_text_inputs_list, color_text_inputs_list = dataset[0][:3]
-
- w = w.unsqueeze(0).to(device)
- hairstyle_text_inputs = hairstyle_text_inputs_list[
- hairstyle_index].unsqueeze(0).to(device)
- color_text_inputs = color_text_inputs_list[0].unsqueeze(0).to(device)
-
- hairstyle_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device)
- color_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device)
-
- w_hat = w + 0.1 * self.hairclip.mapper(
- w,
- hairstyle_text_inputs,
- color_text_inputs,
- hairstyle_tensor_hairmasked,
- color_tensor_hairmasked,
- )
- x_hat, _ = self.hairclip.decoder(
- [w_hat],
- input_is_latent=True,
- return_latents=True,
- randomize_noise=False,
- truncation=1,
- )
- res = torch.clamp(x_hat[0].detach(), -1, 1)
- res = self.postprocess(res)
- return res
diff --git a/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md b/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md
deleted file mode 100644
index 5f7e90e9f81574e342ac4af6100d154a1ac807d9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Story_and_Video_Generation
-emoji: 📖🎬
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.0.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py
deleted file mode 100644
index 48e1d8bfa47348a13f0da0b9ecf32354fa270340..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py
+++ /dev/null
@@ -1,317 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as cp
-from mmcv.cnn import build_conv_layer, build_norm_layer
-
-from ..builder import BACKBONES
-from ..utils import ResLayer
-from .resnet import Bottleneck as _Bottleneck
-from .resnet import ResNetV1d
-
-
-class RSoftmax(nn.Module):
- """Radix Softmax module in ``SplitAttentionConv2d``.
-
- Args:
- radix (int): Radix of input.
- groups (int): Groups of input.
- """
-
- def __init__(self, radix, groups):
- super().__init__()
- self.radix = radix
- self.groups = groups
-
- def forward(self, x):
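-        # Radix softmax: normalize attention across the radix splits when
-        # radix > 1, otherwise fall back to a plain sigmoid gate.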
- batch = x.size(0)
- if self.radix > 1:
- x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2)
- x = F.softmax(x, dim=1)
- x = x.reshape(batch, -1)
- else:
- x = torch.sigmoid(x)
- return x
-
-
-class SplitAttentionConv2d(nn.Module):
- """Split-Attention Conv2d in ResNeSt.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- channels (int): Number of intermediate channels.
- kernel_size (int | tuple[int]): Size of the convolution kernel.
- stride (int | tuple[int]): Stride of the convolution.
-        padding (int | tuple[int]): Zero-padding added to both sides of
-            the input.
-        dilation (int | tuple[int]): Spacing between kernel elements.
-        groups (int): Number of blocked connections from input channels to
-            output channels. Same as nn.Conv2d.
-        radix (int): Radix of SplitAttentionConv2d. Default: 2.
- reduction_factor (int): Reduction factor of inter_channels. Default: 4.
- conv_cfg (dict): Config dict for convolution layer. Default: None,
- which means using conv2d.
-        norm_cfg (dict): Config dict for normalization layer.
-            Default: dict(type='BN').
- dcn (dict): Config dict for DCN. Default: None.
- """
-
- def __init__(self,
- in_channels,
- channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- radix=2,
- reduction_factor=4,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- dcn=None):
- super(SplitAttentionConv2d, self).__init__()
- inter_channels = max(in_channels * radix // reduction_factor, 32)
- self.radix = radix
- self.groups = groups
- self.channels = channels
- self.with_dcn = dcn is not None
- self.dcn = dcn
- fallback_on_stride = False
- if self.with_dcn:
- fallback_on_stride = self.dcn.pop('fallback_on_stride', False)
- if self.with_dcn and not fallback_on_stride:
- assert conv_cfg is None, 'conv_cfg must be None for DCN'
- conv_cfg = dcn
- self.conv = build_conv_layer(
- conv_cfg,
- in_channels,
- channels * radix,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups * radix,
- bias=False)
- # To be consistent with original implementation, starting from 0
- self.norm0_name, norm0 = build_norm_layer(
- norm_cfg, channels * radix, postfix=0)
- self.add_module(self.norm0_name, norm0)
- self.relu = nn.ReLU(inplace=True)
- self.fc1 = build_conv_layer(
- None, channels, inter_channels, 1, groups=self.groups)
- self.norm1_name, norm1 = build_norm_layer(
- norm_cfg, inter_channels, postfix=1)
- self.add_module(self.norm1_name, norm1)
- self.fc2 = build_conv_layer(
- None, inter_channels, channels * radix, 1, groups=self.groups)
- self.rsoftmax = RSoftmax(radix, groups)
-
- @property
- def norm0(self):
- """nn.Module: the normalization layer named "norm0" """
- return getattr(self, self.norm0_name)
-
- @property
- def norm1(self):
- """nn.Module: the normalization layer named "norm1" """
- return getattr(self, self.norm1_name)
-
- def forward(self, x):
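-        # Split attention: a grouped conv produces `radix` splits, global
-        # average pooling plus two 1x1 convs compute per-split attention,
-        # and the attention-weighted splits are summed.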
- x = self.conv(x)
- x = self.norm0(x)
- x = self.relu(x)
-
- batch, rchannel = x.shape[:2]
- batch = x.size(0)
- if self.radix > 1:
- splits = x.view(batch, self.radix, -1, *x.shape[2:])
- gap = splits.sum(dim=1)
- else:
- gap = x
- gap = F.adaptive_avg_pool2d(gap, 1)
- gap = self.fc1(gap)
-
- gap = self.norm1(gap)
- gap = self.relu(gap)
-
- atten = self.fc2(gap)
- atten = self.rsoftmax(atten).view(batch, -1, 1, 1)
-
- if self.radix > 1:
- attens = atten.view(batch, self.radix, -1, *atten.shape[2:])
- out = torch.sum(attens * splits, dim=1)
- else:
- out = atten * x
- return out.contiguous()
-
-
-class Bottleneck(_Bottleneck):
- """Bottleneck block for ResNeSt.
-
- Args:
-        inplanes (int): Input planes of this block.
- planes (int): Middle planes of this block.
- groups (int): Groups of conv2.
- base_width (int): Base of width in terms of base channels. Default: 4.
- base_channels (int): Base of channels for calculating width.
- Default: 64.
-        radix (int): Radix of SplitAttentionConv2d. Default: 2.
- reduction_factor (int): Reduction factor of inter_channels in
- SplitAttentionConv2d. Default: 4.
- avg_down_stride (bool): Whether to use average pool for stride in
- Bottleneck. Default: True.
- kwargs (dict): Key word arguments for base class.
- """
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- groups=1,
- base_width=4,
- base_channels=64,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True,
- **kwargs):
- """Bottleneck block for ResNeSt."""
- super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
-
- if groups == 1:
- width = self.planes
- else:
- width = math.floor(self.planes *
- (base_width / base_channels)) * groups
-
- self.avg_down_stride = avg_down_stride and self.conv2_stride > 1
-
- self.norm1_name, norm1 = build_norm_layer(
- self.norm_cfg, width, postfix=1)
- self.norm3_name, norm3 = build_norm_layer(
- self.norm_cfg, self.planes * self.expansion, postfix=3)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- self.inplanes,
- width,
- kernel_size=1,
- stride=self.conv1_stride,
- bias=False)
- self.add_module(self.norm1_name, norm1)
- self.with_modulated_dcn = False
- self.conv2 = SplitAttentionConv2d(
- width,
- width,
- kernel_size=3,
- stride=1 if self.avg_down_stride else self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- groups=groups,
- radix=radix,
- reduction_factor=reduction_factor,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- dcn=self.dcn)
- delattr(self, self.norm2_name)
-
- if self.avg_down_stride:
- self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1)
-
- self.conv3 = build_conv_layer(
- self.conv_cfg,
- width,
- self.planes * self.expansion,
- kernel_size=1,
- bias=False)
- self.add_module(self.norm3_name, norm3)
-
- def forward(self, x):
-
- def _inner_forward(x):
- identity = x
-
- out = self.conv1(x)
- out = self.norm1(out)
- out = self.relu(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv1_plugin_names)
-
- out = self.conv2(out)
-
- if self.avg_down_stride:
- out = self.avd_layer(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv2_plugin_names)
-
- out = self.conv3(out)
- out = self.norm3(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv3_plugin_names)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-@BACKBONES.register_module()
-class ResNeSt(ResNetV1d):
- """ResNeSt backbone.
-
- Args:
- groups (int): Number of groups of Bottleneck. Default: 1
- base_width (int): Base width of Bottleneck. Default: 4
- radix (int): Radix of SplitAttentionConv2d. Default: 2
- reduction_factor (int): Reduction factor of inter_channels in
- SplitAttentionConv2d. Default: 4.
- avg_down_stride (bool): Whether to use average pool for stride in
- Bottleneck. Default: True.
- kwargs (dict): Keyword arguments for ResNet.
- """
-
- arch_settings = {
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3)),
- 200: (Bottleneck, (3, 24, 36, 3))
- }
-
- def __init__(self,
- groups=1,
- base_width=4,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True,
- **kwargs):
- self.groups = groups
- self.base_width = base_width
- self.radix = radix
- self.reduction_factor = reduction_factor
- self.avg_down_stride = avg_down_stride
- super(ResNeSt, self).__init__(**kwargs)
-
- def make_res_layer(self, **kwargs):
- """Pack all blocks in a stage into a ``ResLayer``."""
- return ResLayer(
- groups=self.groups,
- base_width=self.base_width,
- base_channels=self.base_channels,
- radix=self.radix,
- reduction_factor=self.reduction_factor,
- avg_down_stride=self.avg_down_stride,
- **kwargs)
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py
deleted file mode 100644
index d8422a2bad5ac2a09d4582a98da4f962dac1a911..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py
+++ /dev/null
@@ -1,185 +0,0 @@
-#!/usr/bin/env python
-
-import argparse, connexion, os, sys, yaml, json, socket
-from netdissect.easydict import EasyDict
-from flask import send_from_directory, redirect
-from flask_cors import CORS
-
-
-from netdissect.serverstate import DissectionProject
-
-__author__ = 'Hendrik Strobelt, David Bau'
-
-CONFIG_FILE_NAME = 'dissect.json'
-projects = {}
-
-app = connexion.App(__name__, debug=False)
-
-
-def get_all_projects():
- res = []
- for key, project in projects.items():
- # print key
- res.append({
- 'project': key,
- 'info': {
- 'layers': [layer['layer'] for layer in project.get_layers()]
- }
- })
- return sorted(res, key=lambda x: x['project'])
-
-def get_layers(project):
- return {
- 'request': {'project': project},
- 'res': projects[project].get_layers()
- }
-
-def get_units(project, layer):
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': projects[project].get_units(layer)
- }
-
-def get_rankings(project, layer):
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': projects[project].get_rankings(layer)
- }
-
-def get_levels(project, layer, quantiles):
- return {
- 'request': {'project': project, 'layer': layer, 'quantiles': quantiles},
- 'res': projects[project].get_levels(layer, quantiles)
- }
-
-def get_channels(project, layer):
- answer = dict(channels=projects[project].get_channels(layer))
- return {
- 'request': {'project': project, 'layer': layer},
- 'res': answer
- }
-
-def post_generate(gen_req):
- project = gen_req['project']
- zs = gen_req.get('zs', None)
- ids = gen_req.get('ids', None)
- return_urls = gen_req.get('return_urls', False)
- assert (zs is None) != (ids is None) # one or the other, not both
- ablations = gen_req.get('ablations', [])
- interventions = gen_req.get('interventions', None)
-    # no z is available when ablations are used
- generated = projects[project].generate_images(zs, ids, interventions,
- return_urls=return_urls)
- return {
- 'request': gen_req,
- 'res': generated
- }
-
-def post_features(feat_req):
- project = feat_req['project']
- ids = feat_req['ids']
- masks = feat_req.get('masks', None)
- layers = feat_req.get('layers', None)
- interventions = feat_req.get('interventions', None)
- features = projects[project].get_features(
- ids, masks, layers, interventions)
- return {
- 'request': feat_req,
- 'res': features
- }
-
-def post_featuremaps(feat_req):
- project = feat_req['project']
- ids = feat_req['ids']
- layers = feat_req.get('layers', None)
- interventions = feat_req.get('interventions', None)
- featuremaps = projects[project].get_featuremaps(
- ids, layers, interventions)
- return {
- 'request': feat_req,
- 'res': featuremaps
- }
-
-@app.route('/client/<path:path>')
-def send_static(path):
- """ serves all files from ./client/ to ``/client/``
-
- :param path: path from api call
- """
- return send_from_directory(args.client, path)
-
-@app.route('/data/<path:path>')
-def send_data(path):
- """ serves all files from the data dir to ``/dissect/``
-
- :param path: path from api call
- """
- print('Got the data route for', path)
- return send_from_directory(args.data, path)
-
-
-@app.route('/')
-def redirect_home():
- return redirect('/client/index.html', code=302)
-
-
-def load_projects(directory):
- """
- searches for CONFIG_FILE_NAME in all subdirectories of directory
- and creates data handlers for all of them
-
- :param directory: scan directory
- :return: null
- """
- project_dirs = []
- # Don't search more than 2 dirs deep.
- search_depth = 2 + directory.count(os.path.sep)
- for root, dirs, files in os.walk(directory):
- if CONFIG_FILE_NAME in files:
- project_dirs.append(root)
- # Don't get subprojects under a project dir.
- del dirs[:]
- elif root.count(os.path.sep) >= search_depth:
- del dirs[:]
- for p_dir in project_dirs:
- print('Loading %s' % os.path.join(p_dir, CONFIG_FILE_NAME))
- with open(os.path.join(p_dir, CONFIG_FILE_NAME), 'r') as jf:
- config = EasyDict(json.load(jf))
- dh_id = os.path.split(p_dir)[1]
- projects[dh_id] = DissectionProject(
- config=config,
- project_dir=p_dir,
- path_url='data/' + os.path.relpath(p_dir, directory),
- public_host=args.public_host)
-
-app.add_api('server.yaml')
-
-# add CORS support
-CORS(app.app, headers='Content-Type')
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--nodebug", default=False)
-parser.add_argument("--address", default="127.0.0.1") # 0.0.0.0 for nonlocal use
-parser.add_argument("--port", default="5001")
-parser.add_argument("--public_host", default=None)
-parser.add_argument("--nocache", default=False)
-parser.add_argument("--data", type=str, default='dissect')
-parser.add_argument("--client", type=str, default='client_dist')
-
-if __name__ == '__main__':
- args = parser.parse_args()
- for d in [args.data, args.client]:
- if not os.path.isdir(d):
- print('No directory %s' % d)
- sys.exit(1)
- args.data = os.path.abspath(args.data)
- args.client = os.path.abspath(args.client)
- if args.public_host is None:
- args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port))
- app.run(port=int(args.port), debug=not args.nodebug, host=args.address,
- use_reloader=False)
-else:
- args, _ = parser.parse_known_args()
- if args.public_host is None:
- args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port))
- load_projects(args.data)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py
deleted file mode 100644
index 8c7b655c8c52f8fa478b4568850ec8f741dab78e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-from typing import Any, Dict, List
-from functools import lru_cache
-from dataclasses import dataclass, field
-
-import torch
-from omegaconf import II
-
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.data.data_utils import lengths_to_mask
-import torch.nn.functional as F
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class Tacotron2CriterionConfig(FairseqDataclass):
- bce_pos_weight: float = field(
- default=1.0,
- metadata={"help": "weight of positive examples for BCE loss"},
- )
- n_frames_per_step: int = field(
- default=0,
- metadata={"help": "Number of frames per decoding step"},
- )
- use_guided_attention_loss: bool = field(
- default=False,
- metadata={"help": "use guided attention loss"},
- )
- guided_attention_loss_sigma: float = field(
- default=0.4,
-        metadata={"help": "sigma of the guided attention loss"},
- )
- ctc_weight: float = field(
- default=0.0, metadata={"help": "weight for CTC loss"}
- )
- sentence_avg: bool = II("optimization.sentence_avg")
-
-
-class GuidedAttentionLoss(torch.nn.Module):
- """
- Efficiently Trainable Text-to-Speech System Based on Deep Convolutional
- Networks with Guided Attention (https://arxiv.org/abs/1710.08969)
- """
-
- def __init__(self, sigma):
- super().__init__()
- self.sigma = sigma
-
- @staticmethod
- @lru_cache(maxsize=8)
- def _get_weight(s_len, t_len, sigma):
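-        # Soft diagonal prior: the penalty grows as the normalized attention
-        # position (y/s_len, x/t_len) drifts away from the diagonal, with
-        # sigma controlling how sharply it grows.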
- grid_x, grid_y = torch.meshgrid(torch.arange(t_len), torch.arange(s_len))
- grid_x = grid_x.to(s_len.device)
- grid_y = grid_y.to(s_len.device)
- w = (grid_y.float() / s_len - grid_x.float() / t_len) ** 2
- return 1.0 - torch.exp(-w / (2 * (sigma ** 2)))
-
- def _get_weights(self, src_lens, tgt_lens):
- bsz, max_s_len, max_t_len = len(src_lens), max(src_lens), max(tgt_lens)
- weights = torch.zeros((bsz, max_t_len, max_s_len))
- for i, (s_len, t_len) in enumerate(zip(src_lens, tgt_lens)):
- weights[i, :t_len, :s_len] = self._get_weight(s_len, t_len,
- self.sigma)
- return weights
-
- @staticmethod
- def _get_masks(src_lens, tgt_lens):
- in_masks = lengths_to_mask(src_lens)
- out_masks = lengths_to_mask(tgt_lens)
- return out_masks.unsqueeze(2) & in_masks.unsqueeze(1)
-
- def forward(self, attn, src_lens, tgt_lens, reduction="mean"):
- weights = self._get_weights(src_lens, tgt_lens).to(attn.device)
- masks = self._get_masks(src_lens, tgt_lens).to(attn.device)
- loss = (weights * attn.transpose(1, 2)).masked_select(masks)
- loss = torch.sum(loss) if reduction == "sum" else torch.mean(loss)
- return loss
-
-
-@register_criterion("tacotron2", dataclass=Tacotron2CriterionConfig)
-class Tacotron2Criterion(FairseqCriterion):
- def __init__(self, task, sentence_avg, n_frames_per_step,
- use_guided_attention_loss, guided_attention_loss_sigma,
- bce_pos_weight, ctc_weight):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- self.n_frames_per_step = n_frames_per_step
- self.bce_pos_weight = bce_pos_weight
-
- self.guided_attn = None
- if use_guided_attention_loss:
- self.guided_attn = GuidedAttentionLoss(guided_attention_loss_sigma)
- self.ctc_weight = ctc_weight
-
- def forward(self, model, sample, reduction="mean"):
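-        # Total loss = L1 + MSE on mel features, BCE on the end-of-speech
-        # logits, plus optional guided-attention and CTC terms.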
- bsz, max_len, _ = sample["target"].size()
- feat_tgt = sample["target"]
- feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len)
- eos_tgt = torch.arange(max_len).to(sample["target"].device)
- eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1)
- eos_tgt = (eos_tgt == (feat_len - 1)).float()
- src_tokens = sample["net_input"]["src_tokens"]
- src_lens = sample["net_input"]["src_lengths"]
- tgt_lens = sample["target_lengths"]
-
- feat_out, eos_out, extra = model(
- src_tokens=src_tokens,
- src_lengths=src_lens,
- prev_output_tokens=sample["net_input"]["prev_output_tokens"],
- incremental_state=None,
- target_lengths=tgt_lens,
- speaker=sample["speaker"]
- )
-
- l1_loss, mse_loss, eos_loss = self.compute_loss(
- extra["feature_out"], feat_out, eos_out, feat_tgt, eos_tgt,
- tgt_lens, reduction,
- )
- attn_loss = torch.tensor(0.).type_as(l1_loss)
- if self.guided_attn is not None:
- attn_loss = self.guided_attn(extra['attn'], src_lens, tgt_lens, reduction)
- ctc_loss = torch.tensor(0.).type_as(l1_loss)
- if self.ctc_weight > 0.:
- net_output = (feat_out, eos_out, extra)
- lprobs = model.get_normalized_probs(net_output, log_probs=True)
- lprobs = lprobs.transpose(0, 1) # T x B x C
- src_mask = lengths_to_mask(src_lens)
- src_tokens_flat = src_tokens.masked_select(src_mask)
- ctc_loss = F.ctc_loss(
- lprobs, src_tokens_flat, tgt_lens, src_lens,
- reduction=reduction, zero_infinity=True
- ) * self.ctc_weight
- loss = l1_loss + mse_loss + eos_loss + attn_loss + ctc_loss
-
- sample_size = sample["nsentences"] if self.sentence_avg \
- else sample["ntokens"]
- logging_output = {
- "loss": utils.item(loss.data),
- "ntokens": sample["ntokens"],
- "nsentences": sample["nsentences"],
- "sample_size": sample_size,
- "l1_loss": utils.item(l1_loss.data),
- "mse_loss": utils.item(mse_loss.data),
- "eos_loss": utils.item(eos_loss.data),
- "attn_loss": utils.item(attn_loss.data),
- "ctc_loss": utils.item(ctc_loss.data),
- }
- return loss, sample_size, logging_output
-
- def compute_loss(self, feat_out, feat_out_post, eos_out, feat_tgt,
- eos_tgt, tgt_lens, reduction="mean"):
- mask = lengths_to_mask(tgt_lens)
- _eos_out = eos_out[mask].squeeze()
- _eos_tgt = eos_tgt[mask]
- _feat_tgt = feat_tgt[mask]
- _feat_out = feat_out[mask]
- _feat_out_post = feat_out_post[mask]
-
- l1_loss = (
- F.l1_loss(_feat_out, _feat_tgt, reduction=reduction) +
- F.l1_loss(_feat_out_post, _feat_tgt, reduction=reduction)
- )
- mse_loss = (
- F.mse_loss(_feat_out, _feat_tgt, reduction=reduction) +
- F.mse_loss(_feat_out_post, _feat_tgt, reduction=reduction)
- )
- eos_loss = F.binary_cross_entropy_with_logits(
- _eos_out, _eos_tgt, pos_weight=torch.tensor(self.bce_pos_weight),
- reduction=reduction
- )
- return l1_loss, mse_loss, eos_loss
-
- @classmethod
- def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None:
- ns = [log.get("sample_size", 0) for log in logging_outputs]
- ntot = sum(ns)
- ws = [n / (ntot + 1e-8) for n in ns]
- for key in ["loss", "l1_loss", "mse_loss", "eos_loss", "attn_loss", "ctc_loss"]:
- vals = [log.get(key, 0) for log in logging_outputs]
- val = sum(val * w for val, w in zip(vals, ws))
- metrics.log_scalar(key, val, ntot, round=3)
- metrics.log_scalar("sample_size", ntot, len(logging_outputs))
-
- # inference metrics
- if "targ_frames" not in logging_outputs[0]:
- return
- n = sum(log.get("targ_frames", 0) for log in logging_outputs)
- for key, new_key in [
- ("mcd_loss", "mcd_loss"),
- ("pred_frames", "pred_ratio"),
- ("nins", "ins_rate"),
- ("ndel", "del_rate"),
- ]:
- val = sum(log.get(key, 0) for log in logging_outputs)
- metrics.log_scalar(new_key, val / n, n, round=3)
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- return False
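
For readers skimming the criterion above, the EOS target construction in `forward` is easy to misread. A minimal sketch with made-up lengths shows what it produces: one positive frame per utterance, at its last valid position, which is then scored with `binary_cross_entropy_with_logits` in `compute_loss`.

```python
import torch

tgt_lens = torch.tensor([3, 5])                    # toy target lengths
bsz, max_len = 2, 5
feat_len = tgt_lens.view(bsz, 1).expand(-1, max_len)
eos_tgt = torch.arange(max_len).view(1, max_len).expand(bsz, -1)
eos_tgt = (eos_tgt == (feat_len - 1)).float()
# tensor([[0., 0., 1., 0., 0.],
#         [0., 0., 0., 0., 1.]])
```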
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py
deleted file mode 100644
index 503ceaa609b092e48bd32a0031f4e2ffb875483f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-from .ema import EMA
-
-
-def build_ema(model, cfg, device):
- return EMA(model, cfg, device)
-
-
-# automatically import any Python files in the models/ema/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- file_name = file[: file.find(".py")]
- importlib.import_module("fairseq.models.ema." + file_name)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py
deleted file mode 100644
index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py
+++ /dev/null
@@ -1,630 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-import logging
-import math
-from typing import Optional, Tuple
-from omegaconf import II
-import sys
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- Fp32GroupNorm,
- Fp32LayerNorm,
- GumbelVectorQuantizer,
- KmeansVectorQuantizer,
- TransposeLast,
-)
-from fairseq.tasks import FairseqTask
-from fairseq.utils import buffered_arange
-
-
-logger = logging.getLogger(__name__)
-
-
-AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"])
-PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"])
-ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"])
-VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"])
-
-
-@dataclass
-class Wav2VecConfig(FairseqDataclass):
- prediction_steps: int = field(
- default=12, metadata={"help": "number of steps ahead to predict"}
- )
- sample_distance: Optional[int] = field(
- default=None,
- metadata={
- "help": "sample distance from target. does not work properly with cross-sampling"
- },
- )
- cross_sample_negatives: int = field(
- default=0, metadata={"help": "num of cross sampled negatives"}
- )
- num_negatives: int = field(
- default=10, metadata={"help": "num of sampled negatives"}
- )
- conv_feature_layers: str = field(
- default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]",
- metadata={
- "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]"
- },
- )
- conv_aggregator_layers: str = field(
- default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]",
- metadata={
- "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]"
- },
- )
- dropout: float = field(
- default=0.0, metadata={"help": "dropout to apply within the model"}
- )
- dropout_features: float = field(
- default=0.0, metadata={"help": "dropout to apply to the features"}
- )
- dropout_agg: float = field(
- default=0.0, metadata={"help": "dropout to apply after aggregation step"}
- )
- aggregator: AGGREGATOR_CHOICES = field(
- default="cnn", metadata={"help": "type of aggregator to use"}
- )
- gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"})
- no_conv_bias: bool = field(
- default=False, metadata={"help": "if set, does not learn bias for conv layers"}
- )
- agg_zero_pad: bool = field(
- default=False,
- metadata={"help": "if set, zero pads in aggregator instead of repl pad"},
- )
- skip_connections_feat: bool = field(
- default=False,
- metadata={"help": "if set, adds skip connections to the feature extractor"},
- )
- skip_connections_agg: bool = field(
- default=True,
- metadata={"help": "if set, adds skip connections to the aggregator"},
- )
- residual_scale: float = field(
- default=0.5, metadata={"help": "scales residual by sqrt(value)"}
- )
- log_compression: bool = field(
- default=True,
- metadata={"help": "if set, adds a log compression to feature extractor"},
- )
- balanced_classes: bool = field(
- default=False,
- metadata={"help": "if set, loss is scaled to balance for number of negatives"},
- )
- project_features: PROJECT_FEATURES_CHOICES = field(
- default="none",
- metadata={
- "help": "if not none, features are projected using the (same or new) aggregator"
- },
- )
- non_affine_group_norm: bool = field(
- default=False, metadata={"help": "if set, group norm is not affine"}
- )
- offset: str = field(
- default="auto",
- metadata={
- "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value"
- },
- )
- activation: ACTIVATION_CHOICES = field(
- default="relu",
- metadata={
- "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value"
- },
- )
- vq_type: VQ_TYPE_CHOICES = field(
- default="none", metadata={"help": "which type of quantizer to use"}
- )
- vq_vars: int = field(
- default=320,
- metadata={"help": "project to this many vector quantized variables per group"},
- )
- vq_groups: int = field(
- default=2, metadata={"help": "number of groups of latent variables"}
- )
- vq_dim: int = field(
- default=0,
- metadata={
- "help": "uses this dimensionality for quantized vectors. 0 to use model dim // groups"
- },
- )
- vq_depth: int = field(
- default=1, metadata={"help": "number of layers for vq weight projection"}
- )
- combine_groups: bool = field(
- default=False, metadata={"help": "if set, variables are shared among groups"}
- )
- vq_temp: Tuple[float, float, float] = field(
- default=(2.0, 0.5, 0.999995),
- metadata={
- "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)"
- },
- )
- vq_gamma: float = field(
- default=0.25,
- metadata={"help": "gamma parameter for kmeans style vector quantization"},
- )
- infonce: bool = II("criterion.infonce")
-
-
-@register_model("wav2vec", dataclass=Wav2VecConfig)
-class Wav2VecModel(BaseFairseqModel):
- @classmethod
- def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask):
- """Build a new model instance."""
-
- model = Wav2VecModel(cfg)
- logger.info(model)
- return model
-
- def __init__(self, cfg: Wav2VecConfig):
- super().__init__()
-
- self.prediction_steps = cfg.prediction_steps
- offset = cfg.offset
-
- if cfg.activation == "relu":
- activation = nn.ReLU()
- elif cfg.activation == "gelu":
- activation = nn.GELU()
- else:
- raise Exception("unknown activation " + cfg.activation)
-
- feature_enc_layers = eval(cfg.conv_feature_layers)
- self.feature_extractor = ConvFeatureExtractionModel(
- conv_layers=feature_enc_layers,
- dropout=0.0,
- log_compression=cfg.log_compression,
- skip_connections=cfg.skip_connections_feat,
- residual_scale=cfg.residual_scale,
- non_affine_group_norm=cfg.non_affine_group_norm,
- activation=activation,
- )
- embed = feature_enc_layers[-1][0]
-
- self.vector_quantizer = None
- if cfg.vq_type == "gumbel":
- self.vector_quantizer = GumbelVectorQuantizer(
- dim=embed,
- num_vars=cfg.vq_vars,
- temp=cfg.vq_temp,
- groups=cfg.vq_groups,
- combine_groups=cfg.combine_groups,
- vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed,
- time_first=False,
- activation=activation,
- weight_proj_depth=cfg.vq_depth,
- weight_proj_factor=2,
- )
- elif cfg.vq_type == "kmeans":
- self.vector_quantizer = KmeansVectorQuantizer(
- dim=embed,
- num_vars=cfg.vq_vars,
- groups=cfg.vq_groups,
- combine_groups=cfg.combine_groups,
- vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed,
- time_first=False,
- gamma=cfg.vq_gamma,
- )
- else:
- assert (
- cfg.vq_type == "none" or cfg.vq_type is None
- ), "Unknown quantizer type"
-
- if cfg.offset == "auto":
- jin = 0
- rin = 0
- for _, k, stride in feature_enc_layers:
- if rin == 0:
- rin = k
- rin = rin + (k - 1) * jin
- if jin == 0:
- jin = stride
- else:
- jin *= stride
- offset = math.ceil(rin / jin)
-
- offset = int(offset)
-
- def make_aggregator():
- if cfg.aggregator == "cnn":
- agg_layers = eval(cfg.conv_aggregator_layers)
- agg_dim = agg_layers[-1][0]
- feature_aggregator = ConvAggegator(
- conv_layers=agg_layers,
- embed=embed,
- dropout=cfg.dropout,
- skip_connections=cfg.skip_connections_agg,
- residual_scale=cfg.residual_scale,
- non_affine_group_norm=cfg.non_affine_group_norm,
- conv_bias=not cfg.no_conv_bias,
- zero_pad=cfg.agg_zero_pad,
- activation=activation,
- )
- elif cfg.aggregator == "gru":
- agg_dim = cfg.gru_dim
- feature_aggregator = nn.Sequential(
- TransposeLast(),
- nn.GRU(
- input_size=embed,
- hidden_size=agg_dim,
- num_layers=1,
- dropout=cfg.dropout,
- ),
- TransposeLast(deconstruct_idx=0),
- )
- else:
- raise Exception("unknown aggregator type " + cfg.aggregator)
-
- return feature_aggregator, agg_dim
-
- self.feature_aggregator, agg_dim = make_aggregator()
-
- self.wav2vec_predictions = Wav2VecPredictionsModel(
- in_dim=agg_dim,
- out_dim=embed,
- prediction_steps=cfg.prediction_steps,
- n_negatives=cfg.num_negatives,
- cross_sample_negatives=cfg.cross_sample_negatives,
- sample_distance=cfg.sample_distance,
- dropout=cfg.dropout,
- offset=offset,
- balanced_classes=cfg.balanced_classes,
- infonce=cfg.infonce,
- )
-
- self.dropout_feats = nn.Dropout(p=cfg.dropout_features)
- self.dropout_agg = nn.Dropout(p=cfg.dropout_agg)
-
- if cfg.project_features == "none":
- self.project_features = None
- elif cfg.project_features == "same":
- self.project_features = self.feature_aggregator
- elif cfg.project_features == "new":
- self.project_features, _ = make_aggregator()
-
- def forward(self, source):
- result = {}
-
- features = self.feature_extractor(source)
- if self.vector_quantizer:
- q_res = self.vector_quantizer(features)
- features = q_res["x"]
- for k in q_res.keys():
- if k != "x":
- result[k] = q_res[k]
-
- x = self.dropout_feats(features)
- x = self.feature_aggregator(x)
- x = self.dropout_agg(x)
-
- if self.project_features is not None:
- features = self.project_features(features)
- x, targets = self.wav2vec_predictions(x, features)
- result["cpc_logits"] = x
- result["cpc_targets"] = targets
-
- return result
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
-
- def max_positions(self):
- """Maximum length supported by the model."""
- return sys.maxsize
-
- def get_logits(self, net_output):
- logits = net_output["cpc_logits"]
- return logits
-
- def get_targets(self, sample, net_output):
- t = net_output["cpc_targets"]
- if isinstance(t, tuple):
- t = t[0]
- return t.contiguous()
-
- def get_target_weights(self, targets, net_output):
- targets = net_output["cpc_targets"]
- if isinstance(targets, tuple) and targets[-1] is not None:
- return targets[-1]
- return None
-
- def get_extra_losses(self, net_output):
- loss = None
- if "prob_perplexity" in net_output:
- loss = net_output["num_vars"] - net_output["prob_perplexity"]
- elif "kmeans_loss" in net_output:
- loss = net_output["kmeans_loss"]
-
- return loss
-
-
-def norm_block(is_layer_norm, dim, affine=True):
- if is_layer_norm:
- mod = nn.Sequential(
- TransposeLast(),
- Fp32LayerNorm(dim, elementwise_affine=affine),
- TransposeLast(),
- )
- else:
- mod = Fp32GroupNorm(1, dim, affine=affine)
-
- return mod
-
-
-class ConvFeatureExtractionModel(nn.Module):
- def __init__(
- self,
- conv_layers,
- dropout,
- log_compression,
- skip_connections,
- residual_scale,
- non_affine_group_norm,
- activation,
- ):
- super().__init__()
-
- def block(n_in, n_out, k, stride):
- return nn.Sequential(
- nn.Conv1d(n_in, n_out, k, stride=stride, bias=False),
- nn.Dropout(p=dropout),
- norm_block(
- is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm
- ),
- activation,
- )
-
- in_d = 1
- self.conv_layers = nn.ModuleList()
- for dim, k, stride in conv_layers:
- self.conv_layers.append(block(in_d, dim, k, stride))
- in_d = dim
-
- self.log_compression = log_compression
- self.skip_connections = skip_connections
- self.residual_scale = math.sqrt(residual_scale)
-
- def forward(self, x):
- # BxT -> BxCxT
- x = x.unsqueeze(1)
-
- for conv in self.conv_layers:
- residual = x
- x = conv(x)
- if self.skip_connections and x.size(1) == residual.size(1):
- tsz = x.size(2)
- r_tsz = residual.size(2)
- residual = residual[..., :: r_tsz // tsz][..., :tsz]
- x = (x + residual) * self.residual_scale
-
- if self.log_compression:
- x = x.abs()
- x = x + 1
- x = x.log()
-
- return x
-
-
-class ZeroPad1d(nn.Module):
- def __init__(self, pad_left, pad_right):
- super().__init__()
- self.pad_left = pad_left
- self.pad_right = pad_right
-
- def forward(self, x):
- return F.pad(x, (self.pad_left, self.pad_right))
-
-
-class ConvAggegator(nn.Module):
- def __init__(
- self,
- conv_layers,
- embed,
- dropout,
- skip_connections,
- residual_scale,
- non_affine_group_norm,
- conv_bias,
- zero_pad,
- activation,
- ):
- super().__init__()
-
- def block(n_in, n_out, k, stride):
- # padding dims only really make sense for stride = 1
- ka = k // 2
- kb = ka - 1 if k % 2 == 0 else ka
-
- pad = (
- ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0))
- )
-
- return nn.Sequential(
- pad,
- nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias),
- nn.Dropout(p=dropout),
- norm_block(False, n_out, affine=not non_affine_group_norm),
- activation,
- )
-
- in_d = embed
- self.conv_layers = nn.ModuleList()
- self.residual_proj = nn.ModuleList()
- for dim, k, stride in conv_layers:
- if in_d != dim and skip_connections:
- self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False))
- else:
- self.residual_proj.append(None)
-
- self.conv_layers.append(block(in_d, dim, k, stride))
- in_d = dim
- self.conv_layers = nn.Sequential(*self.conv_layers)
- self.skip_connections = skip_connections
- self.residual_scale = math.sqrt(residual_scale)
-
- def forward(self, x):
- for rproj, conv in zip(self.residual_proj, self.conv_layers):
- residual = x
- x = conv(x)
- if self.skip_connections:
- if rproj is not None:
- residual = rproj(residual)
- x = (x + residual) * self.residual_scale
- return x
-
-
-class Wav2VecPredictionsModel(nn.Module):
- def __init__(
- self,
- in_dim,
- out_dim,
- prediction_steps,
- n_negatives,
- cross_sample_negatives,
- sample_distance,
- dropout,
- offset,
- balanced_classes,
- infonce,
- ):
- super().__init__()
-
- self.n_negatives = n_negatives
- self.cross_sample_negatives = cross_sample_negatives
- self.sample_distance = sample_distance
- self.project_to_steps = nn.ConvTranspose2d(
- in_dim, out_dim, (1, prediction_steps)
- )
- self.dropout = nn.Dropout(p=dropout)
- self.offset = offset
- self.balanced_classes = balanced_classes
- self.infonce = infonce
-
- def sample_negatives(self, y):
- bsz, fsz, tsz = y.shape
-
- y = y.transpose(0, 1) # BCT -> CBT
- y = y.contiguous().view(fsz, -1) # CBT => C(BxT)
-
- cross_high = tsz * bsz
- high = tsz if self.sample_distance is None else min(tsz, self.sample_distance)
- assert high > 1
-
- neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz))
-
- with torch.no_grad():
- if self.n_negatives > 0:
- tszs = (
- buffered_arange(tsz)
- .unsqueeze(-1)
- .expand(-1, self.n_negatives)
- .flatten()
- )
-
- neg_idxs = torch.randint(
- low=0, high=high - 1, size=(bsz, self.n_negatives * tsz)
- )
- neg_idxs[neg_idxs >= tszs] += 1
-
- if self.cross_sample_negatives > 0:
- tszs = (
- buffered_arange(tsz)
- .unsqueeze(-1)
- .expand(-1, self.cross_sample_negatives)
- .flatten()
- )
-
- cross_neg_idxs = torch.randint(
- low=0,
- high=cross_high - 1,
- size=(bsz, self.cross_sample_negatives * tsz),
- )
- cross_neg_idxs[cross_neg_idxs >= tszs] += 1
-
- if self.n_negatives > 0:
- for i in range(1, bsz):
- neg_idxs[i] += i * high
- else:
- neg_idxs = cross_neg_idxs
-
- if self.cross_sample_negatives > 0 and self.n_negatives > 0:
- neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1)
-
- negs = y[..., neg_idxs.view(-1)]
- negs = negs.view(
- fsz, bsz, self.n_negatives + self.cross_sample_negatives, tsz
- ).permute(
- 2, 1, 0, 3
- ) # to NxBxCxT
-
- return negs
-
- def forward(self, x, y):
-
- x = x.unsqueeze(-1)
- x = self.project_to_steps(x) # BxCxTxS
- x = self.dropout(x)
-
- negatives = self.sample_negatives(y)
- y = y.unsqueeze(0)
- targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T
-
- copies = targets.size(0)
- bsz, dim, tsz, steps = x.shape
- steps = min(steps, tsz - self.offset)
-
- predictions = x.new(
- bsz * copies * (tsz - self.offset + 1) * steps
- - ((steps + 1) * steps // 2) * copies * bsz
- )
- if self.infonce:
- labels = predictions.new_full(
- (predictions.shape[0] // copies,), 0, dtype=torch.long
- )
- else:
- labels = torch.zeros_like(predictions)
- weights = (
- torch.full_like(labels, 1 / self.n_negatives)
- if self.balanced_classes and not self.infonce
- else None
- )
-
- start = end = 0
- for i in range(steps):
- offset = i + self.offset
- end = start + (tsz - offset) * bsz * copies
- if self.infonce:
- predictions[start:end] = torch.einsum(
- "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:]
- ).flatten()
- else:
- pos_num = (end - start) // copies
- predictions[start:end] = torch.einsum(
- "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:]
- ).flatten()
- labels[start : start + pos_num] = 1.0
- if weights is not None:
- weights[start : start + pos_num] = 1.0
- start = end
- assert end == predictions.numel(), "{} != {}".format(end, predictions.numel())
-
- if self.infonce:
- predictions = predictions.view(-1, copies)
- else:
- if weights is not None:
- labels = (labels, weights)
-
- return predictions, labels
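
As a reference for the `offset == "auto"` branch in `Wav2VecModel.__init__` above, a short sketch of the same loop run on the default `conv_feature_layers`: `rin` accumulates the receptive field, `jin` the total stride, and the resulting offset (3 here) is what `Wav2VecPredictionsModel` receives.

```python
import math

# Default conv_feature_layers from Wav2VecConfig: (dim, kernel, stride) per layer.
layers = [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2),
          (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]

jin, rin = 0, 0                          # total stride / receptive field so far
for _, k, stride in layers:
    if rin == 0:
        rin = k
    rin = rin + (k - 1) * jin
    jin = stride if jin == 0 else jin * stride

print(rin, jin, math.ceil(rin / jin))    # 465 160 3 -> offset = 3
```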
diff --git a/spaces/Harsha86390/mygenaichatgpt/README.md b/spaces/Harsha86390/mygenaichatgpt/README.md
deleted file mode 100644
index 192da35f7b1f227368d9f48815239c531ca28743..0000000000000000000000000000000000000000
--- a/spaces/Harsha86390/mygenaichatgpt/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mygenaichatgpt
-emoji: 😻
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py b/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py
deleted file mode 100644
index f4138ea75f1587fb8a53f75adf02ab2c33751b4c..0000000000000000000000000000000000000000
--- a/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import torch
-from torch.utils.data.dataloader import DataLoader,Dataset
-import torch.optim as optim
-import albumentations as A
-from albumentations.pytorch import ToTensorV2
-
-import numpy as np
-import matplotlib.pyplot as plt
-import os
-from PIL import Image
-
-class Segmentation_Dataset(Dataset):
- def __init__(self,img_dir,mask_dir,transform=None):
- self.img_dir=img_dir
- self.mask_dir=mask_dir
- self.transform=transform
- self.images=os.listdir(img_dir)
- self.images=[im for im in self.images if ".jpg" in im]
- def __len__(self):
- return len(self.images)
-
- def __getitem__(self,idx):
- img_path=os.path.join(self.img_dir,self.images[idx])
- mask_path=os.path.join(self.mask_dir,self.images[idx].replace(".jpg",".png"))
-
- image=np.array(Image.open(img_path).convert("RGB"))
- mask=np.array(Image.open(mask_path).convert("L"),dtype=np.float32)
- mask[mask==255]=1.0
-
- if self.transform is not None:
- augmentations=self.transform(image=image,mask=mask)
- image=augmentations["image"]
- mask=augmentations["mask"]
-
- return image, mask
-
\ No newline at end of file
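
A minimal usage sketch for `Segmentation_Dataset` above. The directory names and the albumentations pipeline are placeholders, not taken from the original project; the only requirement the class imposes is that each `images/*.jpg` has a matching `masks/*.png`.

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2
from torch.utils.data import DataLoader

transform = A.Compose([A.Resize(160, 240), A.Normalize(), ToTensorV2()])
ds = Segmentation_Dataset("data/images", "data/masks", transform=transform)
loader = DataLoader(ds, batch_size=8, shuffle=True)
image, mask = ds[0]   # image: 3x160x240 float tensor, mask: 160x240 with values {0.0, 1.0}
```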
diff --git a/spaces/Hila/RobustViT/ViT/weight_init.py b/spaces/Hila/RobustViT/ViT/weight_init.py
deleted file mode 100644
index 616373c3c1d0e9dc9cac51f85d791346e2240c99..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/ViT/weight_init.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import torch
-import math
-import warnings
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- # Cut & paste from PyTorch official master until it's in a few official releases - RW
- # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
- "The distribution of values may be incorrect.",
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- l = norm_cdf((a - mean) / std)
- u = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * l - 1, 2 * u - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- # type: (Tensor, float, float, float, float) -> Tensor
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor: an n-dimensional `torch.Tensor`
- mean: the mean of the normal distribution
- std: the standard deviation of the normal distribution
- a: the minimum cutoff value
- b: the maximum cutoff value
- Examples:
- >>> w = torch.empty(3, 5)
- >>> nn.init.trunc_normal_(w)
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
\ No newline at end of file
diff --git a/spaces/Hila/RobustViT/segmentation_dataset.py b/spaces/Hila/RobustViT/segmentation_dataset.py
deleted file mode 100644
index 285400bffbeb5aa24121e13dfefb220fad01d22a..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/segmentation_dataset.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import json
-from torch.utils import data
-from torchvision.datasets import ImageFolder
-import torch
-import os
-from PIL import Image
-import numpy as np
-import argparse
-from tqdm import tqdm
-from munkres import Munkres
-import multiprocessing
-from multiprocessing import Process, Manager
-import collections
-import torchvision.transforms as transforms
-import torchvision.transforms.functional as TF
-import random
-import torchvision
-import cv2
-import random
-torch.manual_seed(0)
-
-SegItem = collections.namedtuple('SegItem', ('image_name', 'tag'))
-
-normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
- std=[0.5, 0.5, 0.5])
-
-TRANSFORM_TRAIN = transforms.Compose([
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- ])
-
-TRANSFORM_EVAL = transforms.Compose([
- transforms.Resize(256),
- transforms.CenterCrop(224),
-])
-
-IMAGE_TRANSFORMS = transforms.Compose([
- transforms.ToTensor(),
- normalize
-])
-
-MERGED_TAGS = {'n04356056', 'n04355933',
- 'n04493381', 'n02808440',
- 'n03642806', 'n03832673',
- 'n04008634', 'n03773504',
- 'n03887697', 'n15075141'}
-
-TRAIN_PARTITION = "train"
-VAL_PARTITION = "val"
-LEGAL_PARTITIONS = {TRAIN_PARTITION, VAL_PARTITION}
-
-# TRAIN_CLASSES = 500
-
-class SegmentationDataset(ImageFolder):
- def __init__(self, seg_path, imagenet_path, partition=TRAIN_PARTITION, num_samples=2, train_classes=500
- , imagenet_classes_path='imagenet_classes.json', seed=None):
- assert partition in LEGAL_PARTITIONS
- self._partition = partition
- self._seg_path = seg_path
- self._imagenet_path = imagenet_path
- with open(imagenet_classes_path, 'r') as f:
- self._imagenet_classes = json.load(f)
- self._tag_list = [tag for tag in os.listdir(self._seg_path) if tag not in MERGED_TAGS]
- if seed:
- print(f'Shuffling training classes with seed {seed}')
- random.seed(seed)
- random.shuffle(self._tag_list)
- if partition == TRAIN_PARTITION:
- # Skip merged tags
- self._tag_list = self._tag_list[:train_classes]
- elif partition == VAL_PARTITION:
- # Skip merged tags
- self._tag_list = self._tag_list[train_classes:]
- for tag in self._tag_list:
- assert tag in self._imagenet_classes
- self._all_segementations = []
- for tag in self._tag_list:
- base_dir = os.path.join(self._seg_path, tag)
- for i, seg in enumerate(os.listdir(base_dir)):
- if i >= num_samples:
- break
- self._all_segementations.append(SegItem(seg.split('.')[0], tag))
-
- def __getitem__(self, item):
- seg_item = self._all_segementations[item]
-
- seg_path = os.path.join(self._seg_path, seg_item.tag, seg_item.image_name + ".png")
- image_path = os.path.join(self._imagenet_path, seg_item.tag, seg_item.image_name + ".JPEG")
-
- seg_map = Image.open(seg_path)
- image = Image.open(image_path)
- image = image.convert('RGB')
-
- seg_map = np.array(seg_map)
- seg_map = seg_map[:, :, 1] * 256 + seg_map[:, :, 0]
-
- assert len([cand for cand in np.unique(seg_map) if cand != 0 and cand != 1000]) == 1
-
- # Convert to binary seg maps
- seg_map[seg_map == 1000] = 0
- seg_map[seg_map != 0] = 1
-
- seg_map = torch.from_numpy(seg_map.astype(np.float32))
-
- # transforms - start
- seg_map = seg_map.reshape(1, seg_map.shape[-2], seg_map.shape[-1])
-
- if self._partition == VAL_PARTITION:
- image = TRANSFORM_EVAL(image)
- seg_map = TRANSFORM_EVAL(seg_map)
-
- elif self._partition == TRAIN_PARTITION:
- # Resize
- resize = transforms.Resize(size=(256, 256))
- image = resize(image)
- seg_map = resize(seg_map)
-
- # Random crop
- i, j, h, w = transforms.RandomCrop.get_params(
- image, output_size=(224, 224))
- image = TF.crop(image, i, j, h, w)
- seg_map = TF.crop(seg_map, i, j, h, w)
-
- # RandomHorizontalFlip
- if random.random() > 0.5:
- image = TF.hflip(image)
- seg_map = TF.hflip(seg_map)
-
- else:
- raise Exception(f"Unsupported partition type {self._partition}")
-
- # normalize original image and turn to tensor
- image_ten = IMAGE_TRANSFORMS(image)
- # transforms - end
-
- class_name = int(self._imagenet_classes[seg_item.tag])
-
- return seg_map, image_ten, class_name
-
- def __len__(self):
- return len(self._all_segementations)
\ No newline at end of file
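
A usage sketch for `SegmentationDataset` above, with placeholder paths: `seg_path` is expected to hold one folder of `.png` maps per synset and `imagenet_path` the matching `.JPEG` images.

```python
from torch.utils.data import DataLoader

train_set = SegmentationDataset(
    seg_path="imagenet_seg/train",          # placeholder paths
    imagenet_path="imagenet/train",
    partition=TRAIN_PARTITION,
    num_samples=2,
    train_classes=500,
    imagenet_classes_path="imagenet_classes.json",
    seed=42,
)
loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=4)
seg_map, image, target = next(iter(loader))  # seg_map: Bx1x224x224, image: Bx3x224x224
```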
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md
deleted file mode 100644
index 16447f041e4751f79d9f7848b33ef2ff943d63c2..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md
+++ /dev/null
@@ -1,102 +0,0 @@
-[[Back]](..)
-
-# S2T Example: ST on CoVoST
-We replicate the experiments in
-[CoVoST 2 and Massively Multilingual Speech-to-Text Translation (Wang et al., 2020)](https://arxiv.org/abs/2007.10310).
-
-## Data Preparation
-[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path
-`${COVOST_ROOT}/${SOURCE_LANG_ID}`, then preprocess it with
-```bash
-# additional Python packages for S2T data processing/model training
-pip install pandas torchaudio sentencepiece
-
-# En ASR
-python examples/speech_to_text/prep_covost_data.py \
- --data-root ${COVOST_ROOT} --vocab-type char --src-lang en
-# ST
-python examples/speech_to_text/prep_covost_data.py \
- --data-root ${COVOST_ROOT} --vocab-type char \
- --src-lang fr --tgt-lang en
-```
-The generated files (manifest, features, vocabulary and data configuration) will be added to
-`${COVOST_ROOT}/${SOURCE_LANG_ID}`.
-
-Download our vocabulary files if you want to use our pre-trained models:
-- ASR: [En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_vocab_char.zip)
-- ST: [Fr-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_vocab_char.zip), [De-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_vocab_char.zip), [Es-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_vocab_char.zip), [Ca-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_vocab_char.zip), [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_vocab_char.zip), [En-Ca](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_vocab_char.zip), [En-Fa](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_vocab_char.zip), [En-Et](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_vocab_char.zip)
-
-## ASR
-#### Training
-We train an En ASR model for encoder pre-training of all ST models:
-```bash
-fairseq-train ${COVOST_ROOT}/en \
- --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \
- --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 50000 --max-update 60000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --report-accuracy --arch s2t_transformer_s --dropout 0.15 --optimizer adam --lr 2e-3 \
- --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8
-```
-where `ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU.
-You may want to update it accordingly when using more than 1 GPU.
-
-#### Inference & Evaluation
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-fairseq-generate ${COVOST_ROOT}/en \
- --config-yaml config_asr_en.yaml --gen-subset test_asr_en --task speech_to_text \
- --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \
- --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct
-```
-#### Results
-| --arch | Params | En | Model |
-|---|---|---|---|
-| s2t_transformer_s | 31M | 25.6 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_transformer_s.pt) |
-
-## ST
-#### Training
-Fr-En as an example (use `--max-tokens 50000` for the En-* directions):
-```bash
-fairseq-train ${COVOST_ROOT}/fr \
- --config-yaml config_st_fr_en.yaml --train-subset train_st_fr_en --valid-subset dev_st_fr_en \
- --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-update 30000 --max-tokens 40000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
- --arch s2t_transformer_s --encoder-freezing-updates 1000 --optimizer adam --lr 2e-3 \
- --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \
- --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}
-```
-where `ST_SAVE_DIR` is the checkpoint root path. The ST encoder is initialized from the En ASR checkpoint (passed via
-`--load-pretrained-encoder-from`) for faster training and better performance. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU.
-You may want to update it accordingly when using more than 1 GPU.
-
-#### Inference & Evaluation
-Average the last 10 checkpoints and evaluate on test split:
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-fairseq-generate ${COVOST_ROOT}/fr \
- --config-yaml config_st_fr_en.yaml --gen-subset test_st_fr_en --task speech_to_text \
- --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 50000 --beam 5 --scoring sacrebleu
-```
-
-## Interactive Decoding
-Launch the interactive console via
-```bash
-fairseq-interactive ${COVOST_ROOT}/fr --config-yaml config_st_fr_en.yaml \
- --task speech_to_text --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 50000 --beam 5
-```
-Type in WAV/FLAC/OGG audio paths (one per line) after the prompt.
-
-#### Results
-| --arch | Params | Fr-En | De-En | Es-En | Ca-En | En-De | En-Ca | En-Fa | En-Et | Model |
-|---|---|---|---|---|---|---|---|---|---|---|
-| s2t_transformer_s | 31M | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_transformer_s.pt) | [23.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_transformer_s.pt) | [19.3](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_transformer_s.pt) | [16.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_transformer_s.pt) | [21.6](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_transformer_s.pt) | [12.9](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_transformer_s.pt) | [12.8](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_transformer_s.pt) | (<-Download) |
-
-[[Back]](..)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py
deleted file mode 100644
index b3890e7279e1f26db2e48ec0a91c639e9299d60f..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-
-from . import BaseWrapperDataset
-
-
-class SortDataset(BaseWrapperDataset):
- def __init__(self, dataset, sort_order):
- super().__init__(dataset)
- if not isinstance(sort_order, (list, tuple)):
- sort_order = [sort_order]
- self.sort_order = sort_order
-
- assert all(len(so) == len(dataset) for so in sort_order)
-
- def ordered_indices(self):
- return np.lexsort(self.sort_order)
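
The wrapper above leans on `np.lexsort`, whose key order is easy to get backwards: the last array passed is the primary key and earlier arrays only break ties. A toy check with made-up values:

```python
import numpy as np

sizes = np.array([3, 1, 2, 1])          # primary key, passed last
shuffle = np.array([0, 1, 2, 3])        # tie-breaker, passed first
np.lexsort((shuffle, sizes))            # -> array([1, 3, 2, 0])
# SortDataset(dataset, sort_order=[shuffle, sizes]).ordered_indices() returns the
# same order for a dataset of length 4.
```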
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
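
A round-trip sketch of `piecewise_rational_quadratic_transform` with made-up shapes: `num_bins` unnormalized widths and heights per element and `num_bins - 1` interior derivatives, since the `tails='linear'` path pads the two boundary derivatives itself. The inverse call should recover the inputs and the log-determinants should cancel.

```python
import torch

torch.manual_seed(0)
num_bins = 10
x = torch.rand(2, 5) * 2 - 1                 # values inside (-tail_bound, tail_bound)
uw = torch.randn(2, 5, num_bins)             # unnormalized widths
uh = torch.randn(2, 5, num_bins)             # unnormalized heights
ud = torch.randn(2, 5, num_bins - 1)         # unnormalized interior derivatives

y, logdet = piecewise_rational_quadratic_transform(
    x, uw, uh, ud, inverse=False, tails='linear', tail_bound=1.0)
x_rec, inv_logdet = piecewise_rational_quadratic_transform(
    y, uw, uh, ud, inverse=True, tails='linear', tail_bound=1.0)
# x_rec ≈ x and logdet ≈ -inv_logdet
```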
diff --git a/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh b/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh
deleted file mode 100644
index 8c7b7bef42784d3037c377e71fc20e08a7302883..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-set -e
-
-cd "$(dirname "$0")/.." || exit
-
-MODEL="${MODEL:-./models/ggml-vic13b-uncensored-q5_0.bin}"
-PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat.txt}
-USER_NAME="### Human"
-AI_NAME="### Assistant"
-
-# Adjust to the number of CPU cores you want to use.
-N_THREAD="${N_THREAD:-8}"
-# Number of tokens to predict (made it larger than default because we want a long interaction)
-N_PREDICTS="${N_PREDICTS:-2048}"
-
-# Note: you can also override the generation options by specifying them on the command line:
-# For example, override the context size by doing: ./chatLLaMa --ctx_size 1024
-GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}"
-
-DATE_TIME=$(date +%H:%M)
-DATE_YEAR=$(date +%Y)
-
-PROMPT_FILE=$(mktemp -t llamacpp_prompt.XXXXXXX.txt)
-
-sed -e "s/\[\[USER_NAME\]\]/$USER_NAME/g" \
- -e "s/\[\[AI_NAME\]\]/$AI_NAME/g" \
- -e "s/\[\[DATE_TIME\]\]/$DATE_TIME/g" \
- -e "s/\[\[DATE_YEAR\]\]/$DATE_YEAR/g" \
- $PROMPT_TEMPLATE > $PROMPT_FILE
-
-# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS
-./bin/main $GEN_OPTIONS \
- --model "$MODEL" \
- --threads "$N_THREAD" \
- --n_predict "$N_PREDICTS" \
- --color --interactive \
- --file ${PROMPT_FILE} \
- --reverse-prompt "### Human:" \
- --in-prefix ' ' \
- "$@"
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py
deleted file mode 100644
index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import os
-import sys
-
-import numpy as np
-import torch
-
-try:
- from urllib import urlretrieve
-except ImportError:
- from urllib.request import urlretrieve
-
-
-def load_url(url, model_dir='./pretrained', map_location=None):
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- filename = url.split('/')[-1]
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- urlretrieve(url, cached_file)
- return torch.load(cached_file, map_location=map_location)
-
-
-def color_encode(labelmap, colors, mode='RGB'):
- labelmap = labelmap.astype('int')
- labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3),
- dtype=np.uint8)
- for label in np.unique(labelmap):
- if label < 0:
- continue
- labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \
- np.tile(colors[label],
- (labelmap.shape[0], labelmap.shape[1], 1))
-
- if mode == 'BGR':
- return labelmap_rgb[:, :, ::-1]
- else:
- return labelmap_rgb
diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py
deleted file mode 100644
index 9661d70751261a11bbc33b57967efcf09d3cbe0c..0000000000000000000000000000000000000000
--- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py
+++ /dev/null
@@ -1,118 +0,0 @@
-"""
-Monkey patch the llama implementation in the huggingface/transformers library.
-Avoid bugs in mps backend by not using in-place operations.
-"""
-import math
-from typing import List, Optional, Tuple
-
-import torch
-from torch import nn
-import transformers
-
-
-def rotate_half(x):
- """Rotates half the hidden dims of the input."""
- x1 = x[..., : x.shape[-1] // 2].clone()
- x2 = x[..., x.shape[-1] // 2 :].clone()
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
- gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1]
- gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3])
- cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- q_embed = (q * cos) + (rotate_half(q) * sin)
- k_embed = (k * cos) + (rotate_half(k) * sin)
- return q_embed, k_embed
-
-
-def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: bool = False,
- use_cache: bool = False,
-) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- bsz, q_len, _ = hidden_states.size()
-
- query_states = (
- self.q_proj(hidden_states)
- .view(bsz, q_len, self.num_heads, self.head_dim)
- .transpose(1, 2)
- )
- key_states = (
- self.k_proj(hidden_states)
- .view(bsz, q_len, self.num_heads, self.head_dim)
- .transpose(1, 2)
- )
- value_states = (
- self.v_proj(hidden_states)
- .view(bsz, q_len, self.num_heads, self.head_dim)
- .transpose(1, 2)
- )
-
- kv_seq_len = key_states.shape[-2]
- if past_key_value is not None:
- kv_seq_len += past_key_value[0].shape[-2]
- cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
- query_states, key_states = apply_rotary_pos_emb(
- query_states, key_states, cos, sin, position_ids
- )
- # [bsz, nh, t, hd]
-
- if past_key_value is not None:
- # reuse k, v, self_attention
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
-
- past_key_value = (key_states, value_states) if use_cache else None
-
- attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(
- self.head_dim
- )
-
- if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is"
- f" {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights + attention_mask
- attn_weights = torch.max(
- attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)
- )
-
- # upcast attention to fp32
- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(
- query_states.dtype
- )
- attn_output = torch.matmul(attn_weights, value_states)
-
- if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
- f" {attn_output.size()}"
- )
-
- attn_output = attn_output.transpose(1, 2)
- attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
-
- attn_output = self.o_proj(attn_output)
-
- if not output_attentions:
- attn_weights = None
-
- return attn_output, attn_weights, past_key_value
-
-
-def replace_llama_attn_with_non_inplace_operations():
- """Avoid bugs in mps backend by not using in-place operations."""
- transformers.models.llama.modeling_llama.LlamaAttention.forward = forward
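
A usage sketch, with a placeholder weight path: the patch has to be applied before the model is instantiated so that every `LlamaAttention` module picks up the patched `forward`.

```python
from transformers import AutoModelForCausalLM

from fastchat.serve.monkey_patch_non_inplace import (
    replace_llama_attn_with_non_inplace_operations,
)

replace_llama_attn_with_non_inplace_operations()  # patch first
model = AutoModelForCausalLM.from_pretrained("path/to/llama-weights").to("mps")
```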
diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py
deleted file mode 100644
index 9cc5f45c7e06deb596b51213cd2667fd8361dbfd..0000000000000000000000000000000000000000
--- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py
+++ /dev/null
@@ -1,115 +0,0 @@
-"""Benchmarking script to test the throughput of serving workers."""
-import argparse
-import json
-
-import requests
-import threading
-import time
-
-from fastchat.conversation import default_conversation
-
-
-def main():
- if args.worker_address:
- worker_addr = args.worker_address
- else:
- controller_addr = args.controller_address
- ret = requests.post(controller_addr + "/refresh_all_workers")
- ret = requests.post(controller_addr + "/list_models")
- models = ret.json()["models"]
- models.sort()
- print(f"Models: {models}")
-
- ret = requests.post(
- controller_addr + "/get_worker_address", json={"model": args.model_name}
- )
- worker_addr = ret.json()["address"]
- print(f"worker_addr: {worker_addr}")
-
- if worker_addr == "":
- return
-
- conv = default_conversation.copy()
- conv.append_message(conv.roles[0], "Tell me a story with more than 1000 words")
- prompt_template = conv.get_prompt()
- prompts = [prompt_template for _ in range(args.n_thread)]
-
- headers = {"User-Agent": "fastchat Client"}
- ploads = [
- {
- "model": args.model_name,
- "prompt": prompts[i],
- "max_new_tokens": args.max_new_tokens,
- "temperature": 0.0,
- # "stop": conv.sep,
- }
- for i in range(len(prompts))
- ]
-
- def send_request(results, i):
- if args.test_dispatch:
- ret = requests.post(
- controller_addr + "/get_worker_address", json={"model": args.model_name}
- )
- thread_worker_addr = ret.json()["address"]
- else:
- thread_worker_addr = worker_addr
- print(f"thread {i} goes to {thread_worker_addr}")
- response = requests.post(
- thread_worker_addr + "/worker_generate_stream",
- headers=headers,
- json=ploads[i],
- stream=False,
- )
- k = list(
- response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0")
- )
- # print(k)
- response_new_words = json.loads(k[-2].decode("utf-8"))["text"]
- error_code = json.loads(k[-2].decode("utf-8"))["error_code"]
- # print(f"=== Thread {i} ===, words: {1}, error code: {error_code}")
- results[i] = len(response_new_words.split(" ")) - len(prompts[i].split(" "))
-
- # use N threads to prompt the backend
- tik = time.time()
- threads = []
- results = [None] * args.n_thread
- for i in range(args.n_thread):
- t = threading.Thread(target=send_request, args=(results, i))
- t.start()
- # time.sleep(0.5)
- threads.append(t)
-
- for t in threads:
- t.join()
-
- print(f"Time (POST): {time.time() - tik} s")
- # n_words = 0
- # for i, response in enumerate(results):
- # # print(prompt[i].replace(conv.sep, "\n"), end="")
- # # make sure the streaming finishes at EOS or stopping criteria
- # k = list(response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0"))
- # response_new_words = json.loads(k[-2].decode("utf-8"))["text"]
- # # print(response_new_words)
- # n_words += len(response_new_words.split(" ")) - len(prompts[i].split(" "))
- n_words = sum(results)
- time_seconds = time.time() - tik
- print(
- f"Time (Completion): {time_seconds}, n threads: {args.n_thread}, "
- f"throughput: {n_words / time_seconds} words/s."
- )
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--controller-address", type=str, default="http://localhost:21001"
- )
- parser.add_argument("--worker-address", type=str)
- parser.add_argument("--model-name", type=str, default="vicuna")
- parser.add_argument("--max-new-tokens", type=int, default=2048)
- parser.add_argument("--n-thread", type=int, default=8)
- parser.add_argument("--test-dispatch", action="store_true")
- args = parser.parse_args()
-
- main()
diff --git a/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py b/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py
deleted file mode 100644
index 669a3752045c556f3bcd7aaa2c8b35bc536be136..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import cv2
-import numpy as np
-
-class combine:
- #Author: Alican Akca
- def __init__(self, size = (400,300),images = [],background_image = None):
- self.size = size
- self.images = images
- self.background_image = background_image
-
- def combiner(self,images,background_image):
- original = images[0]
- masked = images[1]
- background = cv2.resize(background_image,(images[0].shape[1],images[0].shape[0]))
- result = blend_images_using_mask(original, background, masked)
- return result
-
-def mix_pixel(pix_1, pix_2, perc):
-
- return (perc/255 * pix_1) + ((255 - perc)/255 * pix_2)
-
-def blend_images_using_mask(img_orig, img_for_overlay, img_mask):
-
- if len(img_mask.shape) != 3:
- img_mask = cv2.cvtColor(img_mask, cv2.COLOR_GRAY2BGR)
-
- img_res = mix_pixel(img_orig, img_for_overlay, img_mask)
-
- return cv2.cvtColor(img_res.astype(np.uint8), cv2.COLOR_BGR2RGB)
\ No newline at end of file
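
A usage sketch for the blending helpers above, with placeholder file names. The mask is single-channel: 255 keeps the original pixel, 0 shows the background; `blend_images_using_mask` returns an RGB `uint8` array.

```python
import cv2

original = cv2.imread("portrait.jpg")
background = cv2.imread("beach.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

background = cv2.resize(background, (original.shape[1], original.shape[0]))
result = blend_images_using_mask(original, background, mask)
```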
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/workspace.py b/spaces/Jamkonams/AutoGPT/autogpt/workspace.py
deleted file mode 100644
index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/workspace.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
- os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
- """Get full path for item in workspace
-
- Parameters:
- relative_path (str | Path): Path to translate into the workspace
-
- Returns:
- Path: Absolute path for the given path in the workspace
- """
- return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
- """Join one or more path components, asserting the resulting path is within the workspace.
-
- Args:
- base (Path): The base path
- *paths (str): The paths to join to the base path
-
- Returns:
- Path: The joined path
- """
- joined_path = base.joinpath(*paths).resolve()
-
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
- raise ValueError(
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
- )
-
- return joined_path
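The containment test in `safe_path_join` hinges on `Path.is_relative_to` (Python 3.9+) after resolving the joined path. A small illustration of that check with made-up paths:

```python
from pathlib import Path

base = Path("/tmp/auto_gpt_workspace").resolve()   # hypothetical workspace root
ok = (base / "notes" / "todo.txt").resolve()
bad = (base / ".." / "etc" / "passwd").resolve()

print(ok.is_relative_to(base))   # True  -> joined path is returned as-is
print(bad.is_relative_to(base))  # False -> safe_path_join raises ValueError
```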
diff --git a/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py b/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py
deleted file mode 100644
index 9b3c94ac2dc3aab12e402c5588c56f56769ef59f..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from enum import Enum
-
-
-class MimeTypes(str, Enum):
- UNKNOWN = "unknown"
- TXT = "text/plain"
- JSON = "application/json"
- MKD = "text/markdown"
- EPUB = "application/epub+zip"
- PDF = "application/pdf"
- JPG = "image/jpeg"
- PNG = "image/png"
- TIFF = "image/tiff"
- GIF = "image/gif"
- HTML = "text/html"
- DOC = "application/msword"
- DOCX = "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
-    PPT = "application/vnd.ms-powerpoint"
- PPTX = "application/vnd.openxmlformats-officedocument.presentationml.presentation"
- RTF = "application/rtf"
- BINARY = "application/octet-stream"
- STEAMSHIP_BLOCK_JSON = "application/vnd.steamship-block.json.v1"
- WAV = "audio/wav"
- MP3 = "audio/mp3"
- MP4_VIDEO = "video/mp4"
- MP4_AUDIO = "audio/mp4"
- WEBM_VIDEO = "video/webm"
- WEBM_AUDIO = "audio/webm"
- FILE_JSON = "fileJson"
-
-
-class ContentEncodings:
- BASE64 = "base64"
-
-
-TEXT_MIME_TYPES = [
- MimeTypes.TXT,
- MimeTypes.MKD,
- MimeTypes.HTML,
- MimeTypes.DOCX,
- MimeTypes.PPTX,
-]
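Because `MimeTypes` mixes in `str`, its members compare equal to plain strings and can be passed anywhere a MIME string is expected. A trimmed, self-contained copy to illustrate the behaviour:

```python
from enum import Enum

class MimeTypes(str, Enum):        # trimmed copy of the enum above, for a runnable demo
    TXT = "text/plain"
    PDF = "application/pdf"

print(MimeTypes.TXT == "text/plain")      # True
print(MimeTypes("application/pdf").name)  # PDF
print(MimeTypes.TXT in ["text/plain"])    # True, thanks to the str mixin
```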
diff --git a/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md b/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md
deleted file mode 100644
index 314c1fceab39311153e965a8c9b6ba501997bccb..0000000000000000000000000000000000000000
--- a/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ogkalu Comic Diffusion
-emoji: 😻
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jerry0203/sentence_embedding/README.md b/spaces/Jerry0203/sentence_embedding/README.md
deleted file mode 100644
index 3ca227c7ee84ea9e3bf4d7c34b24224a2f456e6b..0000000000000000000000000000000000000000
--- a/spaces/Jerry0203/sentence_embedding/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sentence Embedding
-emoji: 📉
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JoPmt/Txt-to-video/README.md b/spaces/JoPmt/Txt-to-video/README.md
deleted file mode 100644
index c978e0d78b22fd146e1dee39e3655601dba4bfea..0000000000000000000000000000000000000000
--- a/spaces/JoPmt/Txt-to-video/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Txt To Video
-emoji: ⚡
-colorFrom: blue
-colorTo: gray
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JohnC26/ChatGPTwithAPI/README.md b/spaces/JohnC26/ChatGPTwithAPI/README.md
deleted file mode 100644
index 5e9db9ee137f91124dc76c9ed996db9fff3477d5..0000000000000000000000000000000000000000
--- a/spaces/JohnC26/ChatGPTwithAPI/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatGPTwithAPI
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ysharma/ChatGPTwithAPI
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py
deleted file mode 100644
index b6d27a6df74c6988dd4355cbef149ed90f3a36cf..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from abc import ABC
-from abc import abstractmethod
-
-import torch
-
-class AbsMelDecoder(torch.nn.Module, ABC):
- """The abstract PPG-based voice conversion class
- This "model" is one of mediator objects for "Task" class.
-
- """
-
- @abstractmethod
- def forward(
- self,
- bottle_neck_features: torch.Tensor,
- feature_lengths: torch.Tensor,
- speech: torch.Tensor,
- speech_lengths: torch.Tensor,
- logf0_uv: torch.Tensor = None,
- spembs: torch.Tensor = None,
- styleembs: torch.Tensor = None,
- ) -> torch.Tensor:
- raise NotImplementedError
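A concrete decoder only has to implement `forward` with the signature above. A minimal, hypothetical subclass sketch (dimensions are illustrative; in practice it would inherit from `AbsMelDecoder` rather than plain `torch.nn.Module`):

```python
import torch

class LinearMelDecoder(torch.nn.Module):  # would subclass AbsMelDecoder in practice
    def __init__(self, ppg_dim=144, mel_dim=80):
        super().__init__()
        self.proj = torch.nn.Linear(ppg_dim, mel_dim)

    def forward(self, bottle_neck_features, feature_lengths, speech, speech_lengths,
                logf0_uv=None, spembs=None, styleembs=None):
        # map PPG features (B, T, ppg_dim) to mel frames (B, T, mel_dim)
        return self.proj(bottle_neck_features)
```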
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py b/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py
deleted file mode 100644
index 6acaa984e0c7876f9149fc1ff99001b7761dc80b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from pathlib import Path
-
-def check_model_paths(encoder_path: Path, synthesizer_path: Path, vocoder_path: Path):
- # This function tests the model paths and makes sure at least one is valid.
- if encoder_path.is_file() or encoder_path.is_dir():
- return
- if synthesizer_path.is_file() or synthesizer_path.is_dir():
- return
- if vocoder_path.is_file() or vocoder_path.is_dir():
- return
-
- # If none of the paths exist, remind the user to download models if needed
- print("********************************************************************************")
- print("Error: Model files not found. Follow these instructions to get and install the models:")
- print("https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models")
- print("********************************************************************************\n")
- quit(-1)
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py
deleted file mode 100644
index af3b2448dbeae8eed8e0b579b7bbc159a623fa3c..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .gaussian_target import (gather_feat, gaussian_radius,
- gen_gaussian_target, get_local_maximum,
- get_topk_from_heatmap, transpose_and_gather_feat)
-from .make_divisible import make_divisible
-from .misc import (aligned_bilinear, center_of_mass, empty_instances,
- filter_gt_instances, filter_scores_and_topk, flip_tensor,
- generate_coordinate, images_to_levels, interpolate_as,
- levels_to_images, mask2ndarray, multi_apply,
- relative_coordinate_maps, rename_loss_dict,
- reweight_loss_dict, samplelist_boxtype2tensor,
- select_single_mlvl, sigmoid_geometric_mean,
- unfold_wo_center, unmap, unpack_gt_instances)
-from .panoptic_gt_processing import preprocess_panoptic_gt
-from .point_sample import (get_uncertain_point_coords_with_randomness,
- get_uncertainty)
-
-__all__ = [
- 'gaussian_radius', 'gen_gaussian_target', 'make_divisible',
- 'get_local_maximum', 'get_topk_from_heatmap', 'transpose_and_gather_feat',
- 'interpolate_as', 'sigmoid_geometric_mean', 'gather_feat',
- 'preprocess_panoptic_gt', 'get_uncertain_point_coords_with_randomness',
- 'get_uncertainty', 'unpack_gt_instances', 'empty_instances',
- 'center_of_mass', 'filter_scores_and_topk', 'flip_tensor',
- 'generate_coordinate', 'levels_to_images', 'mask2ndarray', 'multi_apply',
- 'select_single_mlvl', 'unmap', 'images_to_levels',
- 'samplelist_boxtype2tensor', 'filter_gt_instances', 'rename_loss_dict',
- 'reweight_loss_dict', 'relative_coordinate_maps', 'aligned_bilinear',
- 'unfold_wo_center'
-]
diff --git a/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py b/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py
deleted file mode 100644
index fdea0fd4fffa8eb6d4fff6b600ee02e7abe45c06..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import cv2
-import decord
-import numpy as np
-import torch
-from PIL import Image
-from decord import VideoReader, cpu
-from torchvision import transforms
-from transformers import ProcessorMixin, BatchEncoding
-from transformers.image_processing_utils import BatchFeature
-from pytorchvideo.data.encoded_video import EncodedVideo
-from torchvision.transforms import Compose, Lambda, ToTensor
-from torchvision.transforms._transforms_video import NormalizeVideo, RandomCropVideo, RandomHorizontalFlipVideo, CenterCropVideo
-from pytorchvideo.transforms import ApplyTransformToKey, ShortSideScale, UniformTemporalSubsample
-
-decord.bridge.set_bridge('torch')
-
-OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
-OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
-
-def make_list_of_images(x):
- if not isinstance(x, list):
- return [x]
- return x
-
-def get_video_transform(config):
- config = config.vision_config
- if config.video_decode_backend == 'pytorchvideo':
- transform = ApplyTransformToKey(
- key="video",
- transform=Compose(
- [
- UniformTemporalSubsample(config.num_frames),
- Lambda(lambda x: x / 255.0),
- NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD),
- ShortSideScale(size=224),
- CenterCropVideo(224),
- RandomHorizontalFlipVideo(p=0.5),
- ]
- ),
- )
-
- elif config.video_decode_backend == 'decord':
-
- transform = Compose(
- [
- # UniformTemporalSubsample(num_frames),
- Lambda(lambda x: x / 255.0),
- NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD),
- ShortSideScale(size=224),
- CenterCropVideo(224),
- RandomHorizontalFlipVideo(p=0.5),
- ]
- )
-
- elif config.video_decode_backend == 'opencv':
- transform = Compose(
- [
- # UniformTemporalSubsample(num_frames),
- Lambda(lambda x: x / 255.0),
- NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD),
- ShortSideScale(size=224),
- CenterCropVideo(224),
- RandomHorizontalFlipVideo(p=0.5),
- ]
- )
- else:
-        raise NameError('video_decode_backend should be one of (pytorchvideo, decord, opencv)')
- return transform
-
-
-def load_and_transform_video(
- video_path,
- transform,
- video_decode_backend='opencv',
- clip_start_sec=0.0,
- clip_end_sec=None,
- num_frames=8,
-):
- if video_decode_backend == 'pytorchvideo':
- # decord pyav
- video = EncodedVideo.from_path(video_path, decoder="decord", decode_audio=False)
- duration = video.duration
- start_sec = clip_start_sec # secs
- end_sec = clip_end_sec if clip_end_sec is not None else duration # secs
- video_data = video.get_clip(start_sec=start_sec, end_sec=end_sec)
- video_outputs = transform(video_data)
-
- elif video_decode_backend == 'decord':
- decord.bridge.set_bridge('torch')
- decord_vr = VideoReader(video_path, ctx=cpu(0))
- duration = len(decord_vr)
- frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int)
- video_data = decord_vr.get_batch(frame_id_list)
- video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W)
- video_outputs = transform(video_data)
-
- elif video_decode_backend == 'opencv':
- cv2_vr = cv2.VideoCapture(video_path)
- duration = int(cv2_vr.get(cv2.CAP_PROP_FRAME_COUNT))
- frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int)
-
- video_data = []
- for frame_idx in frame_id_list:
- cv2_vr.set(1, frame_idx)
- _, frame = cv2_vr.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- video_data.append(torch.from_numpy(frame).permute(2, 0, 1))
- cv2_vr.release()
- video_data = torch.stack(video_data, dim=1)
- video_outputs = transform(video_data)
- else:
-        raise NameError('video_decode_backend should be one of (pytorchvideo, decord, opencv)')
- return video_outputs
-
-class LanguageBindVideoProcessor(ProcessorMixin):
- attributes = []
- tokenizer_class = ("LanguageBindVideoTokenizer")
-
- def __init__(self, config, tokenizer=None, **kwargs):
- super().__init__(**kwargs)
- self.config = config
- self.transform = get_video_transform(config)
- self.image_processor = load_and_transform_video
- self.tokenizer = tokenizer
-
- def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs):
- if text is None and images is None:
- raise ValueError("You have to specify either text or images. Both cannot be none.")
-
- if text is not None:
- encoding = self.tokenizer(text, max_length=context_length, padding='max_length',
- truncation=True, return_tensors=return_tensors, **kwargs)
-
- if images is not None:
- images = make_list_of_images(images)
- image_features = [self.image_processor(image, self.transform,
- video_decode_backend=self.config.vision_config.video_decode_backend,
- num_frames=self.config.vision_config.num_frames) for image in images]
- image_features = torch.stack(image_features)
-
- if text is not None and images is not None:
- encoding["pixel_values"] = image_features
- return encoding
- elif text is not None:
- return encoding
- else:
- return {"pixel_values": image_features}
-
- def batch_decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
- refer to the docstring of this method for more information.
- """
- return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
-
- def decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
- the docstring of this method for more information.
- """
- return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
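Both the decord and opencv branches above select `num_frames` evenly spaced frame indices with `np.linspace`. A tiny sketch of that index selection on a hypothetical 120-frame clip:

```python
import numpy as np

total_frames, num_frames = 120, 8
frame_id_list = np.linspace(0, total_frames - 1, num_frames, dtype=int)
print(frame_id_list)  # [  0  17  34  51  68  85 102 119]
```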
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat
deleted file mode 100644
index 5d80f50369b1f3ed37c045d07a9e2ce8954f09d4..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat
+++ /dev/null
@@ -1,348 +0,0 @@
-@echo off && chcp 65001
-
-echo working dir is %cd%
-echo checking for download requirement aria2.
-echo=
-dir /a:d/b | findstr "aria2" > flag.txt
-findstr "aria2" flag.txt >nul
-if %errorlevel% ==0 (
- echo aria2 checked.
- echo=
-) else (
-    echo failed. please download aria2 from the webpage!
-    echo unzip it and put it in this directory!
- timeout /T 5
- start https://github.com/aria2/aria2/releases/tag/release-1.36.0
- echo=
- goto end
-)
-
-echo envfiles checking start.
-echo=
-
-for /f %%x in ('findstr /i /c:"aria2" "flag.txt"') do (set aria2=%%x)&goto endSch
-:endSch
-
-set d32=f0D32k.pth
-set d40=f0D40k.pth
-set d48=f0D48k.pth
-set g32=f0G32k.pth
-set g40=f0G40k.pth
-set g48=f0G48k.pth
-
-set d40v2=f0D40k.pth
-set g40v2=f0G40k.pth
-
-set dld32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth
-set dld40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth
-set dld48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth
-set dlg32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth
-set dlg40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth
-set dlg48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth
-
-set dld40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth
-set dlg40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth
-
-set hp2_all=HP2_all_vocals.pth
-set hp3_all=HP3_all_vocals.pth
-set hp5_only=HP5_only_main_vocal.pth
-set VR_DeEchoAggressive=VR-DeEchoAggressive.pth
-set VR_DeEchoDeReverb=VR-DeEchoDeReverb.pth
-set VR_DeEchoNormal=VR-DeEchoNormal.pth
-set onnx_dereverb=vocals.onnx
-
-set dlhp2_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth
-set dlhp3_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth
-set dlhp5_only=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth
-set dlVR_DeEchoAggressive=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth
-set dlVR_DeEchoDeReverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth
-set dlVR_DeEchoNormal=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth
-set dlonnx_dereverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx
-
-set hb=hubert_base.pt
-
-set dlhb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt
-
-echo dir check start.
-echo=
-
-if exist "%~dp0assets\pretrained" (
- echo dir .\assets\pretrained checked.
- ) else (
- echo failed. generating dir .\assets\pretrained.
-    mkdir "%~dp0assets\pretrained"
- )
-if exist "%~dp0assets\pretrained_v2" (
- echo dir .\assets\pretrained_v2 checked.
- ) else (
- echo failed. generating dir .\assets\pretrained_v2.
-    mkdir "%~dp0assets\pretrained_v2"
- )
-if exist "%~dp0assets\uvr5_weights" (
- echo dir .\assets\uvr5_weights checked.
- ) else (
- echo failed. generating dir .\assets\uvr5_weights.
-    mkdir "%~dp0assets\uvr5_weights"
- )
-if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy" (
- echo dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked.
- ) else (
- echo failed. generating dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy.
-    mkdir "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy"
- )
-
-echo=
-echo dir check finished.
-
-echo=
-echo required files check start.
-
-echo checking D32k.pth
-if exist "%~dp0assets\pretrained\D32k.pth" (
- echo D32k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d %~dp0assets\pretrained -o D32k.pth
- if exist "%~dp0assets\pretrained\D32k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking D40k.pth
-if exist "%~dp0assets\pretrained\D40k.pth" (
- echo D40k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d %~dp0assets\pretrained -o D40k.pth
- if exist "%~dp0assets\pretrained\D40k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking D40k.pth
-if exist "%~dp0assets\pretrained_v2\D40k.pth" (
- echo D40k.pth in .\assets\pretrained_v2 checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d %~dp0assets\pretrained_v2 -o D40k.pth
- if exist "%~dp0assets\pretrained_v2\D40k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking D48k.pth
-if exist "%~dp0assets\pretrained\D48k.pth" (
- echo D48k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d %~dp0assets\pretrained -o D48k.pth
- if exist "%~dp0assets\pretrained\D48k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking G32k.pth
-if exist "%~dp0assets\pretrained\G32k.pth" (
- echo G32k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d %~dp0assets\pretrained -o G32k.pth
- if exist "%~dp0assets\pretrained\G32k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking G40k.pth
-if exist "%~dp0assets\pretrained\G40k.pth" (
- echo G40k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d %~dp0assets\pretrained -o G40k.pth
- if exist "%~dp0assets\pretrained\G40k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking G40k.pth
-if exist "%~dp0assets\pretrained_v2\G40k.pth" (
- echo G40k.pth in .\assets\pretrained_v2 checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d %~dp0assets\pretrained_v2 -o G40k.pth
- if exist "%~dp0assets\pretrained_v2\G40k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking G48k.pth
-if exist "%~dp0assets\pretrained\G48k.pth" (
- echo G48k.pth in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d %~dp0assets\pretrained -o G48k.pth
- if exist "%~dp0assets\pretrained\G48k.pth" (echo download successful.) else (echo please try again!
- echo=)
- )
-
-echo checking %d32%
-if exist "%~dp0assets\pretrained\%d32%" (
- echo %d32% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld32% -d %~dp0assets\pretrained -o %d32%
- if exist "%~dp0assets\pretrained\%d32%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %d40%
-if exist "%~dp0assets\pretrained\%d40%" (
- echo %d40% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40% -d %~dp0assets\pretrained -o %d40%
- if exist "%~dp0assets\pretrained\%d40%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %d40v2%
-if exist "%~dp0assets\pretrained_v2\%d40v2%" (
- echo %d40v2% in .\assets\pretrained_v2 checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40v2% -d %~dp0assets\pretrained_v2 -o %d40v2%
- if exist "%~dp0assets\pretrained_v2\%d40v2%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %d48%
-if exist "%~dp0assets\pretrained\%d48%" (
- echo %d48% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld48% -d %~dp0assets\pretrained -o %d48%
- if exist "%~dp0assets\pretrained\%d48%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %g32%
-if exist "%~dp0assets\pretrained\%g32%" (
- echo %g32% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg32% -d %~dp0assets\pretrained -o %g32%
- if exist "%~dp0assets\pretrained\%g32%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %g40%
-if exist "%~dp0assets\pretrained\%g40%" (
- echo %g40% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40% -d %~dp0assets\pretrained -o %g40%
- if exist "%~dp0assets\pretrained\%g40%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %g40v2%
-if exist "%~dp0assets\pretrained_v2\%g40v2%" (
- echo %g40v2% in .\assets\pretrained_v2 checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40v2% -d %~dp0assets\pretrained_v2 -o %g40v2%
- if exist "%~dp0assets\pretrained_v2\%g40v2%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %g48%
-if exist "%~dp0assets\pretrained\%g48%" (
- echo %g48% in .\assets\pretrained checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg48% -d %~dp0assets\pretrained -o %g48%
- if exist "%~dp0assets\pretrained\%g48%" (echo download successful.) else (echo please try again!
- echo=)
- )
-
-echo checking %hp2_all%
-if exist "%~dp0assets\uvr5_weights\%hp2_all%" (
- echo %hp2_all% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp2_all% -d %~dp0assets\uvr5_weights -o %hp2_all%
- if exist "%~dp0assets\uvr5_weights\%hp2_all%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %hp3_all%
-if exist "%~dp0assets\uvr5_weights\%hp3_all%" (
- echo %hp3_all% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp3_all% -d %~dp0assets\uvr5_weights -o %hp3_all%
- if exist "%~dp0assets\uvr5_weights\%hp3_all%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %hp5_only%
-if exist "%~dp0assets\uvr5_weights\%hp5_only%" (
- echo %hp5_only% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp5_only% -d %~dp0assets\uvr5_weights -o %hp5_only%
- if exist "%~dp0assets\uvr5_weights\%hp5_only%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %VR_DeEchoAggressive%
-if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" (
- echo %VR_DeEchoAggressive% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoAggressive% -d %~dp0assets\uvr5_weights -o %VR_DeEchoAggressive%
- if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %VR_DeEchoDeReverb%
-if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" (
- echo %VR_DeEchoDeReverb% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoDeReverb% -d %~dp0assets\uvr5_weights -o %VR_DeEchoDeReverb%
- if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %VR_DeEchoNormal%
-if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (
- echo %VR_DeEchoNormal% in .\assets\uvr5_weights checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoNormal% -d %~dp0assets\uvr5_weights -o %VR_DeEchoNormal%
- if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (echo download successful.) else (echo please try again!
- echo=)
- )
-echo checking %onnx_dereverb%
-if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (
- echo %onnx_dereverb% in .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlonnx_dereverb% -d %~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy -o %onnx_dereverb%
- if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (echo download successful.) else (echo please try again!
- echo=)
- )
-
-echo checking %hb%
-if exist "%~dp0assets\hubert\%hb%" (
-    echo %hb% in .\assets\hubert checked.
- echo=
- ) else (
- echo failed. starting download from huggingface.
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhb% -d %~dp0assets\hubert\ -o %hb%
- if exist "%~dp0assets\hubert\%hb%" (echo download successful.) else (echo please try again!
- echo=)
- )
-
-echo required files check finished.
-echo envfiles check complete.
-pause
-:end
-del flag.txt
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py
deleted file mode 100644
index 6459bd5510ce770b6ec7d13e03cf0ebf92d67974..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py
+++ /dev/null
@@ -1,394 +0,0 @@
-from __future__ import print_function, unicode_literals, absolute_import, division
-
-import numpy as np
-import warnings
-import os
-import datetime
-from tqdm import tqdm
-from collections import defaultdict
-from zipfile import ZipFile, ZIP_DEFLATED
-from scipy.ndimage.morphology import distance_transform_edt, binary_fill_holes
-from scipy.ndimage.measurements import find_objects
-from scipy.optimize import minimize_scalar
-from skimage.measure import regionprops
-from csbdeep.utils import _raise
-from csbdeep.utils.six import Path
-from collections.abc import Iterable
-
-from .matching import matching_dataset, _check_label_array
-
-
-try:
- from edt import edt
- _edt_available = True
- try: _edt_parallel_max = len(os.sched_getaffinity(0))
- except: _edt_parallel_max = 128
- _edt_parallel_default = 4
- _edt_parallel = os.environ.get('STARDIST_EDT_NUM_THREADS', _edt_parallel_default)
- try:
- _edt_parallel = min(_edt_parallel_max, int(_edt_parallel))
- except ValueError as e:
- warnings.warn(f"Invalid value ({_edt_parallel}) for STARDIST_EDT_NUM_THREADS. Using default value ({_edt_parallel_default}) instead.")
- _edt_parallel = _edt_parallel_default
- del _edt_parallel_default, _edt_parallel_max
-except ImportError:
- _edt_available = False
- # warnings.warn("Could not find package edt... \nConsider installing it with \n pip install edt\nto improve training data generation performance.")
- pass
-
-
-def gputools_available():
- try:
- import gputools
- except:
- return False
- return True
-
-
-def path_absolute(path_relative):
- """ Get absolute path to resource"""
- base_path = os.path.abspath(os.path.dirname(__file__))
- return os.path.join(base_path, path_relative)
-
-
-def _is_power_of_2(i):
- assert i > 0
- e = np.log2(i)
- return e == int(e)
-
-
-def _normalize_grid(grid,n):
- try:
- grid = tuple(grid)
- (len(grid) == n and
- all(map(np.isscalar,grid)) and
- all(map(_is_power_of_2,grid))) or _raise(TypeError())
- return tuple(int(g) for g in grid)
- except (TypeError, AssertionError):
- raise ValueError("grid = {grid} must be a list/tuple of length {n} with values that are power of 2".format(grid=grid, n=n))
-
-
-def edt_prob(lbl_img, anisotropy=None):
- if _edt_available:
- return _edt_prob_edt(lbl_img, anisotropy=anisotropy)
- else:
- # warnings.warn("Could not find package edt... \nConsider installing it with \n pip install edt\nto improve training data generation performance.")
- return _edt_prob_scipy(lbl_img, anisotropy=anisotropy)
-
-def _edt_prob_edt(lbl_img, anisotropy=None):
- """Perform EDT on each labeled object and normalize.
- Internally uses https://github.com/seung-lab/euclidean-distance-transform-3d
- that can handle multiple labels at once
- """
- lbl_img = np.ascontiguousarray(lbl_img)
- constant_img = lbl_img.min() == lbl_img.max() and lbl_img.flat[0] > 0
- if constant_img:
- warnings.warn("EDT of constant label image is ill-defined. (Assuming background around it.)")
- # we just need to compute the edt once but then normalize it for each object
- prob = edt(lbl_img, anisotropy=anisotropy, black_border=constant_img, parallel=_edt_parallel)
- objects = find_objects(lbl_img)
- for i,sl in enumerate(objects,1):
- # i: object label id, sl: slices of object in lbl_img
- if sl is None: continue
- _mask = lbl_img[sl]==i
- # normalize it
- prob[sl][_mask] /= np.max(prob[sl][_mask]+1e-10)
- return prob
-
-def _edt_prob_scipy(lbl_img, anisotropy=None):
- """Perform EDT on each labeled object and normalize."""
- def grow(sl,interior):
- return tuple(slice(s.start-int(w[0]),s.stop+int(w[1])) for s,w in zip(sl,interior))
- def shrink(interior):
- return tuple(slice(int(w[0]),(-1 if w[1] else None)) for w in interior)
- constant_img = lbl_img.min() == lbl_img.max() and lbl_img.flat[0] > 0
- if constant_img:
- lbl_img = np.pad(lbl_img, ((1,1),)*lbl_img.ndim, mode='constant')
- warnings.warn("EDT of constant label image is ill-defined. (Assuming background around it.)")
- objects = find_objects(lbl_img)
- prob = np.zeros(lbl_img.shape,np.float32)
- for i,sl in enumerate(objects,1):
- # i: object label id, sl: slices of object in lbl_img
- if sl is None: continue
-        interior = [(s.start>0,s.stop<sz) for s,sz in zip(sl,lbl_img.shape)]
-        shrink_slice = shrink(interior)
-        grown_mask = lbl_img[grow(sl,interior)]==i
-        mask = grown_mask[shrink_slice]
-        edt = distance_transform_edt(grown_mask, sampling=anisotropy)[shrink_slice][mask]
-        prob[sl][mask] = edt/(np.max(edt)+1e-10)
-    if constant_img:
-        prob = prob[(slice(1,-1),)*lbl_img.ndim].copy()
-    return prob
-
-
-def sample_points(n_samples, mask, prob=None, b=2):
-    """sample points to draw some of the associated polygons"""
-    if b is not None and b > 0:
- # ignore image boundary, since predictions may not be reliable
- mask_b = np.zeros_like(mask)
- mask_b[b:-b,b:-b] = True
- else:
- mask_b = True
-
- points = np.nonzero(mask & mask_b)
-
- if prob is not None:
- # weighted sampling via prob
- w = prob[points[0],points[1]].astype(np.float64)
- w /= np.sum(w)
- ind = np.random.choice(len(points[0]), n_samples, replace=True, p=w)
- else:
- ind = np.random.choice(len(points[0]), n_samples, replace=True)
-
- points = points[0][ind], points[1][ind]
- points = np.stack(points,axis=-1)
- return points
-
-
-def calculate_extents(lbl, func=np.median):
- """ Aggregate bounding box sizes of objects in label images. """
- if (isinstance(lbl,np.ndarray) and lbl.ndim==4) or (not isinstance(lbl,np.ndarray) and isinstance(lbl,Iterable)):
- return func(np.stack([calculate_extents(_lbl,func) for _lbl in lbl], axis=0), axis=0)
-
- n = lbl.ndim
- n in (2,3) or _raise(ValueError("label image should be 2- or 3-dimensional (or pass a list of these)"))
-
- regs = regionprops(lbl)
- if len(regs) == 0:
- return np.zeros(n)
- else:
- extents = np.array([np.array(r.bbox[n:])-np.array(r.bbox[:n]) for r in regs])
- return func(extents, axis=0)
-
-
-def polyroi_bytearray(x,y,pos=None,subpixel=True):
- """ Byte array of polygon roi with provided x and y coordinates
- See https://github.com/imagej/imagej1/blob/master/ij/io/RoiDecoder.java
- """
- import struct
- def _int16(x):
- return int(x).to_bytes(2, byteorder='big', signed=True)
- def _uint16(x):
- return int(x).to_bytes(2, byteorder='big', signed=False)
- def _int32(x):
- return int(x).to_bytes(4, byteorder='big', signed=True)
- def _float(x):
- return struct.pack(">f", x)
-
- subpixel = bool(subpixel)
- # add offset since pixel center is at (0.5,0.5) in ImageJ
- x_raw = np.asarray(x).ravel() + 0.5
- y_raw = np.asarray(y).ravel() + 0.5
- x = np.round(x_raw)
- y = np.round(y_raw)
- assert len(x) == len(y)
- top, left, bottom, right = y.min(), x.min(), y.max(), x.max() # bbox
-
- n_coords = len(x)
- bytes_header = 64
- bytes_total = bytes_header + n_coords*2*2 + subpixel*n_coords*2*4
- B = [0] * bytes_total
- B[ 0: 4] = map(ord,'Iout') # magic start
- B[ 4: 6] = _int16(227) # version
- B[ 6: 8] = _int16(0) # roi type (0 = polygon)
- B[ 8:10] = _int16(top) # bbox top
- B[10:12] = _int16(left) # bbox left
- B[12:14] = _int16(bottom) # bbox bottom
- B[14:16] = _int16(right) # bbox right
- B[16:18] = _uint16(n_coords) # number of coordinates
- if subpixel:
- B[50:52] = _int16(128) # subpixel resolution (option flag)
- if pos is not None:
- B[56:60] = _int32(pos) # position (C, Z, or T)
-
- for i,(_x,_y) in enumerate(zip(x,y)):
- xs = bytes_header + 2*i
- ys = xs + 2*n_coords
- B[xs:xs+2] = _int16(_x - left)
- B[ys:ys+2] = _int16(_y - top)
-
- if subpixel:
- base1 = bytes_header + n_coords*2*2
- base2 = base1 + n_coords*4
- for i,(_x,_y) in enumerate(zip(x_raw,y_raw)):
- xs = base1 + 4*i
- ys = base2 + 4*i
- B[xs:xs+4] = _float(_x)
- B[ys:ys+4] = _float(_y)
-
- return bytearray(B)
-
-
-def export_imagej_rois(fname, polygons, set_position=True, subpixel=True, compression=ZIP_DEFLATED):
- """ polygons assumed to be a list of arrays with shape (id,2,c) """
-
- if isinstance(polygons,np.ndarray):
- polygons = (polygons,)
-
- fname = Path(fname)
- if fname.suffix == '.zip':
- fname = fname.with_suffix('')
-
- with ZipFile(str(fname)+'.zip', mode='w', compression=compression) as roizip:
- for pos,polygroup in enumerate(polygons,start=1):
- for i,poly in enumerate(polygroup,start=1):
- roi = polyroi_bytearray(poly[1],poly[0], pos=(pos if set_position else None), subpixel=subpixel)
- roizip.writestr('{pos:03d}_{i:03d}.roi'.format(pos=pos,i=i), roi)
-
-
-def optimize_threshold(Y, Yhat, model, nms_thresh, measure='accuracy', iou_threshs=[0.3,0.5,0.7], bracket=None, tol=1e-2, maxiter=20, verbose=1):
- """ Tune prob_thresh for provided (fixed) nms_thresh to maximize matching score (for given measure and averaged over iou_threshs). """
- np.isscalar(nms_thresh) or _raise(ValueError("nms_thresh must be a scalar"))
- iou_threshs = [iou_threshs] if np.isscalar(iou_threshs) else iou_threshs
- values = dict()
-
- if bracket is None:
- max_prob = max([np.max(prob) for prob, dist in Yhat])
- bracket = max_prob/2, max_prob
- # print("bracket =", bracket)
-
- with tqdm(total=maxiter, disable=(verbose!=1), desc="NMS threshold = %g" % nms_thresh) as progress:
-
- def fn(thr):
- prob_thresh = np.clip(thr, *bracket)
- value = values.get(prob_thresh)
- if value is None:
- Y_instances = [model._instances_from_prediction(y.shape, *prob_dist, prob_thresh=prob_thresh, nms_thresh=nms_thresh)[0] for y,prob_dist in zip(Y,Yhat)]
- stats = matching_dataset(Y, Y_instances, thresh=iou_threshs, show_progress=False, parallel=True)
- values[prob_thresh] = value = np.mean([s._asdict()[measure] for s in stats])
- if verbose > 1:
- print("{now} thresh: {prob_thresh:f} {measure}: {value:f}".format(
- now = datetime.datetime.now().strftime('%H:%M:%S'),
- prob_thresh = prob_thresh,
- measure = measure,
- value = value,
- ), flush=True)
- else:
- progress.update()
- progress.set_postfix_str("{prob_thresh:.3f} -> {value:.3f}".format(prob_thresh=prob_thresh, value=value))
- progress.refresh()
- return -value
-
- opt = minimize_scalar(fn, method='golden', bracket=bracket, tol=tol, options={'maxiter': maxiter})
-
- verbose > 1 and print('\n',opt, flush=True)
- return opt.x, -opt.fun
-
-
-def _invert_dict(d):
- """ return v-> [k_1,k_2,k_3....] for k,v in d"""
- res = defaultdict(list)
- for k,v in d.items():
- res[v].append(k)
- return res
-
-
-def mask_to_categorical(y, n_classes, classes, return_cls_dict=False):
- """generates a multi-channel categorical class map
-
- Parameters
- ----------
- y : n-dimensional ndarray
- integer label array
- n_classes : int
- Number of different classes (without background)
- classes: dict, integer, or None
- the label to class assignment
- can be
- - dict {label -> class_id}
- the value of class_id can be
- 0 -> background class
- 1...n_classes -> the respective object class (1 ... n_classes)
- None -> ignore object (prob is set to -1 for the pixels of the object, except for background class)
- - single integer value or None -> broadcast value to all labels
-
- Returns
- -------
- probability map of shape y.shape+(n_classes+1,) (first channel is background)
-
- """
-
- _check_label_array(y, 'y')
- if not (np.issubdtype(type(n_classes), np.integer) and n_classes>=1):
- raise ValueError(f"n_classes is '{n_classes}' but should be a positive integer")
-
- y_labels = np.unique(y[y>0]).tolist()
-
- # build dict class_id -> labels (inverse of classes)
- if np.issubdtype(type(classes), np.integer) or classes is None:
- classes = dict((k,classes) for k in y_labels)
- elif isinstance(classes, dict):
- pass
- else:
- raise ValueError("classes should be dict, single scalar, or None!")
-
- if not set(y_labels).issubset(set(classes.keys())):
- raise ValueError(f"all gt labels should be present in class dict provided \ngt_labels found\n{set(y_labels)}\nclass dict labels provided\n{set(classes.keys())}")
-
- cls_dict = _invert_dict(classes)
-
- # prob map
- y_mask = np.zeros(y.shape+(n_classes+1,), np.float32)
-
- for cls, labels in cls_dict.items():
- if cls is None:
- # prob == -1 will be used in the loss to ignore object
- y_mask[np.isin(y, labels)] = -1
- elif np.issubdtype(type(cls), np.integer) and 0 <= cls <= n_classes:
- y_mask[...,cls] = np.isin(y, labels)
- else:
- raise ValueError(f"Wrong class id '{cls}' (for n_classes={n_classes})")
-
- # set 0/1 background prob (unaffected by None values for class ids)
- y_mask[...,0] = (y==0)
-
- if return_cls_dict:
- return y_mask, cls_dict
- else:
- return y_mask
-
-
-def _is_floatarray(x):
- return isinstance(x.dtype.type(0),np.floating)
-
-
-def abspath(root, relpath):
- from pathlib import Path
- root = Path(root)
- if root.is_dir():
- path = root/relpath
- else:
- path = root.parent/relpath
- return str(path.absolute())
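A hedged usage sketch for `mask_to_categorical` defined above; the import path is an assumption based on this Space's package layout. Label 1 maps to class 1, label 2 is ignored:

```python
import numpy as np
from stardist_pkg.utils import mask_to_categorical  # assumed import path

y = np.array([[0, 1, 1],
              [0, 2, 2]])
y_mask = mask_to_categorical(y, n_classes=2, classes={1: 1, 2: None})

print(y_mask.shape)    # (2, 3, 3): background channel + one channel per class
print(y_mask[..., 0])  # 1.0 where y == 0, else 0.0
print(y_mask[..., 1])  # 1.0 at label-1 pixels; the ignored label-2 pixels are set to -1
```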
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/version.py b/spaces/Lianjd/stock_dashboard/backtrader/version.py
deleted file mode 100644
index 9e8a77310aeba15fe0f1d61b9640d8eff707c0dc..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/version.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-__version__ = '1.9.76.123'
-
-__btversion__ = tuple(int(x) for x in __version__.split('.'))
diff --git a/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py b/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py
deleted file mode 100644
index 42cbbb3038612a44571765905e8526553f462663..0000000000000000000000000000000000000000
--- a/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py
+++ /dev/null
@@ -1,391 +0,0 @@
-
-import re
-import math
-import numpy as np
-import torch
-
-# Code from https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/8e2aeee4a127b295bfc880800e4a312e0f049b85, modified.
-
-class PromptChunk:
- """
- This object contains token ids, weight (multipliers:1.4) and textual inversion embedding info for a chunk of prompt.
- If a prompt is short, it is represented by one PromptChunk, otherwise, multiple are necessary.
- Each PromptChunk contains an exact amount of tokens - 77, which includes one for start and end token,
- so just 75 tokens from prompt.
- """
-
- def __init__(self):
- self.tokens = []
- self.multipliers = []
- self.fixes = []
-
-
-class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
- """A pytorch module that is a wrapper for FrozenCLIPEmbedder module. it enhances FrozenCLIPEmbedder, making it possible to
- have unlimited prompt length and assign weights to tokens in prompt.
- """
-
- def __init__(self, text_encoder, enable_emphasis=True):
- super().__init__()
-
- self.device = lambda: text_encoder.device
- self.enable_emphasis = enable_emphasis
- """Original FrozenCLIPEmbedder module; can also be FrozenOpenCLIPEmbedder or xlmr.BertSeriesModelWithTransformation,
- depending on model."""
-
- self.chunk_length = 75
-
- def empty_chunk(self):
- """creates an empty PromptChunk and returns it"""
-
- chunk = PromptChunk()
- chunk.tokens = [self.id_start] + [self.id_end] * (self.chunk_length + 1)
- chunk.multipliers = [1.0] * (self.chunk_length + 2)
- return chunk
-
- def get_target_prompt_token_count(self, token_count):
- """returns the maximum number of tokens a prompt of a known length can have before it requires one more PromptChunk to be represented"""
-
- return math.ceil(max(token_count, 1) / self.chunk_length) * self.chunk_length
-
- def tokenize_line(self, line):
- """
- this transforms a single prompt into a list of PromptChunk objects - as many as needed to
- represent the prompt.
- Returns the list and the total number of tokens in the prompt.
- """
-
- if self.enable_emphasis:
- parsed = parse_prompt_attention(line)
- else:
- parsed = [[line, 1.0]]
-
- tokenized = self.tokenize([text for text, _ in parsed])
-
- chunks = []
- chunk = PromptChunk()
- token_count = 0
- last_comma = -1
-
- def next_chunk(is_last=False):
- """puts current chunk into the list of results and produces the next one - empty;
-            if is_last is true, tokens <eos> tokens at the end won't add to token_count"""
- nonlocal token_count
- nonlocal last_comma
- nonlocal chunk
-
- if is_last:
- token_count += len(chunk.tokens)
- else:
- token_count += self.chunk_length
-
- to_add = self.chunk_length - len(chunk.tokens)
- if to_add > 0:
- chunk.tokens += [self.id_end] * to_add
- chunk.multipliers += [1.0] * to_add
-
- chunk.tokens = [self.id_start] + chunk.tokens + [self.id_end]
- chunk.multipliers = [1.0] + chunk.multipliers + [1.0]
-
- last_comma = -1
- chunks.append(chunk)
- chunk = PromptChunk()
-
- comma_padding_backtrack = 20 # default value in https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/shared.py#L410
- for tokens, (text, weight) in zip(tokenized, parsed):
- if text == "BREAK" and weight == -1:
- next_chunk()
- continue
-
- position = 0
- while position < len(tokens):
- token = tokens[position]
-
- if token == self.comma_token:
- last_comma = len(chunk.tokens)
-
-                # this is when we are at the end of the allotted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack
- # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next.
- elif (
- comma_padding_backtrack != 0
- and len(chunk.tokens) == self.chunk_length
- and last_comma != -1
- and len(chunk.tokens) - last_comma <= comma_padding_backtrack
- ):
- break_location = last_comma + 1
-
- reloc_tokens = chunk.tokens[break_location:]
- reloc_mults = chunk.multipliers[break_location:]
-
- chunk.tokens = chunk.tokens[:break_location]
- chunk.multipliers = chunk.multipliers[:break_location]
-
- next_chunk()
- chunk.tokens = reloc_tokens
- chunk.multipliers = reloc_mults
-
- if len(chunk.tokens) == self.chunk_length:
- next_chunk()
-
- chunk.tokens.append(token)
- chunk.multipliers.append(weight)
- position += 1
-
- if len(chunk.tokens) > 0 or len(chunks) == 0:
- next_chunk(is_last=True)
-
- return chunks, token_count
-
- def process_texts(self, texts):
- """
- Accepts a list of texts and calls tokenize_line() on each, with cache. Returns the list of results and maximum
- length, in tokens, of all texts.
- """
-
- token_count = 0
-
- cache = {}
- batch_chunks = []
- for line in texts:
- if line in cache:
- chunks = cache[line]
- else:
- chunks, current_token_count = self.tokenize_line(line)
- token_count = max(current_token_count, token_count)
-
- cache[line] = chunks
-
- batch_chunks.append(chunks)
-
- return batch_chunks, token_count
-
- def forward(self, texts):
- """
- Accepts an array of texts; Passes texts through transformers network to create a tensor with numerical representation of those texts.
- Returns a tensor with shape of (B, T, C), where B is length of the array; T is length, in tokens, of texts (including padding) - T will
- be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, and for SD2 it's 1024.
- An example shape returned by this function can be: (2, 77, 768).
-        Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one element
- is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream"
- """
-
- batch_chunks, token_count = self.process_texts(texts)
- chunk_count = max([len(x) for x in batch_chunks])
-
- zs = []
- ts = []
- for i in range(chunk_count):
- batch_chunk = [
- chunks[i] if i < len(chunks) else self.empty_chunk()
- for chunks in batch_chunks
- ]
-
- tokens = [x.tokens for x in batch_chunk]
- multipliers = [x.multipliers for x in batch_chunk]
- # self.embeddings.fixes = [x.fixes for x in batch_chunk]
-
- # for fixes in self.embeddings.fixes:
- # for position, embedding in fixes:
- # used_embeddings[embedding.name] = embedding
-
- z = self.process_tokens(tokens, multipliers)
- zs.append(z)
- ts.append(tokens)
-
- return np.hstack(ts), torch.hstack(zs)
-
- def process_tokens(self, remade_batch_tokens, batch_multipliers):
- """
- sends one single prompt chunk to be encoded by transformers neural network.
- remade_batch_tokens is a batch of tokens - a list, where every element is a list of tokens; usually
- there are exactly 77 tokens in the list. batch_multipliers is the same but for multipliers instead of tokens.
- Multipliers are used to give more or less weight to the outputs of transformers network. Each multiplier
- corresponds to one token.
- """
- tokens = torch.asarray(remade_batch_tokens).to(self.device())
-
- # this is for SD2: SD1 uses the same token for padding and end of text, while SD2 uses different ones.
- if self.id_end != self.id_pad:
- for batch_pos in range(len(remade_batch_tokens)):
- index = remade_batch_tokens[batch_pos].index(self.id_end)
- tokens[batch_pos, index + 1 : tokens.shape[1]] = self.id_pad
-
- z = self.encode_with_transformers(tokens)
-
- # restoring original mean is likely not correct, but it seems to work well to prevent artifacts that happen otherwise
- batch_multipliers = torch.asarray(batch_multipliers).to(self.device())
- original_mean = z.mean()
- z = z * batch_multipliers.reshape(batch_multipliers.shape + (1,)).expand(z.shape)
- new_mean = z.mean()
- z = z * (original_mean / new_mean)
-
- return z
-
-
-class FrozenCLIPEmbedderWithCustomWords(FrozenCLIPEmbedderWithCustomWordsBase):
- def __init__(self, tokenizer, text_encoder):
- super().__init__(text_encoder)
- self.tokenizer = tokenizer
- self.text_encoder = text_encoder
-
- vocab = self.tokenizer.get_vocab()
-
- self.comma_token = vocab.get(",", None)
-
- self.token_mults = {}
- tokens_with_parens = [
- (k, v)
- for k, v in vocab.items()
- if "(" in k or ")" in k or "[" in k or "]" in k
- ]
- for text, ident in tokens_with_parens:
- mult = 1.0
- for c in text:
- if c == "[":
- mult /= 1.1
- if c == "]":
- mult *= 1.1
- if c == "(":
- mult *= 1.1
- if c == ")":
- mult /= 1.1
-
- if mult != 1.0:
- self.token_mults[ident] = mult
-
- self.id_start = self.tokenizer.bos_token_id
- self.id_end = self.tokenizer.eos_token_id
- self.id_pad = self.id_end
-
- def tokenize(self, texts):
- tokenized = self.tokenizer(
- texts, truncation=False, add_special_tokens=False
- )["input_ids"]
-
- return tokenized
-
- def encode_with_transformers(self, tokens):
- CLIP_stop_at_last_layers = 1
- tokens = tokens.to(self.text_encoder.device)
- outputs = self.text_encoder(tokens, output_hidden_states=True)
-
- if CLIP_stop_at_last_layers > 1:
- z = outputs.hidden_states[-CLIP_stop_at_last_layers]
- z = self.text_encoder.text_model.final_layer_norm(z)
- else:
- z = outputs.last_hidden_state
-
- return z
-
-
-re_attention = re.compile(
- r"""
-\\\(|
-\\\)|
-\\\[|
-\\]|
-\\\\|
-\\|
-\(|
-\[|
-:([+-]?[.\d]+)\)|
-\)|
-]|
-[^\\()\[\]:]+|
-:
-""",
- re.X,
-)
-
-re_break = re.compile(r"\s*\bBREAK\b\s*", re.S)
-
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
-
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- parts = re.split(re_break, text)
- for i, part in enumerate(parts):
- if i > 0:
- res.append(["BREAK", -1])
- res.append([part, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
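The comment in `process_tokens` notes that after scaling token embeddings by their multipliers, the output is rescaled so its overall mean matches the unweighted result. A self-contained sketch of that step (shapes and the emphasised token range are illustrative):

```python
import torch

z = torch.randn(1, 77, 768)      # stand-in for the text-encoder hidden states
multipliers = torch.ones(1, 77)
multipliers[0, 5:10] = 1.1       # tokens inside "(...)" get a 1.1 multiplier

original_mean = z.mean()
z = z * multipliers.reshape(multipliers.shape + (1,)).expand(z.shape)
z = z * (original_mean / z.mean())

print(original_mean.item(), z.mean().item())  # the two means now match (up to float error)
```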
diff --git a/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md b/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md
deleted file mode 100644
index d514de30f54bd8931568c029a3bbd3aa3eacdbb1..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md
+++ /dev/null
@@ -1,307 +0,0 @@
-> **Hinweis**
->
-> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
-
-# GPT Akademisch optimiert (GPT Academic)
-
-**Wenn Ihnen dieses Projekt gefällt, geben Sie ihm bitte einen Stern; wenn Sie bessere Tastenkombinationen oder Funktions-Plugins entwickelt haben, können Sie gerne einen Pull Request eröffnen.**
-
-Wenn Sie dieses Projekt mögen, geben Sie ihm bitte einen Stern. Wenn Sie weitere nützliche wissenschaftliche Abkürzungen oder funktionale Plugins entwickelt haben, können Sie gerne ein Problem oder eine Pull-Anforderung öffnen. Wir haben auch ein README in [Englisch|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md), das von diesem Projekt selbst übersetzt wurde.
-Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `multi_language.py` (experimentell).
-
-> **Note**
->
-> 1. Please note that only function plugins (buttons) marked in **red** can read files, and some plugins are located in the **dropdown menu** of the plugin area. We also welcome and handle any new function plugin with the **highest priority**.
->
-> 2. The functionality of each file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions evolve, you can also run the related function plugins at any time to have GPT regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation instructions](#Installation).
->
-> 3. This project is compatible with and encourages the use of domestic language models such as ChatGLM, RWKV, Pangu, etc. Multiple api-keys can be specified in the configuration file, e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To change an `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
-
-Feature | Description
---- | ---
-One-click polishing | Supports one-click polishing and one-click searching for grammatical errors in academic papers
-One-click Chinese-English translation | One-click Chinese-English translation
-One-click code explanation | Displays, explains, and generates code, and adds comments to code
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
-Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions). Plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Self-program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
-Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function plugin] One-click explanation of a full LaTeX/PDF paper and generation of a summary
-LaTeX full-text translation and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of a LaTeX paper
-Batch comment generation | [Function plugin] One-click batch generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages mentioned above?
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after execution
-[Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a PDF paper and translates the full text (multi-threaded)
-[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arXiv paper URL to translate the abstract and download the PDF with one click
-[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Enter any Google Scholar search URL and let GPT help you write the [related works](https://www.bilibili.com/video/BV1GP411U7Az/) section
-Internet information aggregation + GPT | [Function plugin] Let GPT [gather information from the internet first](https://www.bilibili.com/video/BV1om4y127ck/) and then answer a question, so its knowledge never goes stale
-Display of formulas / images / tables | Shows formulas in both [TeX form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), supports formula and code highlighting
-Multi-threaded plugin support | Supports calling ChatGPT with multiple threads to [batch-process](https://www.bilibili.com/video/BV1FT411H7c5/) text or programs
-Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Append ```/?__theme=dark``` to the end of the browser URL to switch to the dark theme
-[Support for multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) interface support | Being served by GPT-3.5, GPT-4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
-Access to more LLM models, support for [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Added the Newbing interface (new Bing), introduced support for Tsinghua's [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu alpha](https://openi.org.cn/pangu/)
-More new features (such as image generation) …… | See the end of this document ……
-
-- New interface (change the LAYOUT option in `config.py` to switch between a side-by-side layout and a top-bottom layout)
-
-
-
- All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard.
-
-
-
-
-- Proofreading/Correcting
-
-
-
-
-- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading.
-
-
-
-
-- Don't feel like reading the project code? Just feed the whole project to ChatGPT and have it explain itself.
-
-
-
-
-- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
-
-
-
-
----
-# Installation
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure API_KEY
-
-Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. At startup, the program first checks whether a private configuration file named "config_private.py" exists and, if so, uses its values to override those in "config.py". If you understand this reading logic, we strongly recommend creating a new configuration file named "config_private.py" next to "config.py" and copying the settings you change into it. "config_private.py" is not tracked by git, which keeps your private information safer. The project also supports configuring most options through `environment variables`; the format of the environment variables follows the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`.)
-
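The reading priority described above can be pictured with a small sketch. This is only an illustration of the documented order (environment variable > `config_private.py` > `config.py`); the helper name below is hypothetical and not the project's actual implementation.

```python
import importlib
import os

def read_single_conf(key, default=None):
    """Illustrative sketch of the documented priority:
    environment variable > config_private.py > config.py."""
    if key in os.environ:                      # highest priority
        return os.environ[key]
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, key):              # overrides config.py
            return getattr(private, key)
    except ImportError:
        pass
    try:
        public = importlib.import_module("config")
        return getattr(public, key, default)   # lowest priority
    except ImportError:
        return default

# Example: an API_KEY set in the environment wins over both config files.
# os.environ["API_KEY"] = "sk-temporary"; read_single_conf("API_KEY")
```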
-
-3. Install dependencies
-```sh
-# (Option I: If familiar with Python) (Python version 3.9 or above; the newer the better). Note: use the official pip source or the Aliyun pip source; temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create an anaconda environment
-conda activate gptac_venv # Activate the anaconda environment
-python -m pip install -r requirements.txt # Same step as pip installation
-```
-
-Click to expand if you want to use Tsinghua ChatGLM / Fudan MOSS as the backend
-
-
-[Optional Step] To use Tsinghua ChatGLM / Fudan MOSS as the backend, additional dependencies need to be installed (prerequisites: familiarity with Python, experience with PyTorch, and a sufficiently powerful machine):
-```sh
-# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional Step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path
-
-# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
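As a hedged illustration of the precision switch mentioned in Optional Step I, the snippet below loads the int4-quantized ChatGLM checkpoint with the standard `transformers` API. Whether this fits your hardware, and whether the checkpoint is still hosted under that name, are assumptions; the authoritative place to change this in the project is `request_llm/bridge_chatglm.py`.

```python
# Sketch of the precision switch: load the int4 quantized ChatGLM checkpoint
# instead of the full-precision one. Requires the `transformers` package and
# will download the model weights on first use.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "THUDM/chatglm-6b-int4"  # lower-memory variant of THUDM/chatglm-6b

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).float()  # use .half().cuda() on a GPU
model = model.eval()
```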
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Testing Function Plugin
-```
-- Test the function plugin template (it asks GPT what happened in history on this day); you can use this function as a template to implement more complex functionality.
- Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation-Method 2: Using Docker
-
-1. Only ChatGPT (Recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Download the project
-cd gpt_academic # Enter the path
-nano config.py      # Edit config.py with any text editor; configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic . # Install
-
-# (Last step, option 1) On Linux, using `--net=host` is more convenient and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step, option 2) On macOS/Windows, only the -p option can be used to expose the container's port (e.g. 50923) to a port on the host.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker)
-
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-3. ChatGPT+LLAMA+Pangu+RWKV(Requires familiarity with Docker)
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-
-## Installation-Method 3: Other Deployment Options
-
-1. How to use reverse proxy URL/Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py` (see the sketch after this list).
-
-2. Remote cloud server deployment (requires cloud server knowledge and experience)
-Please visit [Deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL 2 (Windows subsystem for Linux)
-Please visit [Deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run at a secondary URL (such as `http://localhost/subpath`)
-Please visit [FastAPI operating instructions](docs/WithFastapi.md)
-
-5. Use docker-compose to run
-Please read docker-compose.yml and follow the prompts to operate.
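Referring back to item 1 of this list: a hypothetical sketch of what a reverse-proxy redirect in `config.py` could look like. The placeholder host must be replaced with your own endpoint, and the exact option name and format are whatever `config.py` itself documents.

```python
# Hypothetical example of a reverse-proxy redirect; the placeholder host below
# is not a real endpoint and must be replaced with your own.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://your-reverse-proxy.example.com/v1/chat/completions",
}
```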
-
----
-# Advanced Usage
-## Customize new convenience buttons / custom function plugins.
-
-1. Customize new convenience buttons (Academic Shortcut Keys)
-Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.)
-For example
-```
-"Super English to Chinese": {
- # Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc.
- "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n",
-
- # Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes.
- "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you can imagine (and some you can't).
-Writing and debugging plugins is easy in this project: as long as you have some knowledge of Python, you can implement your own plugin by imitating the template we provide.
-For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
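For orientation only, here is a minimal sketch of the general shape of a plugin. The authoritative signature and UI helpers are defined by the templates in `crazy_functions/` and the Function Plugin Guide above, so treat every name below as illustrative.

```python
# Illustrative sketch only: real plugins follow the project's own template and
# yield UI updates through its helper functions. This toy plugin just echoes a
# word count back into the chat history.
def word_count_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """Toy plugin: report how many words the user's input contains."""
    reply = f"Your input contains {len(txt.split())} words."
    chatbot.append((txt, reply))   # chatbot is a list of (user, assistant) pairs
    history.extend([txt, reply])
    yield chatbot, history         # real plugins yield via the project's update_ui helper
```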
-
----
-# Latest Update
-## New feature dynamics
-
-1. Dialogue saving. In the function plugin area, call "Save the current dialogue" to save the current dialogue as a readable and restorable HTML file. In the function plugin area (dropdown menu), call "Load dialogue history archive" to restore a previous dialogue. Tip: clicking "Load dialogue history archive" without specifying a file lets you browse the cached HTML archives, and clicking "Delete all local dialogue history records" deletes all cached HTML archives.
-
-
-
-
-2. Report generation. Most plugins generate a work report after they finish running.
-
-
-
-
-
-
-3. Modular function design: simple interfaces that support powerful functionality.
-
-
-
-
-
-4. This is an open-source project that can "translate itself".
-
-
-
-
-5. Translating other open-source projects is no problem either.
-
-
-
-
-
-
-
-
-6. A small feature that decorates the interface with [`live2d`](https://github.com/fghrsh/live2d_demo) (disabled by default; requires changes to `config.py`).
-
-
-
-
-7. New MOSS language model support.
-
-
-
-
-8. OpenAI image generation.
-
-
-
-
-9. OpenAI audio analysis and summarization.
-
-
-
-
-10. LaTeX proofreading of the full text.
-
-
-
-
-
-## Versions:
-- Version 3.5 (todo): call all of this project's function plugins using natural language (high priority).
-- Version 3.4 (todo): improved multi-threading support for locally deployed large language models (LLMs).
-- Version 3.3: + internet information aggregation feature
-- Version 3.2: function plugins support more parameter interfaces (dialogue saving, interpreting code in any language + querying any combination of LLMs at the same time)
-- Version 3.1: support for querying multiple GPT models at the same time! Support for API2D and for load balancing across multiple api-keys.
-- Version 3.0: support for ChatGLM and other small LLMs
-- Version 2.6: restructured the plugin architecture to improve interactivity; added more plugins
-- Version 2.5: self-updating; fixed the problem of overly long text and token overflow when summarizing the source code of large projects
-- Version 2.4: (1) new full-text PDF translation feature; (2) new feature for switching the position of the input area; (3) new vertical layout option; (4) optimized multi-threaded function plugins.
-- Version 2.3: improved multi-threaded interactivity
-- Version 2.2: function plugins support hot reloading
-- Version 2.1: collapsible layout
-- Version 2.0: introduction of modular function plugins
-- Version 1.0: basic functions
-
-gpt_academic developer QQ group 2: 610599535
-
-- Known issues
-    - Some browser translation plugins interfere with the front-end of this software.
-    - A Gradio version that is either too high or too low will cause various exceptions.
-
-## References and learning
-
-```
-The code draws on the designs of many other excellent projects, in particular:
-
-# Project 1: Tsinghua University's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua University's JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
deleted file mode 100644
index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DMHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- filter_sizes=(1, 3, 5, 7),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py
deleted file mode 100644
index 16c7dd47cadd53cf1caaa194e28a343f2aacc599..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py
+++ /dev/null
@@ -1,326 +0,0 @@
-from collections import OrderedDict
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch
-
-
-def f_score(precision, recall, beta=1):
- """calcuate the f-score value.
-
- Args:
- precision (float | torch.Tensor): The precision value.
- recall (float | torch.Tensor): The recall value.
- beta (int): Determines the weight of recall in the combined score.
- Default: False.
-
- Returns:
- [torch.tensor]: The f-score value.
- """
- score = (1 + beta**2) * (precision * recall) / (
- (beta**2 * precision) + recall)
- return score
-
-
-def intersect_and_union(pred_label,
- label,
- num_classes,
- ignore_index,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate intersection and Union.
-
- Args:
- pred_label (ndarray | str): Prediction segmentation map
- or predict result filename.
- label (ndarray | str): Ground truth segmentation map
- or label filename.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- label_map (dict): Mapping old labels to new labels. The parameter will
- work only when label is str. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. The parameter will
- work only when label is str. Default: False.
-
- Returns:
- torch.Tensor: The intersection of prediction and ground truth
- histogram on all classes.
- torch.Tensor: The union of prediction and ground truth histogram on
- all classes.
- torch.Tensor: The prediction histogram on all classes.
- torch.Tensor: The ground truth histogram on all classes.
- """
-
- if isinstance(pred_label, str):
- pred_label = torch.from_numpy(np.load(pred_label))
- else:
- pred_label = torch.from_numpy((pred_label))
-
- if isinstance(label, str):
- label = torch.from_numpy(
- mmcv.imread(label, flag='unchanged', backend='pillow'))
- else:
- label = torch.from_numpy(label)
-
- if label_map is not None:
- for old_id, new_id in label_map.items():
- label[label == old_id] = new_id
- if reduce_zero_label:
- label[label == 0] = 255
- label = label - 1
- label[label == 254] = 255
-
- mask = (label != ignore_index)
- pred_label = pred_label[mask]
- label = label[mask]
-
- intersect = pred_label[pred_label == label]
- area_intersect = torch.histc(
- intersect.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_pred_label = torch.histc(
- pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_label = torch.histc(
- label.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_union = area_pred_label + area_label - area_intersect
- return area_intersect, area_union, area_pred_label, area_label
-
-
-def total_intersect_and_union(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Total Intersection and Union.
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-
- Returns:
- ndarray: The intersection of prediction and ground truth histogram
- on all classes.
- ndarray: The union of prediction and ground truth histogram on all
- classes.
- ndarray: The prediction histogram on all classes.
- ndarray: The ground truth histogram on all classes.
- """
- num_imgs = len(results)
- assert len(gt_seg_maps) == num_imgs
- total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_union = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_label = torch.zeros((num_classes, ), dtype=torch.float64)
- for i in range(num_imgs):
- area_intersect, area_union, area_pred_label, area_label = \
- intersect_and_union(
- results[i], gt_seg_maps[i], num_classes, ignore_index,
- label_map, reduce_zero_label)
- total_area_intersect += area_intersect
- total_area_union += area_union
- total_area_pred_label += area_pred_label
- total_area_label += area_label
- return total_area_intersect, total_area_union, total_area_pred_label, \
- total_area_label
-
-
-def mean_iou(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Mean Intersection and Union (mIoU)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-
- Returns:
- dict[str, float | ndarray]:
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category IoU, shape (num_classes, ).
- """
- iou_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mIoU'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label)
- return iou_result
-
-
-def mean_dice(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Mean Dice (mDice)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Wether ignore zero label. Default: False.
-
- Returns:
- dict[str, float | ndarray]: Default metrics.
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category dice, shape (num_classes, ).
- """
-
- dice_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mDice'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label)
- return dice_result
-
-
-def mean_fscore(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False,
- beta=1):
- """Calculate Mean Intersection and Union (mIoU)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-        beta (int): Determines the weight of recall in the combined score.
-            Default: 1.
-
-
- Returns:
- dict[str, float | ndarray]: Default metrics.
- float: Overall accuracy on all images.
- ndarray: Per category recall, shape (num_classes, ).
- ndarray: Per category precision, shape (num_classes, ).
- ndarray: Per category f-score, shape (num_classes, ).
- """
- fscore_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mFscore'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label,
- beta=beta)
- return fscore_result
-
-
-def eval_metrics(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- metrics=['mIoU'],
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False,
- beta=1):
- """Calculate evaluation metrics
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
-        metrics (list[str] | str): Metrics to be evaluated: 'mIoU', 'mDice' and 'mFscore'.
-        nan_to_num (int, optional): If specified, NaN values will be replaced
-            by the numbers defined by the user. Default: None.
-        label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
- Returns:
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category evaluation metrics, shape (num_classes, ).
- """
- if isinstance(metrics, str):
- metrics = [metrics]
- allowed_metrics = ['mIoU', 'mDice', 'mFscore']
- if not set(metrics).issubset(set(allowed_metrics)):
- raise KeyError('metrics {} is not supported'.format(metrics))
-
- total_area_intersect, total_area_union, total_area_pred_label, \
- total_area_label = total_intersect_and_union(
- results, gt_seg_maps, num_classes, ignore_index, label_map,
- reduce_zero_label)
- all_acc = total_area_intersect.sum() / total_area_label.sum()
- ret_metrics = OrderedDict({'aAcc': all_acc})
- for metric in metrics:
- if metric == 'mIoU':
- iou = total_area_intersect / total_area_union
- acc = total_area_intersect / total_area_label
- ret_metrics['IoU'] = iou
- ret_metrics['Acc'] = acc
- elif metric == 'mDice':
- dice = 2 * total_area_intersect / (
- total_area_pred_label + total_area_label)
- acc = total_area_intersect / total_area_label
- ret_metrics['Dice'] = dice
- ret_metrics['Acc'] = acc
- elif metric == 'mFscore':
- precision = total_area_intersect / total_area_pred_label
- recall = total_area_intersect / total_area_label
- f_value = torch.tensor(
- [f_score(x[0], x[1], beta) for x in zip(precision, recall)])
- ret_metrics['Fscore'] = f_value
- ret_metrics['Precision'] = precision
- ret_metrics['Recall'] = recall
-
- ret_metrics = {
- metric: value.numpy()
- for metric, value in ret_metrics.items()
- }
- if nan_to_num is not None:
- ret_metrics = OrderedDict({
- metric: np.nan_to_num(metric_value, nan=nan_to_num)
- for metric, metric_value in ret_metrics.items()
- })
- return ret_metrics
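Assuming the metrics module above is importable (adjust the import path to wherever it lives in your checkout), a quick sanity check of `eval_metrics` on toy arrays looks like this:

```python
# Toy sanity check of eval_metrics; the arrays are illustrative 2x2 "maps".
import numpy as np
from annotator.uniformer.mmseg.core.evaluation.metrics import eval_metrics

pred = [np.array([[0, 1], [1, 1]])]   # one predicted segmentation map
gt   = [np.array([[0, 1], [0, 1]])]   # the matching ground truth

ret = eval_metrics(pred, gt, num_classes=2, ignore_index=255, metrics=["mIoU"])
print(ret["aAcc"])   # overall pixel accuracy: 3/4 = 0.75
print(ret["IoU"])    # per-class IoU: class 0 -> 1/2, class 1 -> 2/3
```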
diff --git a/spaces/NN520/AI/src/components/settings.tsx b/spaces/NN520/AI/src/components/settings.tsx
deleted file mode 100644
index e18aa5b484852bb5d047442a06e7143b6893cb0d..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/components/settings.tsx
+++ /dev/null
@@ -1,141 +0,0 @@
-import { useEffect, useState } from 'react'
-import { useAtom } from 'jotai'
-import { Switch } from '@headlessui/react'
-import { toast } from 'react-hot-toast'
-import { hashAtom, voiceAtom } from '@/state'
-import {
- Dialog,
- DialogContent,
- DialogDescription,
- DialogFooter,
- DialogHeader,
- DialogTitle
-} from '@/components/ui/dialog'
-import { Button } from './ui/button'
-import { Input } from './ui/input'
-import { ChunkKeys, parseCookies, extraCurlFromCookie, randomIP, encodeHeadersToCookie } from '@/lib/utils'
-import { ExternalLink } from './external-link'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function Settings() {
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- const [loc, setLoc] = useAtom(hashAtom)
- const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys)))
- const [enableTTS, setEnableTTS] = useAtom(voiceAtom)
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
-
- if (loc === 'settings') {
- return (
-
- )
- } else if (loc === 'voice') {
- return (
-
- )
- }
- return null
-}
diff --git a/spaces/Najaf-Zawar/Old_Image-Restoration/README.md b/spaces/Najaf-Zawar/Old_Image-Restoration/README.md
deleted file mode 100644
index bff8aecb8b4da59d670a9cecb0b6596b45ea5439..0000000000000000000000000000000000000000
--- a/spaces/Najaf-Zawar/Old_Image-Restoration/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Old Image-Restoration
-emoji: 💻
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NimaKL/FireWatch5k/app.py b/spaces/NimaKL/FireWatch5k/app.py
deleted file mode 100644
index a1e236c71742cef91edd503d2f69b1b85abdb22d..0000000000000000000000000000000000000000
--- a/spaces/NimaKL/FireWatch5k/app.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import gradio as gr
-from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
-
-model_name = "NimaKL/FireWatch_tiny_75k"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForSequenceClassification.from_pretrained(model_name)
-
-def predict(text):
- inputs = tokenizer(text, return_tensors="pt")
- outputs = model(**inputs)
- logits = outputs.logits
- label_id = logits.argmax(axis=1).item()
- return "Danger of fire hazard!" if label_id == 1 else "It is unlikely that a fire will start in this area."
-
-# Define a custom CSS style
-custom_style = """
- body {
-        background-color: #262626;
- }
-"""
-
-# Define a function to generate HTML for embedding the Google Sheets document
-def get_sheet_html():
- return f''
-
-io = gr.Interface(
- fn=predict,
- inputs="text",
- outputs="text",
- title="FireWatch",
-    description="Predict whether a data row describes a fire hazard or not. Here is a Google Sheets document containing sample data (you can use it for testing); it is a heavy document, so it might take a while to load.",
- output_description="Prediction",
- examples=[['-26.76123, 147.15512, 393.02, 203.63'], ['-26.7598, 147.14514, 361.54, 79.4'], ['-25.70059, 149.48932, 313.9, 5.15'], ['-24.4318, 151.83102, 307.98, 8.79'], ['-23.21878, 148.91298, 314.08, 7.4'], ['7.87518, 19.9241, 316.32, 39.63'], ['-20.10942, 148.14326, 314.39, 8.8'], ['7.87772, 19.9048, 304.14, 13.43'], ['-20.79866, 124.46834, 366.74, 89.06']],
- theme="Streamlit",
- css=custom_style
-)
-io.launch()
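A usage example for the `predict` function defined above; it downloads the `NimaKL/FireWatch_tiny_75k` checkpoint from the Hugging Face Hub on first call, and the continued availability of that checkpoint is an assumption.

```python
# Classify one sample row (latitude, longitude, brightness, FRP) with predict().
print(predict("-26.76123, 147.15512, 393.02, 203.63"))
# -> "Danger of fire hazard!" or "It is unlikely that a fire will start in this area."
```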
diff --git a/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md b/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md
deleted file mode 100644
index e733521a0fa2780c765b1c90ba666a1ffbf1726a..0000000000000000000000000000000000000000
--- a/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Silly Ted-Talk Snippet Generator
-emoji: 🧑🏫
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.1
-app_file: app.py
-pinned: false
-license: cc-by-nc-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py
deleted file mode 100644
index 02d856c28016b3a1c020fed483afe0aa797bf50f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import logging
-import re
-import torch.utils.data
-from fairseq.data import FairseqDataset
-
-logger = logging.getLogger(__name__)
-
-
-class OFADataset(FairseqDataset):
- def __init__(self, split, dataset, bpe, src_dict, tgt_dict):
- self.split = split
- self.dataset = dataset
- self.bpe = bpe
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- self.bos = src_dict.bos()
- self.eos = src_dict.eos()
- self.pad = src_dict.pad()
- self.bos_item = torch.LongTensor([self.bos])
- self.eos_item = torch.LongTensor([self.eos])
-
- def __len__(self):
- return len(self.dataset)
-
- def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True):
- s = self.tgt_dict.encode_line(
- line=self.bpe.encode(text) if use_bpe else text,
- add_if_not_exist=False,
- append_eos=False
- ).long()
- if length is not None:
- s = s[:length]
- if append_bos:
- s = torch.cat([self.bos_item, s])
- if append_eos:
- s = torch.cat([s, self.eos_item])
- return s
-
- def pre_question(self, question, max_ques_words):
- question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ')
-
- question = re.sub(
- r"\s{2,}",
- ' ',
- question,
- )
- question = question.rstrip('\n')
- question = question.strip(' ')
-
- # truncate question
- question_words = question.split(' ')
- if len(question_words) > max_ques_words:
- question = ' '.join(question_words[:max_ques_words])
-
- return question
-
- def pre_caption(self, caption, max_words):
-        caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('<person>', 'person')
-
- caption = re.sub(
- r"\s{2,}",
- ' ',
- caption,
- )
- caption = caption.rstrip('\n')
- caption = caption.strip(' ')
-
- # truncate caption
- caption_words = caption.split(' ')
- if len(caption_words) > max_words:
- caption = ' '.join(caption_words[:max_words])
-
- return caption
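To see what `pre_caption` above actually does to a string without constructing an `OFADataset`, here is a standalone sketch of the same normalization steps (the sample sentence is illustrative):

```python
# Standalone sketch of the caption normalization performed by pre_caption above.
import re

def normalize_caption(caption, max_words=16):
    caption = (caption.lower().lstrip(",.!?*#:;~")
               .replace('-', ' ').replace('/', ' ').replace('<person>', 'person'))
    caption = re.sub(r"\s{2,}", ' ', caption)   # collapse repeated whitespace
    caption = caption.rstrip('\n').strip(' ')
    words = caption.split(' ')
    return ' '.join(words[:max_words])          # truncate to max_words

print(normalize_caption("A <person> riding a horse -- near the sea/shore!"))
# -> "a person riding a horse near the sea shore!"
```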
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py
deleted file mode 100644
index 6fff4faf44a92d42504559ecea8ec1047d2e5f14..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import csv
-import io
-import logging
-import os
-import os.path as op
-import sys
-
-from dump_hubert_feature import HubertFeatureReader
-from feature_utils import get_shard_range, dump_feature
-from fairseq.data.audio.audio_utils import get_waveform
-from fairseq.data.audio.speech_to_text_dataset import (
- read_from_uncompressed_zip,
-)
-
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("dump_hubert_feature_s2t")
-
-
-class HubertFeatureReaderS2T(HubertFeatureReader):
- def read_audio(self, path, ref_len=None):
- path, *extra = path.split(":")
- assert len(extra) == 2
- assert path.endswith(".zip")
-
- data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1]))
- f = io.BytesIO(data)
- wav, sr = get_waveform(f)
- assert sr == self.task.cfg.sample_rate, sr
- if wav.ndim == 2:
- wav = wav.mean(-1)
- assert wav.ndim == 1, wav.ndim
- if ref_len is not None and abs(ref_len - len(wav)) > 160:
- logging.warning(f"ref {ref_len} != read {len(wav)} ({path})")
- return wav
-
-
-def get_path_iterator(root, tsv, nshard, rank):
- with open(tsv) as f:
- reader = csv.DictReader(
- f,
- delimiter="\t",
- quotechar=None,
- doublequote=False,
- lineterminator="\n",
- quoting=csv.QUOTE_NONE,
- )
- subpaths = [op.join(root, e["audio"]) for e in reader]
- start, end = get_shard_range(len(subpaths), nshard, rank)
- subpaths = subpaths[start:end]
- def iterate():
- for subpath in subpaths:
- yield op.join(root, subpath), None
- return iterate, len(subpaths)
-
-
-def main(
- root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk
-):
- reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk)
- generator, num = get_path_iterator(root, tsv_path, nshard, rank)
- dump_feature(reader, generator, num, split, nshard, rank, feat_dir)
-
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("root")
- parser.add_argument("tsv_path")
- parser.add_argument("ckpt_path")
- parser.add_argument("layer", type=int)
- parser.add_argument("nshard", type=int)
- parser.add_argument("rank", type=int)
- parser.add_argument("feat_dir")
- parser.add_argument("split")
- parser.add_argument("--max_chunk", type=int, default=1600000)
- args = parser.parse_args()
- logger.info(args)
-
- main(**vars(args))
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py
deleted file mode 100644
index 4e13b38a5d3fb44dd3969e6afcb8f202274ee3b7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-import csv
-import tempfile
-from collections import defaultdict
-from pathlib import Path
-
-import torchaudio
-try:
- import webrtcvad
-except ImportError:
- raise ImportError("Please install py-webrtcvad: pip install webrtcvad")
-import pandas as pd
-from tqdm import tqdm
-
-from examples.speech_synthesis.preprocessing.denoiser.pretrained import master64
-import examples.speech_synthesis.preprocessing.denoiser.utils as utils
-from examples.speech_synthesis.preprocessing.vad import (
- frame_generator, vad_collector, read_wave, write_wave, FS_MS, THRESHOLD,
- SCALE
-)
-from examples.speech_to_text.data_utils import save_df_to_tsv
-
-
-log = logging.getLogger(__name__)
-
-PATHS = ["after_denoise", "after_vad"]
-MIN_T = 0.05
-
-
-def generate_tmp_filename(extension="txt"):
- return tempfile._get_default_tempdir() + "/" + \
- next(tempfile._get_candidate_names()) + "." + extension
-
-
-def convert_sr(inpath, sr, output_path=None):
- if not output_path:
- output_path = generate_tmp_filename("wav")
- cmd = f"sox {inpath} -r {sr} {output_path}"
- os.system(cmd)
- return output_path
-
-
-def apply_vad(vad, inpath):
- audio, sample_rate = read_wave(inpath)
- frames = frame_generator(FS_MS, audio, sample_rate)
- frames = list(frames)
- segments = vad_collector(sample_rate, FS_MS, 300, vad, frames)
- merge_segments = list()
- timestamp_start = 0.0
- timestamp_end = 0.0
- # removing start, end, and long sequences of sils
- for i, segment in enumerate(segments):
- merge_segments.append(segment[0])
- if i and timestamp_start:
- sil_duration = segment[1] - timestamp_end
- if sil_duration > THRESHOLD:
- merge_segments.append(int(THRESHOLD / SCALE) * (b'\x00'))
- else:
- merge_segments.append(int((sil_duration / SCALE)) * (b'\x00'))
- timestamp_start = segment[1]
- timestamp_end = segment[2]
- segment = b''.join(merge_segments)
- return segment, sample_rate
-
-
-def write(wav, filename, sr=16_000):
- # Normalize audio if it prevents clipping
- wav = wav / max(wav.abs().max().item(), 1)
- torchaudio.save(filename, wav.cpu(), sr, encoding="PCM_S",
- bits_per_sample=16)
-
-
-def process(args):
- # making sure we are requested either denoise or vad
- if not args.denoise and not args.vad:
- log.error("No denoise or vad is requested.")
- return
-
- log.info("Creating out directories...")
- if args.denoise:
- out_denoise = Path(args.output_dir).absolute().joinpath(PATHS[0])
- out_denoise.mkdir(parents=True, exist_ok=True)
- if args.vad:
- out_vad = Path(args.output_dir).absolute().joinpath(PATHS[1])
- out_vad.mkdir(parents=True, exist_ok=True)
-
- log.info("Loading pre-trained speech enhancement model...")
- model = master64().to(args.device)
-
- log.info("Building the VAD model...")
- vad = webrtcvad.Vad(int(args.vad_agg_level))
-
- # preparing the output dict
- output_dict = defaultdict(list)
-
- log.info(f"Parsing input manifest: {args.audio_manifest}")
- with open(args.audio_manifest, "r") as f:
- manifest_dict = csv.DictReader(f, delimiter="\t")
- for row in tqdm(manifest_dict):
- filename = str(row["audio"])
-
- final_output = filename
- keep_sample = True
- n_frames = row["n_frames"]
- snr = -1
- if args.denoise:
- output_path_denoise = out_denoise.joinpath(Path(filename).name)
-                # convert to 16 kHz in case we use a different sampling rate
- tmp_path = convert_sr(final_output, 16000)
-
- # loading audio file and generating the enhanced version
- out, sr = torchaudio.load(tmp_path)
- out = out.to(args.device)
- estimate = model(out)
- estimate = (1 - args.dry_wet) * estimate + args.dry_wet * out
- write(estimate[0], str(output_path_denoise), sr)
-
- snr = utils.cal_snr(out, estimate)
- snr = snr.cpu().detach().numpy()[0][0]
- final_output = str(output_path_denoise)
-
- if args.vad:
- output_path_vad = out_vad.joinpath(Path(filename).name)
- sr = torchaudio.info(final_output).sample_rate
- if sr in [16000, 32000, 48000]:
- tmp_path = final_output
- elif sr < 16000:
- tmp_path = convert_sr(final_output, 16000)
- elif sr < 32000:
- tmp_path = convert_sr(final_output, 32000)
- else:
- tmp_path = convert_sr(final_output, 48000)
- # apply VAD
- segment, sample_rate = apply_vad(vad, tmp_path)
- if len(segment) < sample_rate * MIN_T:
- keep_sample = False
- print((
- f"WARNING: skip {filename} because it is too short "
- f"after VAD ({len(segment) / sample_rate} < {MIN_T})"
- ))
- else:
- if sample_rate != sr:
- tmp_path = generate_tmp_filename("wav")
- write_wave(tmp_path, segment, sample_rate)
- convert_sr(tmp_path, sr,
- output_path=str(output_path_vad))
- else:
- write_wave(str(output_path_vad), segment, sample_rate)
- final_output = str(output_path_vad)
- segment, _ = torchaudio.load(final_output)
- n_frames = segment.size(1)
-
- if keep_sample:
- output_dict["id"].append(row["id"])
- output_dict["audio"].append(final_output)
- output_dict["n_frames"].append(n_frames)
- output_dict["tgt_text"].append(row["tgt_text"])
- output_dict["speaker"].append(row["speaker"])
- output_dict["src_text"].append(row["src_text"])
- output_dict["snr"].append(snr)
-
- out_tsv_path = Path(args.output_dir) / Path(args.audio_manifest).name
- log.info(f"Saving manifest to {out_tsv_path.as_posix()}")
- save_df_to_tsv(pd.DataFrame.from_dict(output_dict), out_tsv_path)
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--audio-manifest", "-i", required=True,
- type=str, help="path to the input manifest.")
- parser.add_argument(
- "--output-dir", "-o", required=True, type=str,
- help="path to the output dir. it will contain files after denoising and"
- " vad"
- )
- parser.add_argument("--vad-agg-level", "-a", type=int, default=2,
-                        help="the aggressiveness level of the vad [0-3].")
- parser.add_argument(
- "--dry-wet", "-dw", type=float, default=0.01,
- help="the level of linear interpolation between noisy and enhanced "
- "files."
- )
- parser.add_argument(
- "--device", "-d", type=str, default="cpu",
- help="the device to be used for the speech enhancement model: "
- "cpu | cuda."
- )
- parser.add_argument("--denoise", action="store_true",
- help="apply a denoising")
- parser.add_argument("--vad", action="store_true", help="apply a VAD")
- args = parser.parse_args()
-
- process(args)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py
deleted file mode 100644
index b7426b249bbbabd8e20bbe8ca5449809efdf85fc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-
-from fairseq import file_utils
-from fairseq.data.encoders import register_bpe
-from fairseq.dataclass import FairseqDataclass
-
-from .gpt2_bpe_utils import get_encoder
-
-
-DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json"
-DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe"
-
-
-@dataclass
-class GPT2BPEConfig(FairseqDataclass):
- gpt2_encoder_json: str = field(
- default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"}
- )
- gpt2_vocab_bpe: str = field(
- default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"}
- )
-
-
-@register_bpe("gpt2", dataclass=GPT2BPEConfig)
-class GPT2BPE(object):
- def __init__(self, cfg):
- encoder_json = file_utils.cached_path(cfg.gpt2_encoder_json)
- vocab_bpe = file_utils.cached_path(cfg.gpt2_vocab_bpe)
- self.bpe = get_encoder(encoder_json, vocab_bpe)
-
- def encode(self, x: str) -> str:
- return " ".join(map(str, self.bpe.encode(x)))
-
- def decode(self, x: str) -> str:
- return self.bpe.decode(
-            [int(tok) if tok not in {"<unk>", "<mask>"} and not tok.startswith('<') else tok for tok in x.split()]
- )
-
- def is_beginning_of_word(self, x: str) -> bool:
- return self.decode(x).startswith(" ")
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py
deleted file mode 100644
index 3d3b993164dc3962df48bacff26714328e843e80..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import numpy as np
-from fairseq.data import BaseWrapperDataset, plasma_utils
-
-
-logger = logging.getLogger(__name__)
-
-
-class ResamplingDataset(BaseWrapperDataset):
- """Randomly samples from a given dataset at each epoch.
-
- Sampling is done with or without replacement, depending on the "replace"
- parameter.
-
- Optionally, the epoch size can be rescaled. This is potentially desirable
- to increase per-epoch coverage of the base dataset (since sampling with
- replacement means that many items in the dataset will be left out). In the
- case of sampling without replacement, size_ratio should be strictly less
- than 1.
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset on which to sample.
- weights (List[float]): list of probability weights
- (default: None, which corresponds to uniform sampling).
- replace (bool): sampling mode; True for "with replacement", or False
- for "without replacement" (default: True)
- size_ratio (float): the ratio to subsample to; must be positive
- (default: 1.0).
- batch_by_size (bool): whether or not to batch by sequence length
- (default: True).
- seed (int): RNG seed to use (default: 0).
- epoch (int): starting epoch number (default: 1).
- """
-
- def __init__(
- self,
- dataset,
- weights=None,
- replace=True,
- size_ratio=1.0,
- batch_by_size=True,
- seed=0,
- epoch=1,
- ):
- super().__init__(dataset)
-
- if weights is None:
- self.weights = None
-
- else:
- assert len(weights) == len(dataset)
- weights_arr = np.array(weights, dtype=np.float64)
- weights_arr /= weights_arr.sum()
- self.weights = plasma_utils.PlasmaArray(weights_arr)
-
- self.replace = replace
-
- assert size_ratio > 0.0
- if not self.replace:
- assert size_ratio < 1.0
- self.size_ratio = float(size_ratio)
- self.actual_size = np.ceil(len(dataset) * self.size_ratio).astype(int)
-
- self.batch_by_size = batch_by_size
- self.seed = seed
-
- self._cur_epoch = None
- self._cur_indices = None
-
- self.set_epoch(epoch)
-
- def __getitem__(self, index):
- return self.dataset[self._cur_indices.array[index]]
-
- def __len__(self):
- return self.actual_size
-
- @property
- def sizes(self):
- if isinstance(self.dataset.sizes, list):
- return [s[self._cur_indices.array] for s in self.dataset.sizes]
- return self.dataset.sizes[self._cur_indices.array]
-
- def num_tokens(self, index):
- return self.dataset.num_tokens(self._cur_indices.array[index])
-
- def size(self, index):
- return self.dataset.size(self._cur_indices.array[index])
-
- def ordered_indices(self):
- if self.batch_by_size:
- order = [
- np.arange(len(self)),
- self.sizes,
- ] # No need to handle `self.shuffle == True`
- return np.lexsort(order)
- else:
- return np.arange(len(self))
-
- def prefetch(self, indices):
- self.dataset.prefetch(self._cur_indices.array[indices])
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return False
-
- def set_epoch(self, epoch):
- logger.debug("ResamplingDataset.set_epoch: {}".format(epoch))
- super().set_epoch(epoch)
-
- if epoch == self._cur_epoch:
- return
-
- self._cur_epoch = epoch
-
- # Generate a weighted sample of indices as a function of the
- # random seed and the current epoch.
-
- rng = np.random.RandomState(
- [
- 42, # magic number
- self.seed % (2 ** 32), # global seed
- self._cur_epoch, # epoch index
- ]
- )
- self._cur_indices = plasma_utils.PlasmaArray(
- rng.choice(
- len(self.dataset),
- self.actual_size,
- replace=self.replace,
- p=(None if self.weights is None else self.weights.array),
- )
- )
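The epoch-seeded sampling in `set_epoch` above can be reproduced with plain numpy. The helper below is an illustrative sketch, not part of fairseq; it shows that the same `(seed, epoch)` pair always yields the same subsample while a new epoch reshuffles it.

```python
# Standalone sketch of the epoch-seeded resampling used by ResamplingDataset.
import numpy as np

def resample_indices(dataset_len, size_ratio, seed, epoch, replace=True):
    rng = np.random.RandomState([42, seed % (2 ** 32), epoch])   # same seeding scheme as above
    actual_size = int(np.ceil(dataset_len * size_ratio))
    return rng.choice(dataset_len, actual_size, replace=replace)

print(resample_indices(10, 0.5, seed=0, epoch=1))  # 5 indices, reproducible
print(resample_indices(10, 0.5, seed=0, epoch=2))  # a different draw for epoch 2
```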
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py
deleted file mode 100644
index 143834f3d036780eb6844c82f0c6f2d10cfe2f61..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .utils import quantize_model_ # NOQA
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py
deleted file mode 100644
index b05caea26793bf5112a7abc29d76225f578f3ebe..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import OrderedDict
-
-import numpy as np
-
-from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators
-
-
-class MultiItr(object):
- def __init__(self, itr):
- self.itr = itr
- self._counts = [0 for x in itr]
-
- def __len__(self):
- return sum(len(itr) for itr in self.itr)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)]
- idx = ratios.index(min(ratios))
- self._counts[idx] += 1
- return next(self.itr[idx])
-
-
-class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating):
- """A wrapper around multiple epoch batch iterators."""
-
- def __init__(
- self,
- dataset,
- batch_sampler,
- seed=1,
- num_shards=1,
- shard_id=0,
- num_workers=0,
- epoch=1,
- ):
-
- assert isinstance(dataset, OrderedDict)
- assert len(dataset)
- assert isinstance(dataset[next(iter(dataset))], FairseqDataset)
-
- self.iterators = []
-
- self.epoch = epoch
- for key, dt in dataset.items():
- epoch_iter = iterators.EpochBatchIterator(
- dataset=dt,
- collate_fn=dt.collater,
- batch_sampler=batch_sampler[key],
- seed=seed,
- num_shards=num_shards,
- shard_id=shard_id,
- num_workers=0,
- epoch=epoch,
- )
- self.iterators.append(epoch_iter)
-
- def __len__(self):
- return sum(len(itr) for itr in self.iterators)
-
- def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False):
- # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s.
- return MultiItr(
- [
- itr.next_epoch_itr(
- shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus
- )
- for itr in self.iterators
- ]
- )
-
- def end_of_epoch(self):
- return all(itr.end_of_epoch() for itr in self.iterators)
-
- @property
- def next_epoch_idx(self):
- """Return the epoch index after *next_epoch_itr* is called."""
-
- epochs = [itr.next_epoch_idx for itr in self.iterators]
- self.epoch = epochs[0]
- assert all(epoch == self.epoch for epoch in epochs)
-
- return self.epoch
-
- @property
- def iterations_in_epoch(self):
- return sum(itr.iterations_in_epoch for itr in self.iterators)
-
- def state_dict(self):
- return {
- "iterators": [it.state_dict() for it in self.iterators],
- "epoch": self.epoch,
- }
-
- def load_state_dict(self, state_dict):
- self.epoch = state_dict["epoch"]
- for it, d in zip(self.iterators, state_dict["iterators"]):
- it.load_state_dict(d)
-
-
-class MultitaskDatasetWrapper(BaseWrapperDataset):
- """A wrapper for a multitask dataset."""
-
- def __init__(self, dataset, target_language_id, sample=1.0, name=""):
- super().__init__(dataset)
- self.target_language_id = target_language_id
- self.sample = sample
- self.name = name
-
- def collater(self, *args, **kwargs):
- ans = self.dataset.collater(*args, **kwargs)
- if "net_input" in ans:
- ans["net_input"]["target_language_id"] = self.target_language_id
- ans["net_input"]["dataset_name"] = self.name
- return ans
-
- def num_tokens(self, *args, **kwargs):
- return self.dataset.num_tokens(*args, **kwargs)
-
- def ordered_indices(self, *args, **kwargs):
- indices = self.dataset.ordered_indices(*args, **kwargs)
- # Hacky solution for sampling
- size = int(self.sample * indices.shape[0])
-
- return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size]))
-
- def size(self, index: int):
- return self.dataset.size(index)
-
- @property
- def supports_prefetch(self):
- """Whether this dataset supports prefetching."""
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.dataset.prefetch(indices)
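The `MultiItr` class removed above interleaves several epoch iterators by always advancing the one that is proportionally least consumed. A minimal standalone sketch of that selection rule, with invented toy lists standing in for real fairseq iterators:

# Illustration only: plain lists stand in for epoch batch iterators.
itrs = [iter([1, 2, 3, 4]), iter(["x", "y"])]
lengths = [4, 2]
counts = [0, 0]
order = []
for _ in range(sum(lengths)):
    # Advance the iterator whose consumed fraction is currently smallest.
    ratios = [c / n for c, n in zip(counts, lengths)]
    i = ratios.index(min(ratios))
    order.append(next(itrs[i]))
    counts[i] += 1
print(order)  # [1, 'x', 2, 3, 'y', 4] -- both sources finish at the same relative pace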
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py
deleted file mode 100644
index 89f1aef4f6328d25425e0bcabb42dfffd2ed35f0..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .rerank_options import * # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py
deleted file mode 100644
index 347b8118daa2818af5e0230a793f2fa8fcd63b3a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py
+++ /dev/null
@@ -1,459 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List, Optional
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.modules import LayerNorm, MultiheadAttention
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from fairseq.modules.quant_noise import quant_noise
-from torch import Tensor
-from fairseq.models.transformer import (
- TransformerConfig,
-)
-
-
-class TransformerEncoderLayerBase(nn.Module):
- """Encoder layer block.
-
- In the original paper each operation (multi-head attention or FFN) is
- postprocessed with: `dropout -> add residual -> layernorm`. In the
- tensor2tensor code they suggest that learning is more robust when
- preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *cfg.encoder.normalize_before* to ``True``.
-
- Args:
- cfg (TransformerConfig): transformer configuration for this layer
- """
-
- def __init__(self, cfg):
- super().__init__()
- self.cfg = cfg
- self.embed_dim = cfg.encoder.embed_dim
- self.quant_noise = cfg.quant_noise.pq
- self.quant_noise_block_size = cfg.quant_noise.pq_block_size
- self.self_attn = self.build_self_attention(self.embed_dim, cfg)
- self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export)
- self.dropout_module = FairseqDropout(
- cfg.dropout, module_name=self.__class__.__name__
- )
- self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn)
- activation_dropout_p = cfg.activation_dropout
- if activation_dropout_p == 0:
- # for backwards compatibility with models that use cfg.relu_dropout
- activation_dropout_p = cfg.relu_dropout or 0
- self.activation_dropout_module = FairseqDropout(
- float(activation_dropout_p), module_name=self.__class__.__name__
- )
- self.normalize_before = cfg.encoder.normalize_before
- self.fc1 = self.build_fc1(
- self.embed_dim,
- cfg.encoder.ffn_embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
- self.fc2 = self.build_fc2(
- cfg.encoder.ffn_embed_dim,
- self.embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
-
- self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export)
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(
- nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
- )
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(
- nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
- )
-
- def build_self_attention(self, embed_dim, cfg):
- return MultiheadAttention(
- embed_dim,
- cfg.encoder.attention_heads,
- dropout=cfg.attention_dropout,
- self_attention=True,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- )
-
- def residual_connection(self, x, residual):
- return residual + x
-
- def upgrade_state_dict_named(self, state_dict, name):
- """
- Rename layer norm states from `...layer_norms.0.weight` to
- `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to
- `...final_layer_norm.weight`
- """
- layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"}
- for old, new in layer_norm_map.items():
- for m in ("weight", "bias"):
- k = "{}.layer_norms.{}.{}".format(name, old, m)
- if k in state_dict:
- state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k]
- del state_dict[k]
-
- def forward(
- self,
- x,
- encoder_padding_mask: Optional[Tensor],
- attn_mask: Optional[Tensor] = None,
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, seq_len)` where padding elements are indicated by ``1``.
- attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`,
- where `tgt_len` is the length of output and `src_len` is the
- length of input, though here both are equal to `seq_len`.
- `attn_mask[tgt_i, src_j] = 1` means that when calculating the
- embedding for `tgt_i`, we exclude (mask out) `src_j`. This is
- useful for strided self-attention.
-
- Returns:
- encoded output of shape `(seq_len, batch, embed_dim)`
- """
- # anything in original attn_mask = 1, becomes -1e8
- # anything in original attn_mask = 0, becomes 0
- # Note that we cannot use -inf here, because at some edge cases,
- # the attention weight (before softmax) for some padded element in query
- # will become -inf, which results in NaN in model parameters
- if attn_mask is not None:
- attn_mask = attn_mask.masked_fill(
- attn_mask.to(torch.bool),
- -1e8 if x.dtype == torch.float32 else -1e4
- )
-
- residual = x
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
- x, _ = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=encoder_padding_mask,
- need_weights=False,
- attn_mask=attn_mask,
- )
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.final_layer_norm(x)
- return x
-
-
-# backward compatible with the legacy argparse format
-class TransformerEncoderLayer(TransformerEncoderLayerBase):
- def __init__(self, args):
- super().__init__(TransformerConfig.from_namespace(args))
- self.args = args
-
- def build_self_attention(self, embed_dim, args):
- return super().build_self_attention(
- embed_dim, TransformerConfig.from_namespace(args)
- )
-
-
-class TransformerDecoderLayerBase(nn.Module):
- """Decoder layer block.
-
- In the original paper each operation (multi-head attention, encoder
- attention or FFN) is postprocessed with: `dropout -> add residual ->
- layernorm`. In the tensor2tensor code they suggest that learning is more
- robust when preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *cfg.decoder.normalize_before* to ``True``.
-
- Args:
- cfg (TransformerConfig): transformer configuration for this layer
- no_encoder_attn (bool, optional): whether to attend to encoder outputs
- (default: False).
- """
-
- def __init__(
- self, cfg, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
- ):
- super().__init__()
- self.embed_dim = cfg.decoder.embed_dim
- self.dropout_module = FairseqDropout(
- cfg.dropout, module_name=self.__class__.__name__
- )
- self.quant_noise = cfg.quant_noise.pq
- self.quant_noise_block_size = cfg.quant_noise.pq_block_size
-
- self.cross_self_attention = cfg.cross_self_attention
-
- self.self_attn = self.build_self_attention(
- self.embed_dim,
- cfg,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
-
- self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn)
- activation_dropout_p = cfg.activation_dropout
- if activation_dropout_p == 0:
- # for backwards compatibility with models that use cfg.relu_dropout
- activation_dropout_p = cfg.relu_dropout or 0
- self.activation_dropout_module = FairseqDropout(
- float(activation_dropout_p), module_name=self.__class__.__name__
- )
- self.normalize_before = cfg.decoder.normalize_before
-
- self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export)
-
- if no_encoder_attn:
- self.encoder_attn = None
- self.encoder_attn_layer_norm = None
- else:
- self.encoder_attn = self.build_encoder_attention(self.embed_dim, cfg)
- self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export)
-
- self.fc1 = self.build_fc1(
- self.embed_dim,
- cfg.decoder.ffn_embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
- self.fc2 = self.build_fc2(
- cfg.decoder.ffn_embed_dim,
- self.embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
-
- self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export)
- self.need_attn = True
-
- self.onnx_trace = False
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_self_attention(
- self, embed_dim, cfg, add_bias_kv=False, add_zero_attn=False
- ):
- return MultiheadAttention(
- embed_dim,
- cfg.decoder.attention_heads,
- dropout=cfg.attention_dropout,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- self_attention=not cfg.cross_self_attention,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- )
-
- def build_encoder_attention(self, embed_dim, cfg):
- return MultiheadAttention(
- embed_dim,
- cfg.decoder.attention_heads,
- kdim=cfg.encoder.embed_dim,
- vdim=cfg.encoder.embed_dim,
- dropout=cfg.attention_dropout,
- encoder_decoder_attention=True,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- )
-
- def prepare_for_onnx_export_(self):
- self.onnx_trace = True
-
- def residual_connection(self, x, residual):
- return residual + x
-
- def forward(
- self,
- x,
- encoder_out: Optional[torch.Tensor] = None,
- encoder_padding_mask: Optional[torch.Tensor] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- prev_self_attn_state: Optional[List[torch.Tensor]] = None,
- prev_attn_state: Optional[List[torch.Tensor]] = None,
- self_attn_mask: Optional[torch.Tensor] = None,
- self_attn_padding_mask: Optional[torch.Tensor] = None,
- need_attn: bool = False,
- need_head_weights: bool = False,
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor, optional): binary
- ByteTensor of shape `(batch, src_len)` where padding
- elements are indicated by ``1``.
- need_attn (bool, optional): return attention weights
- need_head_weights (bool, optional): return attention weights
- for each head (default: return average over heads).
-
- Returns:
- encoded output of shape `(seq_len, batch, embed_dim)`
- """
- if need_head_weights:
- need_attn = True
-
- residual = x
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
- if prev_self_attn_state is not None:
- prev_key, prev_value = prev_self_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_self_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_self_attn_state[2]
- assert incremental_state is not None
- self.self_attn._set_input_buffer(incremental_state, saved_state)
- _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state)
- if self.cross_self_attention and not (
- incremental_state is not None
- and _self_attn_input_buffer is not None
- and "prev_key" in _self_attn_input_buffer
- ):
- if self_attn_mask is not None:
- assert encoder_out is not None
- self_attn_mask = torch.cat(
- (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1
- )
- if self_attn_padding_mask is not None:
- if encoder_padding_mask is None:
- assert encoder_out is not None
- encoder_padding_mask = self_attn_padding_mask.new_zeros(
- encoder_out.size(1), encoder_out.size(0)
- )
- self_attn_padding_mask = torch.cat(
- (encoder_padding_mask, self_attn_padding_mask), dim=1
- )
- assert encoder_out is not None
- y = torch.cat((encoder_out, x), dim=0)
- else:
- y = x
-
- x, attn = self.self_attn(
- query=x,
- key=y,
- value=y,
- key_padding_mask=self_attn_padding_mask,
- incremental_state=incremental_state,
- need_weights=False,
- attn_mask=self_attn_mask,
- )
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- if self.encoder_attn is not None and encoder_out is not None:
- residual = x
- if self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
- if prev_attn_state is not None:
- prev_key, prev_value = prev_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_attn_state[2]
- assert incremental_state is not None
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
-
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=need_attn or (not self.training and self.need_attn),
- need_head_weights=need_head_weights,
- )
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
-
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.final_layer_norm(x)
- if self.onnx_trace and incremental_state is not None:
- saved_state = self.self_attn._get_input_buffer(incremental_state)
- assert saved_state is not None
- if self_attn_padding_mask is not None:
- self_attn_state = [
- saved_state["prev_key"],
- saved_state["prev_value"],
- saved_state["prev_key_padding_mask"],
- ]
- else:
- self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]]
- return x, attn, self_attn_state
- return x, attn, None
-
- def make_generation_fast_(self, need_attn: bool = False, **kwargs):
- self.need_attn = need_attn
-
-
-# backward compatible with the legacy argparse format
-class TransformerDecoderLayer(TransformerDecoderLayerBase):
- def __init__(
- self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
- ):
- super().__init__(
- TransformerConfig.from_namespace(args),
- no_encoder_attn=no_encoder_attn,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
- self.args = args
-
- def build_self_attention(
- self, embed_dim, args, add_bias_kv=False, add_zero_attn=False
- ):
- return super().build_self_attention(
- embed_dim,
- TransformerConfig.from_namespace(args),
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
-
- def build_encoder_attention(self, embed_dim, args):
- return super().build_encoder_attention(
- embed_dim,
- TransformerConfig.from_namespace(args),
- )
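Both layer classes deleted above switch between post-norm (the paper's `dropout -> add residual -> layernorm`) and tensor2tensor-style pre-norm via `normalize_before`. A small self-contained sketch of that residual pattern; `residual_block` is an illustrative helper, not fairseq API, and `sublayer` stands in for self-attention or the FFN:

import torch
import torch.nn as nn

def residual_block(x, sublayer, layer_norm, normalize_before):
    # Pre-norm: normalize, run the sublayer, then add the residual.
    # Post-norm: run the sublayer, add the residual, then normalize.
    residual = x
    if normalize_before:
        x = layer_norm(x)
    x = sublayer(x)
    x = residual + x
    if not normalize_before:
        x = layer_norm(x)
    return x

ln = nn.LayerNorm(8)
ffn = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
out = residual_block(torch.randn(5, 2, 8), ffn, ln, normalize_before=True)
print(out.shape)  # torch.Size([5, 2, 8])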
diff --git a/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py b/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py
deleted file mode 100644
index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
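The deleted `text_to_sequence` maps each character of the cleaned text to its index in `symbols`, silently skipping anything unknown. A toy run of that lookup with a made-up symbol set and no cleaners applied:

symbols = list("abc ")  # invented symbol set for illustration
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
clean_text = "abc cab!"  # '!' is not in the symbol set, so it is skipped
sequence = [_symbol_to_id[ch] for ch in clean_text if ch in _symbol_to_id]
print(sequence)  # [0, 1, 2, 3, 2, 0, 1]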
diff --git a/spaces/Omnibus/summarize-long-text/summarize.py b/spaces/Omnibus/summarize-long-text/summarize.py
deleted file mode 100644
index 35ee78e06538ac6826887be8dc0d5c9694760aaa..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/summarize-long-text/summarize.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import logging
-import pprint as pp
-
-from utils import validate_pytorch2
-
-logging.basicConfig(level=logging.INFO)
-import torch
-from tqdm.auto import tqdm
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
-
-def load_model_and_tokenizer(model_name: str) -> tuple:
- """
- load_model_and_tokenizer - load a model and tokenizer from a model name/ID on the hub
- :param str model_name: the model name/ID on the hub
- :return tuple: a tuple containing the model and tokenizer
- """
- logger = logging.getLogger(__name__)
- device = "cuda" if torch.cuda.is_available() else "cpu"
- model = AutoModelForSeq2SeqLM.from_pretrained(
- model_name,
- ).to(device)
- model = model.eval()
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- logger.info(f"Loaded model {model_name} to {device}")
-
- if validate_pytorch2():
- try:
- logger.info("Compiling model with Torch 2.0")
- model = torch.compile(model)
- except Exception as e:
- logger.warning(f"Could not compile model with Torch 2.0: {e}")
- else:
- logger.info("Torch 2.0 not detected, skipping compilation")
-
- return model, tokenizer
-
-
-def summarize_and_score(ids, mask, model, tokenizer, **kwargs):
- """
- summarize_and_score - given a batch of ids and a mask, return a summary and a score for the summary
-
- Args:
- ids (): the batch of ids
- mask (): the attention mask for the batch
- model (): the model to use for summarization
- tokenizer (): the tokenizer to use for summarization
-
- Returns:
- str: the summary of the batch
- """
-
- ids = ids[None, :]
- mask = mask[None, :]
-
- input_ids = ids.to("cuda") if torch.cuda.is_available() else ids
- attention_mask = mask.to("cuda") if torch.cuda.is_available() else mask
-
- global_attention_mask = torch.zeros_like(attention_mask)
- # put global attention on token
- global_attention_mask[:, 0] = 1
-
- summary_pred_ids = model.generate(
- input_ids,
- attention_mask=attention_mask,
- global_attention_mask=global_attention_mask,
- output_scores=True,
- return_dict_in_generate=True,
- **kwargs,
- )
- summary = tokenizer.batch_decode(
- summary_pred_ids.sequences,
- skip_special_tokens=True,
- remove_invalid_values=True,
- )
- score = round(summary_pred_ids.sequences_scores.cpu().numpy()[0], 4)
-
- return summary, score
-
-
-def summarize_via_tokenbatches(
- input_text: str,
- model,
- tokenizer,
- batch_length=2048,
- batch_stride=16,
- min_batch_length: int = 512,
- **kwargs,
-):
- """
- summarize_via_tokenbatches - a function that takes a string and returns a summary
-
- Args:
- input_text (str): the text to summarize
- model (): the model to use for summarization
- tokenizer (): the tokenizer to use for summarization
- batch_length (int, optional): the length of each batch. Defaults to 2048.
- batch_stride (int, optional): the stride of each batch. Defaults to 16. The stride is the number of tokens that overlap between batches.
-
- Returns:
- str: the summary
- """
- # set up the module logger
- logger = logging.getLogger(__name__)
- # log all input parameters
- if batch_length < min_batch_length:
- logger.warning(
- f"batch_length must be at least {min_batch_length}. Setting batch_length to {min_batch_length}"
- )
- batch_length = min_batch_length
-
- logger.info(f"input parameters:\n{pp.pformat(kwargs)}")
- logger.info(f"batch_length: {batch_length}, batch_stride: {batch_stride}")
- encoded_input = tokenizer(
- input_text,
- padding="max_length",
- truncation=True,
- max_length=batch_length,
- stride=batch_stride,
- return_overflowing_tokens=True,
- add_special_tokens=False,
- return_tensors="pt",
- )
-
- in_id_arr, att_arr = encoded_input.input_ids, encoded_input.attention_mask
- gen_summaries = []
-
- pbar = tqdm(total=len(in_id_arr), desc="Summarizing")
-
- for _id, _mask in zip(in_id_arr, att_arr):
- result, score = summarize_and_score(
- ids=_id,
- mask=_mask,
- model=model,
- tokenizer=tokenizer,
- **kwargs,
- )
- score = round(float(score), 4)
- _sum = {
- "input_tokens": _id,
- "summary": result,
- "summary_score": score,
- }
- gen_summaries.append(_sum)
- logger.info(f"Score {score} for summary:\n\t{result}")
- pbar.update()
-
- pbar.close()
- logger.debug(f"Generated summaries:\n{pp.pformat(gen_summaries)}")
- return gen_summaries
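`summarize_via_tokenbatches` above leans on the tokenizer's `max_length`, `stride`, and `return_overflowing_tokens` options to split long inputs into overlapping token windows. A model-free sketch of that windowing, with plain integers standing in for token ids:

def chunk_tokens(token_ids, batch_length=8, batch_stride=2):
    # Consecutive windows overlap by `batch_stride` tokens.
    step = batch_length - batch_stride
    return [token_ids[i:i + batch_length] for i in range(0, len(token_ids), step)]

ids = list(range(20))
for window in chunk_tokens(ids):
    print(window)  # windows of up to 8 ids, each overlapping the previous one by 2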
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py
deleted file mode 100644
index f5ba4297567d650f147eebeed361e9d62fab899d..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py
+++ /dev/null
@@ -1,330 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import collections
-from dataclasses import dataclass
-from typing import Callable, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.structures import Boxes, Instances, ROIMasks
-from detectron2.utils.registry import _convert_target_to_string, locate
-
-from .torchscript_patch import patch_builtin_len
-
-
-@dataclass
-class Schema:
- """
- A Schema defines how to flatten a possibly hierarchical object into tuple of
- primitive objects, so it can be used as inputs/outputs of PyTorch's tracing.
-
- PyTorch does not support tracing a function that produces rich output
- structures (e.g. dict, Instances, Boxes). To trace such a function, we
- flatten the rich object into tuple of tensors, and return this tuple of tensors
- instead. Meanwhile, we also need to know how to "rebuild" the original object
- from the flattened results, so we can evaluate the flattened results.
- A Schema defines how to flatten an object, and while flattening it, it records
- necessary schemas so that the object can be rebuilt using the flattened outputs.
-
- The flattened object and the schema object is returned by ``.flatten`` classmethod.
- Then the original object can be rebuilt with the ``__call__`` method of schema.
-
- A Schema is a dataclass that can be serialized easily.
- """
-
- # inspired by FetchMapper in tensorflow/python/client/session.py
-
- @classmethod
- def flatten(cls, obj):
- raise NotImplementedError
-
- def __call__(self, values):
- raise NotImplementedError
-
- @staticmethod
- def _concat(values):
- ret = ()
- sizes = []
- for v in values:
- assert isinstance(v, tuple), "Flattened results must be a tuple"
- ret = ret + v
- sizes.append(len(v))
- return ret, sizes
-
- @staticmethod
- def _split(values, sizes):
- if len(sizes):
- expected_len = sum(sizes)
- assert (
- len(values) == expected_len
- ), f"Values has length {len(values)} but expect length {expected_len}."
- ret = []
- for k in range(len(sizes)):
- begin, end = sum(sizes[:k]), sum(sizes[: k + 1])
- ret.append(values[begin:end])
- return ret
-
-
-@dataclass
-class ListSchema(Schema):
- schemas: List[Schema] # the schemas that define how to flatten each element in the list
- sizes: List[int] # the flattened length of each element
-
- def __call__(self, values):
- values = self._split(values, self.sizes)
- if len(values) != len(self.schemas):
- raise ValueError(
- f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!"
- )
- values = [m(v) for m, v in zip(self.schemas, values)]
- return list(values)
-
- @classmethod
- def flatten(cls, obj):
- res = [flatten_to_tuple(k) for k in obj]
- values, sizes = cls._concat([k[0] for k in res])
- return values, cls([k[1] for k in res], sizes)
-
-
-@dataclass
-class TupleSchema(ListSchema):
- def __call__(self, values):
- return tuple(super().__call__(values))
-
-
-@dataclass
-class IdentitySchema(Schema):
- def __call__(self, values):
- return values[0]
-
- @classmethod
- def flatten(cls, obj):
- return (obj,), cls()
-
-
-@dataclass
-class DictSchema(ListSchema):
- keys: List[str]
-
- def __call__(self, values):
- values = super().__call__(values)
- return dict(zip(self.keys, values))
-
- @classmethod
- def flatten(cls, obj):
- for k in obj.keys():
- if not isinstance(k, str):
- raise KeyError("Only support flattening dictionaries if keys are str.")
- keys = sorted(obj.keys())
- values = [obj[k] for k in keys]
- ret, schema = ListSchema.flatten(values)
- return ret, cls(schema.schemas, schema.sizes, keys)
-
-
-@dataclass
-class InstancesSchema(DictSchema):
- def __call__(self, values):
- image_size, fields = values[-1], values[:-1]
- fields = super().__call__(fields)
- return Instances(image_size, **fields)
-
- @classmethod
- def flatten(cls, obj):
- ret, schema = super().flatten(obj.get_fields())
- size = obj.image_size
- if not isinstance(size, torch.Tensor):
- size = torch.tensor(size)
- return ret + (size,), schema
-
-
-@dataclass
-class TensorWrapSchema(Schema):
- """
- For classes that are simple wrapper of tensors, e.g.
- Boxes, RotatedBoxes, BitMasks
- """
-
- class_name: str
-
- def __call__(self, values):
- return locate(self.class_name)(values[0])
-
- @classmethod
- def flatten(cls, obj):
- return (obj.tensor,), cls(_convert_target_to_string(type(obj)))
-
-
-# if more custom structures needed in the future, can allow
-# passing in extra schemas for custom types
-def flatten_to_tuple(obj):
- """
- Flatten an object so it can be used for PyTorch tracing.
- Also returns how to rebuild the original object from the flattened outputs.
-
- Returns:
- res (tuple): the flattened results that can be used as tracing outputs
- schema: an object with a ``__call__`` method such that ``schema(res) == obj``.
- It is a pure dataclass that can be serialized.
- """
- schemas = [
- ((str, bytes), IdentitySchema),
- (list, ListSchema),
- (tuple, TupleSchema),
- (collections.abc.Mapping, DictSchema),
- (Instances, InstancesSchema),
- ((Boxes, ROIMasks), TensorWrapSchema),
- ]
- for klass, schema in schemas:
- if isinstance(obj, klass):
- F = schema
- break
- else:
- F = IdentitySchema
-
- return F.flatten(obj)
-
-
-class TracingAdapter(nn.Module):
- """
- A model may take rich input/output format (e.g. dict or custom classes),
- but `torch.jit.trace` requires tuple of tensors as input/output.
- This adapter flattens input/output format of a model so it becomes traceable.
-
- It also records the necessary schema to rebuild model's inputs/outputs from flattened
- inputs/outputs.
-
- Example:
- ::
- outputs = model(inputs) # inputs/outputs may be rich structure
- adapter = TracingAdapter(model, inputs)
-
- # can now trace the model, with adapter.flattened_inputs, or another
- # tuple of tensors with the same length and meaning
- traced = torch.jit.trace(adapter, adapter.flattened_inputs)
-
- # traced model can only produce flattened outputs (tuple of tensors)
- flattened_outputs = traced(*adapter.flattened_inputs)
- # adapter knows the schema to convert it back (new_outputs == outputs)
- new_outputs = adapter.outputs_schema(flattened_outputs)
- """
-
- flattened_inputs: Tuple[torch.Tensor] = None
- """
- Flattened version of inputs given to this class's constructor.
- """
-
- inputs_schema: Schema = None
- """
- Schema of the inputs given to this class's constructor.
- """
-
- outputs_schema: Schema = None
- """
- Schema of the output produced by calling the given model with inputs.
- """
-
- def __init__(
- self,
- model: nn.Module,
- inputs,
- inference_func: Optional[Callable] = None,
- allow_non_tensor: bool = False,
- ):
- """
- Args:
- model: an nn.Module
- inputs: An input argument or a tuple of input arguments used to call model.
- After flattening, it has to only consist of tensors.
- inference_func: a callable that takes (model, *inputs), calls the
- model with inputs, and return outputs. By default it
- is ``lambda model, *inputs: model(*inputs)``. Can be overridden
- if you need to call the model differently.
- allow_non_tensor: allow inputs/outputs to contain non-tensor objects.
- This option will filter out non-tensor objects to make the
- model traceable, but ``inputs_schema``/``outputs_schema`` cannot be
- used anymore because inputs/outputs cannot be rebuilt from pure tensors.
- This is useful when you're only interested in the single trace of
- execution (e.g. for flop count), but not interested in
- generalizing the traced graph to new inputs.
- """
- super().__init__()
- if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)):
- model = model.module
- self.model = model
- if not isinstance(inputs, tuple):
- inputs = (inputs,)
- self.inputs = inputs
- self.allow_non_tensor = allow_non_tensor
-
- if inference_func is None:
- inference_func = lambda model, *inputs: model(*inputs) # noqa
- self.inference_func = inference_func
-
- self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs)
-
- if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs):
- return
- if self.allow_non_tensor:
- self.flattened_inputs = tuple(
- [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)]
- )
- self.inputs_schema = None
- else:
- for input in self.flattened_inputs:
- if not isinstance(input, torch.Tensor):
- raise ValueError(
- "Inputs for tracing must only contain tensors. "
- f"Got a {type(input)} instead."
- )
-
- def forward(self, *args: torch.Tensor):
- with torch.no_grad(), patch_builtin_len():
- if self.inputs_schema is not None:
- inputs_orig_format = self.inputs_schema(args)
- else:
- if len(args) != len(self.flattened_inputs) or any(
- x is not y for x, y in zip(args, self.flattened_inputs)
- ):
- raise ValueError(
- "TracingAdapter does not contain valid inputs_schema."
- " So it cannot generalize to other inputs and must be"
- " traced with `.flattened_inputs`."
- )
- inputs_orig_format = self.inputs
-
- outputs = self.inference_func(self.model, *inputs_orig_format)
- flattened_outputs, schema = flatten_to_tuple(outputs)
-
- flattened_output_tensors = tuple(
- [x for x in flattened_outputs if isinstance(x, torch.Tensor)]
- )
- if len(flattened_output_tensors) < len(flattened_outputs):
- if self.allow_non_tensor:
- flattened_outputs = flattened_output_tensors
- self.outputs_schema = None
- else:
- raise ValueError(
- "Model cannot be traced because some model outputs "
- "cannot flatten to tensors."
- )
- else: # schema is valid
- if self.outputs_schema is None:
- self.outputs_schema = schema
- else:
- assert self.outputs_schema == schema, (
- "Model should always return outputs with the same "
- "structure so it can be traced!"
- )
- return flattened_outputs
-
- def _create_wrapper(self, traced_model):
- """
- Return a function that has an input/output interface the same as the
- original model, but it calls the given traced model under the hood.
- """
-
- def forward(*args):
- flattened_inputs, _ = flatten_to_tuple(args)
- flattened_outputs = traced_model(*flattened_inputs)
- return self.outputs_schema(flattened_outputs)
-
- return forward
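At its core, `flatten_to_tuple` turns a structured object into a flat tuple of tensors plus a schema that can rebuild it. A detectron2-free miniature of that round trip for a plain dict (illustration only, not the real `DictSchema`):

import torch

def flatten_dict(obj):
    # Sort keys so the flattened order is deterministic, as DictSchema does.
    keys = sorted(obj.keys())
    return tuple(obj[k] for k in keys), keys

def rebuild_dict(values, keys):
    return dict(zip(keys, values))

obj = {"scores": torch.tensor([0.9, 0.1]), "boxes": torch.zeros(2, 4)}
values, keys = flatten_dict(obj)
print(keys, [tuple(v.shape) for v in values])  # ['boxes', 'scores'] [(2, 4), (2,)]
print(rebuild_dict(values, keys).keys())       # dict_keys(['boxes', 'scores'])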
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py b/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py
deleted file mode 100644
index 8228d20ab4165515c2d1d09ae679473a53dbb6ed..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py
+++ /dev/null
@@ -1,308 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-"""This Python code defines a class Dataset with methods for initializing, loading,
-and manipulating datasets from different backends such as Hugging Face and JSON.
-
-The `Dataset` class includes methods for loading datasets from a dictionary and a Hugging
-Face dataset, mapping datasets, and retrieving the backend dataset and arguments.
-"""
-
-
-
-# Importing necessary libraries and modules
-import json
-from pathlib import Path
-from typing import Optional
-
-from datasets import load_dataset
-from datasets import Dataset as HFDataset
-
-from lmflow.args import DatasetArguments
-
-DATASET_TYPES = [
- "text_only",
- "text2text",
-]
-
-KEY_TYPE = "type"
-KEY_INSTANCES = "instances"
-
-class Dataset:
- r"""
- Initializes the Dataset object with the given parameters.
-
- Parameters
- ------------
- data_args : DatasetArguments object.
- Contains the arguments required to load the dataset.
-
- backend : str, default="huggingface"
- A string representing the dataset backend. Defaults to "huggingface".
-
- args : Optional.
- Positional arguments.
-
- kwargs : Optional.
- Keyword arguments.
- """
- def __init__(self, data_args=None, backend: str="huggingface", *args, **kwargs):
- self.data_args = data_args
- self.backend = backend
- self.backend_dataset = None
- self.type = None # Original type of the dataset
- self.dataset_path = data_args.dataset_path
-
- if data_args.dataset_path is None:
- return
-
- if backend == "huggingface":
- data_files = [
- x.absolute().as_posix()
- for x in Path(self.dataset_path).glob("*.json")
- ]
-
- # Iterate through all the files and ensure they have the same data type
- for single_file in data_files:
- with open(single_file) as fin:
- json_data = json.load(fin)
- if KEY_TYPE not in json_data.keys():
- raise ValueError(
- f'"{KEY_TYPE}" field must be specified for data, e.g.'
- '{\n'
- f' "{KEY_TYPE}": "text_only",\n'
- f' "{KEY_INSTANCES}": [\n'
- ' { "text": "Sentence 1: This is a sentence." },\n'
- ' { "text": "Sentence 2: This is another sentence." }\n'
- f' ]\n'
- '}'
- )
-
- if self.type is None:
- self.type = json_data[KEY_TYPE]
- elif self.type != json_data[KEY_TYPE]:
- raise ValueError(
- 'All task files must have the same data type. Previous'
- f' files have type "{self.type}", but in file'
- f' {single_file}, it has type "{json_data[KEY_TYPE]}".'
- )
-
- # Load the dataset using the HuggingFace dataset library
- extensions = "json"
- raw_dataset = load_dataset(
- extensions,
- data_files=data_files,
- field=KEY_INSTANCES,
- split="train",
- use_auth_token=None,
- )
- self.backend_dataset = raw_dataset
- elif backend == "json":
- # TODO (@Jiachun)
- pass
- else:
- raise NotImplementedError(f'Unsupported dataset backend "{backend}"')
-
-
- def _check_data_type(self):
- # TODO: check if data type and data structure matches, raise messages
- # with hints
- pass
-
-
- def from_dict(self, dict_obj: dict, *args, **kwargs):
- r"""
- Create a Dataset object from a dictionary.
-
- Return a Dataset given a dict with format:
- {
- "type": TYPE,
- "instances": [
- {
- "key_1": VALUE_1.1,
- "key_2": VALUE_1.2,
- ...
- },
- {
- "key_1": VALUE_2.1,
- "key_2": VALUE_2.2,
- ...
- },
- ...
- ]
- }
-
- Parameters
- -----------
-
- dict_obj : dict.
- A dictionary containing the dataset information.
-
- args : Optional.
- Positional arguments.
-
- kwargs : Optional.
- Keyword arguments.
-
- Returns
- ---------
-
- self : Dataset object.
- """
- if self.backend == "huggingface":
- if KEY_TYPE not in dict_obj:
- raise ValueError(
- f'"{KEY_TYPE}" must be provided to initialize a dataset'
- )
- if KEY_INSTANCES not in dict_obj:
- raise ValueError(
- f'"{KEY_INSTANCES}" must be provided to initialize a dataset'
- )
-
- self.type = dict_obj[KEY_TYPE]
-
- hf_dict = {}
- if len(dict_obj[KEY_INSTANCES]) > 0:
- for key in dict_obj[KEY_INSTANCES][0].keys():
- hf_dict[key] = [ instance[key] for instance in dict_obj[KEY_INSTANCES] ]
-
- self.backend_dataset = HFDataset.from_dict(hf_dict, *args, **kwargs)
- return self
- else:
- raise NotImplementedError(
- f'Currently .from_dict is not supported for backend "{self.backend}"'
- )
-
-
- @classmethod
- def create_from_dict(cls, dict_obj, *args, **kwargs):
- r"""
- Returns
- --------
-
- Returns a Dataset object given a dict.
- """
- empty_data_args = DatasetArguments(dataset_path=None)
- dataset = Dataset(empty_data_args)
- return dataset.from_dict(dict_obj)
-
-
- def to_dict(self):
- r"""
- Returns
- ---------
-
- Return a dict represents the dataset:
- {
- "type": TYPE,
- "instances": [
- {
- "key_1": VALUE_1.1,
- "key_2": VALUE_1.2,
- ...
- },
- {
- "key_1": VALUE_2.1,
- "key_2": VALUE_2.2,
- ...
- },
- ...
- ]
- }
-
- A python dict object represents the content of this dataset.
- """
- if self.backend == "huggingface":
- dict_obj = {}
- dict_obj[KEY_TYPE] = self.get_type()
-
- hf_dict = self.backend_dataset.to_dict()
- dict_obj[KEY_INSTANCES] = []
-
- first_key = None
- for key in hf_dict.keys():
- first_key = key
- break
-
- if first_key is not None:
- num_instances = len(hf_dict[first_key])
- dict_obj[KEY_INSTANCES] = [
- {
- key: hf_dict[key][i] for key in hf_dict.keys()
- }
- for i in range(num_instances)
- ]
-
- return dict_obj
- else:
- raise NotImplementedError(
- f'Currently .to_dict is not supported for backend "{self.backend}"'
- )
-
-
- def map(self, *args, **kwargs):
- r"""
- Parameters
- ------------
- args : Optional.
- Positional arguments.
-
- kwargs : Optional.
- Keyword arguments.
-
- Returns
- ---------
-
- self : Dataset object.
- """
- # If the dataset uses Hugging Face as the backend,
- # call the `map()` function of the Hugging Face backend dataset
- if self.backend == "huggingface":
- # Set the mapped dataset as the backend dataset of the current dataset
- mapped_backend_dataset = self.backend_dataset.map(*args, **kwargs)
- self.backend_dataset = mapped_backend_dataset
- return self
- else:
- # If the backend is not Hugging Face, raise a NotImplementedError
- raise NotImplementedError(
- f'Currently .map is not supported for backend "{self.backend}"'
- )
-
-
- def get_backend(self) -> Optional[str]:
- r"""
- Returns
- ---------
-
- self.backend
- """
- return self.backend
-
-
- def get_backend_dataset(self):
- r"""
- Returns
- ---------
-
- self.backend_dataset
- """
- return self.backend_dataset
-
-
- def get_data_args(self):
- r"""
- Returns
- ---------
-
- self.data_args
- """
- return self.data_args
-
-
- def get_type(self):
- r"""
- Returns
- ---------
-
- self.type
- """
- return self.type
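For reference, `Dataset.create_from_dict` above expects the documented `{"type": ..., "instances": [...]}` layout. A hypothetical input in that shape; the sentences are invented, and the commented-out calls assume lmflow is installed:

data = {
    "type": "text_only",
    "instances": [
        {"text": "Sentence 1: This is a sentence."},
        {"text": "Sentence 2: This is another sentence."},
    ],
}
# dataset = Dataset.create_from_dict(data)
# print(dataset.get_type())                    # "text_only"
# print(len(dataset.to_dict()["instances"]))   # 2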
diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py
deleted file mode 100644
index 1ac13e774775dfad8e18e728e8b33ca2a40b8f65..0000000000000000000000000000000000000000
--- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from dataclasses import dataclass
-from enum import auto, Enum
-import json
-
-from PIL.Image import Image
-import streamlit as st
-from streamlit.delta_generator import DeltaGenerator
-
-TOOL_PROMPT = 'Answer the following questions as best as you can. You have access to the following tools:\n'
-
-class Role(Enum):
- SYSTEM = auto()
- USER = auto()
- ASSISTANT = auto()
- TOOL = auto()
- INTERPRETER = auto()
- OBSERVATION = auto()
-
- def __str__(self):
- match self:
- case Role.SYSTEM:
- return "<|system|>"
- case Role.USER:
- return "<|user|>"
- case Role.ASSISTANT | Role.TOOL | Role.INTERPRETER:
- return "<|assistant|>"
- case Role.OBSERVATION:
- return "<|observation|>"
-
- # Get the message block for the given role
- def get_message(self):
- # Compare by value here, because the enum object in the session state
- # is not the same as the enum cases here, due to streamlit's rerunning
- # behavior.
- match self.value:
- case Role.SYSTEM.value:
- return
- case Role.USER.value:
- return st.chat_message(name="user", avatar="user")
- case Role.ASSISTANT.value:
- return st.chat_message(name="assistant", avatar="assistant")
- case Role.TOOL.value:
- return st.chat_message(name="tool", avatar="assistant")
- case Role.INTERPRETER.value:
- return st.chat_message(name="interpreter", avatar="assistant")
- case Role.OBSERVATION.value:
- return st.chat_message(name="observation", avatar="user")
- case _:
- st.error(f'Unexpected role: {self}')
-
-@dataclass
-class Conversation:
- role: Role
- content: str
- tool: str | None = None
- image: Image | None = None
-
- def __str__(self) -> str:
- print(self.role, self.content, self.tool)
- match self.role:
- case Role.SYSTEM | Role.USER | Role.ASSISTANT | Role.OBSERVATION:
- return f'{self.role}\n{self.content}'
- case Role.TOOL:
- return f'{self.role}{self.tool}\n{self.content}'
- case Role.INTERPRETER:
- return f'{self.role}interpreter\n{self.content}'
-
- # Human readable format
- def get_text(self) -> str:
- text = postprocess_text(self.content)
- match self.role.value:
- case Role.TOOL.value:
- text = f'Calling tool `{self.tool}`:\n{text}'
- case Role.INTERPRETER.value:
- text = f'{text}'
- case Role.OBSERVATION.value:
- text = f'Observation:\n```\n{text}\n```'
- return text
-
- # Display as a markdown block
- def show(self, placeholder: DeltaGenerator | None=None) -> str:
- if placeholder:
- message = placeholder
- else:
- message = self.role.get_message()
- if self.image:
- message.image(self.image)
- else:
- text = self.get_text()
- message.markdown(text)
-
-def preprocess_text(
- system: str | None,
- tools: list[dict] | None,
- history: list[Conversation],
-) -> str:
- if tools:
- tools = json.dumps(tools, indent=4, ensure_ascii=False)
-
- prompt = f"{Role.SYSTEM}\n"
- prompt += system if not tools else TOOL_PROMPT
- if tools:
- tools = json.loads(tools)
- prompt += json.dumps(tools, ensure_ascii=False)
- for conversation in history:
- prompt += f'{conversation}'
- prompt += f'{Role.ASSISTANT}\n'
- return prompt
-
-def postprocess_text(text: str) -> str:
- text = text.replace(r"\(", "$")
- text = text.replace(r"\)", "$")
- text = text.replace(r"\[", "$$")
- text = text.replace(r"\]", "$$")
- text = text.replace("<|assistant|>", "")
- text = text.replace("<|observation|>", "")
- text = text.replace("<|system|>", "")
- text = text.replace("<|user|>", "")
- return text.strip()
\ No newline at end of file
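A quick standalone check of the `postprocess_text` cleanup above, applied to a made-up model response; the replacement table mirrors the deleted function:

def postprocess_text(text: str) -> str:
    for old, new in [(r"\(", "$"), (r"\)", "$"), (r"\[", "$$"), (r"\]", "$$"),
                     ("<|assistant|>", ""), ("<|observation|>", ""),
                     ("<|system|>", ""), ("<|user|>", "")]:
        text = text.replace(old, new)
    return text.strip()

raw = r"<|assistant|> The area is \(\pi r^2\), written as \[ A = \pi r^2 \]"
print(postprocess_text(raw))  # The area is $\pi r^2$, written as $$ A = \pi r^2 $$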
diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/__init__.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py
deleted file mode 100644
index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py
deleted file mode 100644
index 01508df74cfe8a722dd937b7f54b12296258c5a1..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py
+++ /dev/null
@@ -1,486 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/maskformer_model.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-from typing import Tuple
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.data import MetadataCatalog
-from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head
-from detectron2.modeling.backbone import Backbone
-from detectron2.modeling.postprocessing import sem_seg_postprocess
-from detectron2.structures import Boxes, ImageList, Instances, BitMasks
-from detectron2.utils.memory import retry_if_cuda_oom
-
-from .modeling.criterion import SetCriterion
-from .modeling.matcher import HungarianMatcher
-from einops import rearrange
-from .modeling.transformer_decoder.text_transformer import TextTransformer
-from .modeling.transformer_decoder.oneformer_transformer_decoder import MLP
-from oneformer.data.tokenizer import SimpleTokenizer, Tokenize
-
-@META_ARCH_REGISTRY.register()
-class OneFormer(nn.Module):
- """
- Main class for mask classification semantic segmentation architectures.
- """
-
- @configurable
- def __init__(
- self,
- *,
- backbone: Backbone,
- sem_seg_head: nn.Module,
- task_mlp: nn.Module,
- text_encoder: nn.Module,
- text_projector: nn.Module,
- criterion: nn.Module,
- prompt_ctx: nn.Embedding,
- num_queries: int,
- object_mask_threshold: float,
- overlap_threshold: float,
- metadata,
- size_divisibility: int,
- sem_seg_postprocess_before_inference: bool,
- pixel_mean: Tuple[float],
- pixel_std: Tuple[float],
- # inference
- semantic_on: bool,
- panoptic_on: bool,
- instance_on: bool,
- detection_on: bool,
- test_topk_per_image: int,
- task_seq_len: int,
- max_seq_len: int,
- is_demo: bool,
- ):
- """
- Args:
- backbone: a backbone module, must follow detectron2's backbone interface
- sem_seg_head: a module that predicts semantic segmentation from backbone features
- criterion: a module that defines the loss
- num_queries: int, number of queries
- object_mask_threshold: float, threshold to filter query based on classification score
- for panoptic segmentation inference
- overlap_threshold: overlap threshold used in general inference for panoptic segmentation
- metadata: dataset meta, get `thing` and `stuff` category names for panoptic
- segmentation inference
- size_divisibility: Some backbones require the input height and width to be divisible by a
- specific integer. We can use this to override such requirement.
- sem_seg_postprocess_before_inference: whether to resize the prediction back
- to original input size before semantic segmentation inference or after.
- For high-resolution dataset like Mapillary, resizing predictions before
- inference will cause OOM error.
- pixel_mean, pixel_std: list or tuple with #channels element, representing
- the per-channel mean and std to be used to normalize the input image
- semantic_on: bool, whether to output semantic segmentation prediction
- instance_on: bool, whether to output instance segmentation prediction
- panoptic_on: bool, whether to output panoptic segmentation prediction
- test_topk_per_image: int, instance segmentation parameter, keep topk instances per image
- """
- super().__init__()
- self.backbone = backbone
- self.sem_seg_head = sem_seg_head
- self.task_mlp = task_mlp
- self.text_encoder = text_encoder
- self.text_projector = text_projector
- self.prompt_ctx = prompt_ctx
- self.criterion = criterion
- self.num_queries = num_queries
- self.overlap_threshold = overlap_threshold
- self.object_mask_threshold = object_mask_threshold
- self.metadata = metadata
- if size_divisibility < 0:
- # use backbone size_divisibility if not set
- size_divisibility = self.backbone.size_divisibility
- self.size_divisibility = size_divisibility
- self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference
- self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False)
- self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False)
-
- # additional args
- self.semantic_on = semantic_on
- self.instance_on = instance_on
- self.panoptic_on = panoptic_on
- self.detection_on = detection_on
- self.test_topk_per_image = test_topk_per_image
-
- self.text_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=max_seq_len)
- self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len)
- self.is_demo = is_demo
-
- self.thing_indices = [k for k in self.metadata.thing_dataset_id_to_contiguous_id.keys()]
-
- if not self.semantic_on:
- assert self.sem_seg_postprocess_before_inference
-
- @classmethod
- def from_config(cls, cfg):
- backbone = build_backbone(cfg)
- sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape())
-
- if cfg.MODEL.IS_TRAIN:
- text_encoder = TextTransformer(context_length=cfg.MODEL.TEXT_ENCODER.CONTEXT_LENGTH,
- width=cfg.MODEL.TEXT_ENCODER.WIDTH,
- layers=cfg.MODEL.TEXT_ENCODER.NUM_LAYERS,
- vocab_size=cfg.MODEL.TEXT_ENCODER.VOCAB_SIZE)
- text_projector = MLP(text_encoder.width, cfg.MODEL.ONE_FORMER.HIDDEN_DIM,
- cfg.MODEL.ONE_FORMER.HIDDEN_DIM, cfg.MODEL.TEXT_ENCODER.PROJ_NUM_LAYERS)
- if cfg.MODEL.TEXT_ENCODER.N_CTX > 0:
- prompt_ctx = nn.Embedding(cfg.MODEL.TEXT_ENCODER.N_CTX, cfg.MODEL.TEXT_ENCODER.WIDTH)
- else:
- prompt_ctx = None
- else:
- text_encoder = None
- text_projector = None
- prompt_ctx = None
-
- task_mlp = MLP(cfg.INPUT.TASK_SEQ_LEN, cfg.MODEL.ONE_FORMER.HIDDEN_DIM,
- cfg.MODEL.ONE_FORMER.HIDDEN_DIM, 2)
-
- # Loss parameters:
- deep_supervision = cfg.MODEL.ONE_FORMER.DEEP_SUPERVISION
- no_object_weight = cfg.MODEL.ONE_FORMER.NO_OBJECT_WEIGHT
-
- # loss weights
- class_weight = cfg.MODEL.ONE_FORMER.CLASS_WEIGHT
- dice_weight = cfg.MODEL.ONE_FORMER.DICE_WEIGHT
- mask_weight = cfg.MODEL.ONE_FORMER.MASK_WEIGHT
- contrastive_weight = cfg.MODEL.ONE_FORMER.CONTRASTIVE_WEIGHT
-
- # building criterion
- matcher = HungarianMatcher(
- cost_class=class_weight,
- cost_mask=mask_weight,
- cost_dice=dice_weight,
- num_points=cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS,
- )
-
- weight_dict = {"loss_ce": class_weight, "loss_mask": mask_weight,
- "loss_dice": dice_weight, "loss_contrastive": contrastive_weight}
-
-
- if deep_supervision:
- dec_layers = cfg.MODEL.ONE_FORMER.DEC_LAYERS
- aux_weight_dict = {}
- for i in range(dec_layers - 1):
- aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
- weight_dict.update(aux_weight_dict)
-
- losses = ["labels", "masks", "contrastive"]
-
- criterion = SetCriterion(
- sem_seg_head.num_classes,
- matcher=matcher,
- weight_dict=weight_dict,
- eos_coef=no_object_weight,
- contrast_temperature=cfg.MODEL.ONE_FORMER.CONTRASTIVE_TEMPERATURE,
- losses=losses,
- num_points=cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS,
- oversample_ratio=cfg.MODEL.ONE_FORMER.OVERSAMPLE_RATIO,
- importance_sample_ratio=cfg.MODEL.ONE_FORMER.IMPORTANCE_SAMPLE_RATIO,
- )
-
- return {
- "backbone": backbone,
- "sem_seg_head": sem_seg_head,
- "task_mlp": task_mlp,
- "prompt_ctx": prompt_ctx,
- "text_encoder": text_encoder,
- "text_projector": text_projector,
- "criterion": criterion,
- "num_queries": cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES,
- "object_mask_threshold": cfg.MODEL.TEST.OBJECT_MASK_THRESHOLD,
- "overlap_threshold": cfg.MODEL.TEST.OVERLAP_THRESHOLD,
- "metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]),
- "size_divisibility": cfg.MODEL.ONE_FORMER.SIZE_DIVISIBILITY,
- "sem_seg_postprocess_before_inference": (
- cfg.MODEL.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE
- or cfg.MODEL.TEST.PANOPTIC_ON
- or cfg.MODEL.TEST.INSTANCE_ON
- ),
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
- "pixel_std": cfg.MODEL.PIXEL_STD,
- # inference
- "semantic_on": cfg.MODEL.TEST.SEMANTIC_ON,
- "instance_on": cfg.MODEL.TEST.INSTANCE_ON,
- "panoptic_on": cfg.MODEL.TEST.PANOPTIC_ON,
- "detection_on": cfg.MODEL.TEST.DETECTION_ON,
- "test_topk_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
- "task_seq_len": cfg.INPUT.TASK_SEQ_LEN,
- "max_seq_len": cfg.INPUT.MAX_SEQ_LEN,
- "is_demo": cfg.MODEL.IS_DEMO,
- }
-
- @property
- def device(self):
- return self.pixel_mean.device
-
- def encode_text(self, text):
- assert text.ndim in [2, 3], text.ndim
- b = text.shape[0]
- squeeze_dim = False
- num_text = 1
- if text.ndim == 3:
- num_text = text.shape[1]
- text = rearrange(text, 'b n l -> (b n) l', n=num_text)
- squeeze_dim = True
-
- # [B, C]
- x = self.text_encoder(text)
-
- text_x = self.text_projector(x)
-
- if squeeze_dim:
- text_x = rearrange(text_x, '(b n) c -> b n c', n=num_text)
- if self.prompt_ctx is not None:
- text_ctx = self.prompt_ctx.weight.unsqueeze(0).repeat(text_x.shape[0], 1, 1)
- text_x = torch.cat([text_x, text_ctx], dim=1)
-
- return {"texts": text_x}
-
- def forward(self, batched_inputs):
- """
- Args:
- batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
- Each item in the list contains the inputs for one image.
- For now, each item in the list is a dict that contains:
- * "image": Tensor, image in (C, H, W) format.
- * "instances": per-region ground truth
- * Other information that's included in the original dicts, such as:
- "height", "width" (int): the output resolution of the model (may be different
- from input resolution), used in inference.
- Returns:
- list[dict]:
- each dict has the results for one image. The dict contains the following keys:
- * "sem_seg":
- A Tensor that represents the
- per-pixel segmentation predicted by the head.
- The prediction has shape KxHxW that represents the logits of
- each class for each pixel.
- * "panoptic_seg":
- A tuple that represents the panoptic output
- panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment.
- segments_info (list[dict]): Describe each segment in `panoptic_seg`.
- Each dict contains keys "id", "category_id", "isthing".
- """
- images = [x["image"].to(self.device) for x in batched_inputs]
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
- images = ImageList.from_tensors(images, self.size_divisibility)
-
- tasks = torch.cat([self.task_tokenizer(x["task"]).to(self.device).unsqueeze(0) for x in batched_inputs], dim=0)
- tasks = self.task_mlp(tasks.float())
-
- features = self.backbone(images.tensor)
- outputs = self.sem_seg_head(features, tasks)
-
- if self.training:
- texts = torch.cat([self.text_tokenizer(x["text"]).to(self.device).unsqueeze(0) for x in batched_inputs], dim=0)
- texts_x = self.encode_text(texts)
-
- outputs = {**outputs, **texts_x}
-
- # mask classification target
- if "instances" in batched_inputs[0]:
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
- targets = self.prepare_targets(gt_instances, images)
- else:
- targets = None
-
- # bipartite matching-based loss
- losses = self.criterion(outputs, targets)
-
- for k in list(losses.keys()):
- if k in self.criterion.weight_dict:
- losses[k] *= self.criterion.weight_dict[k]
- else:
- # remove this loss if not specified in `weight_dict`
- losses.pop(k)
- return losses
- else:
- mask_cls_results = outputs["pred_logits"]
- mask_pred_results = outputs["pred_masks"]
- # upsample masks
- mask_pred_results = F.interpolate(
- mask_pred_results,
- size=(images.tensor.shape[-2], images.tensor.shape[-1]),
- mode="bilinear",
- align_corners=False,
- )
-
- del outputs
-
- processed_results = []
- for i, data in enumerate(zip(
- mask_cls_results, mask_pred_results, batched_inputs, images.image_sizes
- )):
- mask_cls_result, mask_pred_result, input_per_image, image_size = data
- height = input_per_image.get("height", image_size[0])
- width = input_per_image.get("width", image_size[1])
- processed_results.append({})
-
- if self.sem_seg_postprocess_before_inference:
- mask_pred_result = retry_if_cuda_oom(sem_seg_postprocess)(
- mask_pred_result, image_size, height, width
- )
- mask_cls_result = mask_cls_result.to(mask_pred_result)
-
- # semantic segmentation inference
- if self.semantic_on:
- r = retry_if_cuda_oom(self.semantic_inference)(mask_cls_result, mask_pred_result)
- if not self.sem_seg_postprocess_before_inference:
- r = retry_if_cuda_oom(sem_seg_postprocess)(r, image_size, height, width)
- processed_results[-1]["sem_seg"] = r
-
- # panoptic segmentation inference
- if self.panoptic_on:
- panoptic_r = retry_if_cuda_oom(self.panoptic_inference)(mask_cls_result, mask_pred_result)
- processed_results[-1]["panoptic_seg"] = panoptic_r
-
- # instance segmentation inference
- if self.instance_on:
- instance_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result, input_per_image["task"])
- processed_results[-1]["instances"] = instance_r
-
- if self.detection_on:
- bbox_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result, input_per_image["task"])
- processed_results[-1]["box_instances"] = bbox_r
-
- return processed_results
-
- def prepare_targets(self, targets, images):
- h_pad, w_pad = images.tensor.shape[-2:]
- new_targets = []
- for targets_per_image in targets:
- # pad gt
- gt_masks = targets_per_image.gt_masks
- padded_masks = torch.zeros((gt_masks.shape[0], h_pad, w_pad), dtype=gt_masks.dtype, device=gt_masks.device)
- padded_masks[:, : gt_masks.shape[1], : gt_masks.shape[2]] = gt_masks
- new_targets.append(
- {
- "labels": targets_per_image.gt_classes,
- "masks": padded_masks,
- }
- )
- return new_targets
-
- def semantic_inference(self, mask_cls, mask_pred):
- mask_cls = F.softmax(mask_cls, dim=-1)[..., :-1]
- mask_pred = mask_pred.sigmoid()
- semseg = torch.einsum("qc,qhw->chw", mask_cls, mask_pred)
- return semseg
-
- def panoptic_inference(self, mask_cls, mask_pred):
- scores, labels = F.softmax(mask_cls, dim=-1).max(-1)
- mask_pred = mask_pred.sigmoid()
-
- keep = labels.ne(self.sem_seg_head.num_classes) & (scores > self.object_mask_threshold)
- cur_scores = scores[keep]
- cur_classes = labels[keep]
- cur_masks = mask_pred[keep]
- cur_mask_cls = mask_cls[keep]
- cur_mask_cls = cur_mask_cls[:, :-1]
-
- cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks
-
- h, w = cur_masks.shape[-2:]
- panoptic_seg = torch.zeros((h, w), dtype=torch.int32, device=cur_masks.device)
- segments_info = []
-
- current_segment_id = 0
-
- if cur_masks.shape[0] == 0:
- # We didn't detect any mask :(
- return panoptic_seg, segments_info
- else:
- # take argmax
- cur_mask_ids = cur_prob_masks.argmax(0)
- stuff_memory_list = {}
- for k in range(cur_classes.shape[0]):
- pred_class = cur_classes[k].item()
- isthing = pred_class in self.metadata.thing_dataset_id_to_contiguous_id.values()
- mask_area = (cur_mask_ids == k).sum().item()
- original_area = (cur_masks[k] >= 0.5).sum().item()
- mask = (cur_mask_ids == k) & (cur_masks[k] >= 0.5)
-
- if mask_area > 0 and original_area > 0 and mask.sum().item() > 0:
- if mask_area / original_area < self.overlap_threshold:
- continue
-
- # merge stuff regions
- if not isthing:
- if int(pred_class) in stuff_memory_list.keys():
- panoptic_seg[mask] = stuff_memory_list[int(pred_class)]
- continue
- else:
- stuff_memory_list[int(pred_class)] = current_segment_id + 1
-
- current_segment_id += 1
- panoptic_seg[mask] = current_segment_id
-
- segments_info.append(
- {
- "id": current_segment_id,
- "isthing": bool(isthing),
- "category_id": int(pred_class),
- }
- )
-
- return panoptic_seg, segments_info
-
- def instance_inference(self, mask_cls, mask_pred, task_type):
- # mask_pred is already processed to have the same shape as original input
- image_size = mask_pred.shape[-2:]
-
- # [Q, K]
- scores = F.softmax(mask_cls, dim=-1)[:, :-1]
- labels = torch.arange(self.sem_seg_head.num_classes, device=self.device).unsqueeze(0).repeat(self.num_queries, 1).flatten(0, 1)
-
- # scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.num_queries, sorted=False)
- scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.test_topk_per_image, sorted=False)
- labels_per_image = labels[topk_indices]
-
- topk_indices = topk_indices // self.sem_seg_head.num_classes
- # mask_pred = mask_pred.unsqueeze(1).repeat(1, self.sem_seg_head.num_classes, 1).flatten(0, 1)
- mask_pred = mask_pred[topk_indices]
-
- # Only consider scores with confidence over [self.object_mask_threshold] for demo
- if self.is_demo:
- keep = scores_per_image > self.object_mask_threshold
- scores_per_image = scores_per_image[keep]
- labels_per_image = labels_per_image[keep]
- mask_pred = mask_pred[keep]
-
- # if this is panoptic segmentation, we only keep the "thing" classes
- if self.panoptic_on:
- keep = torch.zeros_like(scores_per_image).bool()
- for i, lab in enumerate(labels_per_image):
- keep[i] = lab in self.metadata.thing_dataset_id_to_contiguous_id.values()
-
- scores_per_image = scores_per_image[keep]
- labels_per_image = labels_per_image[keep]
- mask_pred = mask_pred[keep]
-
- if 'ade20k' in self.metadata.name and not self.is_demo and "instance" in task_type:
- for i in range(labels_per_image.shape[0]):
- labels_per_image[i] = self.thing_indices.index(labels_per_image[i].item())
-
- result = Instances(image_size)
- # mask (before sigmoid)
- result.pred_masks = (mask_pred > 0).float()
- if self.detection_on:
-            # Compute boxes from masks (this is slow)
- result.pred_boxes = BitMasks(mask_pred > 0).get_bounding_boxes()
- else:
- result.pred_boxes = Boxes(torch.zeros(mask_pred.size(0), 4))
-
- # calculate average mask prob
- mask_scores_per_image = (mask_pred.sigmoid().flatten(1) * result.pred_masks.flatten(1)).sum(1) / (result.pred_masks.flatten(1).sum(1) + 1e-6)
- result.scores = scores_per_image * mask_scores_per_image
- result.pred_classes = labels_per_image
- return result
\ No newline at end of file
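
A minimal standalone sketch of the `semantic_inference` step above, assuming only torch: it shows how Q query-level class probabilities and mask probabilities are collapsed into a per-pixel class map. All shapes are toy values chosen for illustration.

import torch
import torch.nn.functional as F

# Toy shapes: Q queries, C classes plus one "no object" column, an H x W feature map.
Q, C, H, W = 5, 3, 4, 4
mask_cls = torch.randn(Q, C + 1)    # per-query class logits (last column = "no object")
mask_pred = torch.randn(Q, H, W)    # per-query mask logits

cls_prob = F.softmax(mask_cls, dim=-1)[..., :-1]   # drop the "no object" column -> (Q, C)
mask_prob = mask_pred.sigmoid()                    # (Q, H, W)

# Each pixel's class score is a query-weighted sum, exactly the einsum used above.
semseg = torch.einsum("qc,qhw->chw", cls_prob, mask_prob)
print(semseg.shape)  # torch.Size([3, 4, 4])
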
diff --git a/spaces/PVIT/pvit/Home.py b/spaces/PVIT/pvit/Home.py
deleted file mode 100644
index 212349d3feab42d296b1cc2583840ccb3b346b49..0000000000000000000000000000000000000000
--- a/spaces/PVIT/pvit/Home.py
+++ /dev/null
@@ -1,616 +0,0 @@
-import os
-import re
-import copy
-import json
-import yaml
-import random
-import streamlit as st
-from PIL import Image, ImageDraw
-import requests
-import base64
-from io import BytesIO
-import seaborn as sns
-import matplotlib.pyplot as plt
-import pandas as pd
-
-from collections import defaultdict
-import datetime
-import json
-import os
-import time
-
-import gradio as gr
-import requests
-
-import hashlib
-import time
-
-import streamlit as st
-import streamlit.components.v1 as components
-from streamlit_chat import message as st_message
-from streamlit_drawable_canvas import st_canvas
-
-st.set_page_config(page_title="Model Chat", page_icon="🌍", layout="wide", initial_sidebar_state="collapsed")
-
-col_img, col_chat = st.columns([1, 1])
-with col_chat:
- with st.container():
- input_area = st.container()
- chatbox = st.container()
-
-# ==================== Conversation =================== #
-import dataclasses
-from enum import auto, Enum
-from typing import List, Tuple
-
-
-class SeparatorStyle(Enum):
- """Different separator style."""
- SINGLE = auto()
- TWO = auto()
-
-import re
-# Hack for displaying <Region> tags in the Chatbot: escape the angle brackets so the tag shows as literal text
-def convert_region_tags(text):
-    pattern = r'<Region>(.*?)<\/Region>'
-    replaced_text = re.sub(pattern, lambda m: '&lt;Region&gt;' + m.group(1).replace('<', '&lt;').replace('>', '&gt;') + '&lt;/Region&gt;', text)
- return replaced_text
-
-@dataclasses.dataclass
-class Conversation:
- """A class that keeps all conversation history."""
- system: str
- roles: List[str]
- messages: List[List[str]]
- offset: int
- sep_style: SeparatorStyle = SeparatorStyle.SINGLE
- sep: str = "###"
- sep2: str = None
- version: str = "Unknown"
-
- skip_next: bool = False
-
- def get_prompt(self):
- if self.sep_style == SeparatorStyle.SINGLE:
- ret = self.system + self.sep
- for role, message in self.messages:
- if message:
- if type(message) is tuple:
- message, _, _ = message
- ret += role + ": " + message + self.sep
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == SeparatorStyle.TWO:
- seps = [self.sep, self.sep2]
- ret = self.system + seps[0]
- for i, (role, message) in enumerate(self.messages):
- if message:
- if type(message) is tuple:
- message, _, _ = message
- ret += role + ": " + message + seps[i % 2]
- else:
- ret += role + ":"
- return ret
- else:
- raise ValueError(f"Invalid style: {self.sep_style}")
-
- def append_message(self, role, message):
- self.messages.append([role, message])
-
- def get_images(self, return_pil=False):
- images = []
- for i, (role, msg) in enumerate(self.messages[self.offset:]):
- if i % 2 == 0:
- if type(msg) is tuple:
- import base64
- from io import BytesIO
- from PIL import Image
- msg, image, image_process_mode = msg
- if image_process_mode == "Pad":
- def expand2square(pil_img, background_color=(122, 116, 104)):
- width, height = pil_img.size
- if width == height:
- return pil_img
- elif width > height:
- result = Image.new(pil_img.mode, (width, width), background_color)
- result.paste(pil_img, (0, (width - height) // 2))
- return result
- else:
- result = Image.new(pil_img.mode, (height, height), background_color)
- result.paste(pil_img, ((height - width) // 2, 0))
- return result
- image = expand2square(image)
- elif image_process_mode == "Crop":
- pass
- elif image_process_mode == "Resize":
- image = image.resize((224, 224))
- else:
- raise ValueError(f"Invalid image_process_mode: {image_process_mode}")
- max_hw, min_hw = max(image.size), min(image.size)
- aspect_ratio = max_hw / min_hw
- max_len, min_len = 800, 400
- shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
- longest_edge = int(shortest_edge * aspect_ratio)
- W, H = image.size
- if H > W:
- H, W = longest_edge, shortest_edge
- else:
- H, W = shortest_edge, longest_edge
- image = image.resize((W, H))
- if return_pil:
- images.append(image)
- else:
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_b64_str = base64.b64encode(buffered.getvalue()).decode()
- images.append(img_b64_str)
- return images
-
- def to_gradio_chatbot(self):
- ret = []
- for i, (role, msg) in enumerate(self.messages[self.offset:]):
- if i % 2 == 0:
- if type(msg) is tuple:
- import base64
- from io import BytesIO
- msg, image, image_process_mode = msg
- msg = convert_region_tags(msg)
- max_hw, min_hw = max(image.size), min(image.size)
- aspect_ratio = max_hw / min_hw
- max_len, min_len = 800, 400
- shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
- longest_edge = int(shortest_edge * aspect_ratio)
- W, H = image.size
- if H > W:
- H, W = longest_edge, shortest_edge
- else:
- H, W = shortest_edge, longest_edge
- image = image.resize((W, H))
- # image = image.resize((224, 224))
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_b64_str = base64.b64encode(buffered.getvalue()).decode()
-                    img_str = f'<img src="data:image/jpeg;base64,{img_b64_str}" alt="user upload image" />'
-                    msg = msg.replace('<image>', img_str)
- else:
- msg = convert_region_tags(msg)
- ret.append([msg, None])
- else:
- if isinstance(msg, str):
- msg = convert_region_tags(msg)
- ret[-1][-1] = msg
- return ret
-
- def copy(self):
- return Conversation(
- system=self.system,
- roles=self.roles,
- messages=[[x, y] for x, y in self.messages],
- offset=self.offset,
- sep_style=self.sep_style,
- sep=self.sep,
- sep2=self.sep2)
-
- def dict(self):
- if len(self.get_images()) > 0:
- return {
- "system": self.system,
- "roles": self.roles,
- "messages": [[x, y[0] if type(y) is tuple else y] for x, y in self.messages],
- "offset": self.offset,
- "sep": self.sep,
- "sep2": self.sep2,
- }
- return {
- "system": self.system,
- "roles": self.roles,
- "messages": self.messages,
- "offset": self.offset,
- "sep": self.sep,
- "sep2": self.sep2,
- }
-
-conv_vicuna_v1_1 = Conversation(
- system="A chat between a curious user and an artificial intelligence assistant. "
- "The assistant gives helpful, detailed, and polite answers to the user's questions.",
- roles=("USER", "ASSISTANT"),
- version="v1",
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.TWO,
- sep=" ",
-    sep2="</s>",
-)
-
-default_conversation = conv_vicuna_v1_1
-
-# ==================== Chat =================== #
-
-
-def convert_bbox_to_region(bbox_xywh, image_width, image_height):
- bbox_x, bbox_y, bbox_w, bbox_h = bbox_xywh
- x1 = bbox_x
- y1 = bbox_y
- x2 = bbox_x + bbox_w
- y2 = bbox_y + bbox_h
-
- x1_normalized = x1 / image_width
- y1_normalized = y1 / image_height
- x2_normalized = x2 / image_width
- y2_normalized = y2 / image_height
-
- x1_norm = int(x1_normalized * 1000)
- y1_norm = int(y1_normalized * 1000)
- x2_norm = int(x2_normalized * 1000)
- y2_norm = int(y2_normalized * 1000)
-
- region_format = "".format(x1_norm, y1_norm, x2_norm, y2_norm)
- return region_format
-
-def load_config(config_fn, field='chat'):
- config = yaml.load(open(config_fn), Loader=yaml.Loader)
- return config[field]
-
-chat_config = load_config('configs/chat.yaml')
-
-def get_model_list():
- return ['PVIT_v1.0']
-
-def change_model(model_name):
- if model_name != st.session_state.get('model_name', ''):
- st.session_state['model_name'] = 'PVIT_v1.0'
- st.session_state['model_addr'] = chat_config['model_addr']
- st.session_state['messages'] = []
-
-
-def init_chat(image=None):
- st.session_state['image'] = image
- if 'input_message' not in st.session_state:
- st.session_state['input_message'] = ''
- if 'messages' not in st.session_state:
- st.session_state['messages'] = []
-
-def clear_messages():
- st.session_state['messages'] = []
- st.session_state['input_message'] = ''
-
-def encode_img(img):
- if isinstance(img, str):
- img = Image.open(img).convert('RGB')
- im_file = BytesIO()
- img.save(im_file, format="JPEG")
- elif isinstance(img, Image.Image):
- im_file = BytesIO()
- img.save(im_file, format="JPEG")
- else:
- im_file = img
- im_bytes = im_file.getvalue() # im_bytes: image in binary format.
- im_b64 = base64.b64encode(im_bytes).decode()
- return im_b64
-
-
-def send_one_message(message, max_new_tokens=32, temperature=0.7):
- conv = default_conversation.copy()
- # for role, msg in st.session_state['messages']:
- # with chatbox:
- # st_message(msg.lstrip('\n'), is_user=(role==conv.roles[0]))
-
- # # show message
- # with chatbox:
- # st_message(message, is_user=True)
- if 'messages' not in st.session_state:
- st.session_state['messages'] = []
- if len(st.session_state['messages']) == 0:
-        if '<image>' not in message:
-            message = '<image>\n' + message
- st.session_state['messages'].append([conv.roles[0], message])
- conv.messages = copy.deepcopy(st.session_state['messages'])
- # conv.append_message(conv.roles[0], message)
- conv.append_message(conv.roles[1], None)
- prompt = conv.get_prompt()
-
- if 'canvas_result' in st.session_state:
- objects = st.session_state['canvas_result'].get('objects', [])
- for i, obj in enumerate(objects):
- prompt = prompt.replace(f'[REGION-{i}]', obj['bbox_label'])
-
- headers = {"User-Agent": "LLaVA Client"}
- pload = {
- "prompt": prompt,
- "images": [st.session_state['image']],
- "max_new_tokens": max_new_tokens,
- "temperature": temperature,
- "stop": conv.sep2,
- }
- print(prompt)
- response = requests.post(st.session_state['model_addr'] + "/worker_generate_stream", headers=headers,
- json=pload, stream=True)
- result = ""
- for chunk in response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0"):
- if chunk:
- data_t = json.loads(chunk.decode("utf-8"))
- output = data_t["text"].split(conv.roles[1]+':')[-1]
- result = output
-
- # # show response
- # with chatbox:
- # st_message(result)
- st.session_state['messages'].append([conv.roles[1], result])
-
-
-# Customize Streamlit UI using CSS # background-color: #eb5424;
-st.markdown("""
-
-""", unsafe_allow_html=True)
-
-# ==================== Draw Bounding Boxes =================== #
-
-COLORS = sns.color_palette("tab10", n_colors=10).as_hex()
-random.Random(32).shuffle(COLORS)
-
-def update_annotation_states(canvas_result, ratio, img_size):
- for obj in canvas_result['objects']:
- top = obj["top"] * ratio
- left = obj["left"] * ratio
- width = obj["width"] * ratio
- height = obj["height"] * ratio
- obj['bbox_label'] = convert_bbox_to_region([left, top, width, height], img_size[0], img_size[1])
- st.session_state['canvas_result'] = canvas_result
- st.session_state['label_color'] = COLORS[len(st.session_state['canvas_result']['objects'])+1]
-
-def init_canvas():
- if 'canvas_result' not in st.session_state:
- st.session_state['canvas_result'] = None
- if 'label_color' not in st.session_state:
- st.session_state['label_color'] = COLORS[0]
-
-def input_message(msg):
- st.session_state['input_message'] = msg
-
-
-def get_objects():
- canvas_result = st.session_state.get('canvas_result', {})
- if canvas_result is not None:
- objects = canvas_result.get('objects', [])
- else:
- objects = []
- return objects
-
-def format_object_str(input_str):
- if 'canvas_result' in st.session_state:
- objects = st.session_state['canvas_result'].get('objects', [])
- for i, obj in enumerate(objects):
- input_str = input_str.replace(f'[REGION-{i}]', obj['bbox_label'])
- return input_str
-
-# select model
-model_list = get_model_list()
-with col_img:
- model_name = st.selectbox(
- 'Choose a model to chat with',
- model_list
- )
-change_model(model_name)
-
-css = ''
-# upload image
-with col_img:
- image = st.file_uploader("Chat with Image", type=["png", "jpg", "jpeg"], on_change=clear_messages)
- img_fn = image.name if image is not None else None
-if image:
- init_chat(encode_img(image))
- init_canvas()
-
- img = Image.open(image).convert('RGB')
-
- width = 700
- height = round(width * img.size[1] * 1.0 / img.size[0])
- ratio = img.size[0] / width
-
- with st.sidebar:
- max_new_tokens = st.number_input('max_new_tokens', min_value=1, max_value=1024, value=128)
- temperature = st.number_input('temperature', min_value=0.0, max_value=1.0, value=0.0)
- drawing_mode = st.selectbox(
- "Drawing tool:", ("rect", "point", "line", "circle"),
- )
- drawing_mode = "transform" if st.checkbox("Move ROIs", False) else drawing_mode
- stroke_width = st.slider("Stroke width: ", 1, 25, 3)
- # bg_color = st.color_picker("Background color: ", "#eee", key="bg_color")
-
- # save_file = st.text_input("Save File", value="saved.jsonl")
- # save_button = st.button(label='Save')
-
- # if save_button:
- # if img_fn is None:
- # st.warning("Please upload an image first!")
- # else:
- # conversations_to_save = [{'from': role, 'value': format_object_str(conv)} for (role, conv) in st.session_state['messages']]
- # model_name = st.session_state['model_name']
- # save_dict = {
- # 'image': img_fn,
- # 'conversations': conversations_to_save,
- # 'info': {
- # 'model_name': model_name
- # }
- # }
-
- # save_image_path = os.path.join(chat_config['save_path'], 'images')
- # os.makedirs(save_image_path, exist_ok=True)
-
- # img.save(os.path.join(save_image_path, img_fn))
-
- # chat_save_path = os.path.join(chat_config['save_path'], save_file)
- # with open(chat_save_path, 'a+') as fout:
- # fout.write(json.dumps(save_dict) + '\n')
-
- # st.success('Save successfully!')
-
- with col_img:
- canvas_result = st_canvas(
- fill_color=st.session_state['label_color'] + "77", # Fixed fill color with some opacity
- stroke_width=stroke_width,
- stroke_color=st.session_state['label_color'] + "77",
- background_color="#eee",
- background_image=Image.open(image) if image else None,
- update_streamlit=True,
- width=width,
- height=height,
- drawing_mode=drawing_mode,
- point_display_radius=3 if drawing_mode == 'point' else 0,
- key="canvas"
- )
-
- if canvas_result.json_data is not None:
- update_annotation_states(canvas_result.json_data, ratio, img.size)
-
- if st.session_state.get('submit_btn', False):
- send_one_message(st.session_state['input_message'], max_new_tokens=max_new_tokens, temperature=temperature)
- st.session_state['input_message'] = ""
-
- with input_area:
- col3, col4, col5 = st.columns([5, 1, 1])
-
- with col3:
- message = st.text_input('User', key="input_message")
-
- with col4:
- submit_btn = st.button(label='submit', key='submit_btn')
-
- components.html(
- """
-
- """,
- height=0,
- width=0,
- )
-
- with col5:
- clear_btn = st.button(label='clear', on_click=clear_messages)
-
-
- objects = get_objects()
-
- if len(objects):
- bbox_cols = st.columns([1 for _ in range(len(objects))])
-
- def on_bbox_button_click(str):
- def f():
- st.session_state['input_message'] += str
- return f
-
- for i, (obj, bbox_col) in enumerate(zip(objects, bbox_cols)):
- with bbox_col:
- st.button(label=f'Region-{i}', on_click=on_bbox_button_click(f'[REGION-{i}]'))
- # css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.e1g8pov65 > div.block-container.css-z5fcl4.e1g8pov64 > div:nth-child(1) > div > div.css-ocqkz7.esravye3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(2) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button {{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n'
- css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.ea3mdgi5 > div.block-container.css-awvpbp.ea3mdgi4 > div:nth-child(1) > div > div.css-ocqkz7.e1f1d6gn3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(3) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button {{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n'
- # css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.ea3mdgi5 > div.block-container.css-awvpbp.ea3mdgi4 > div:nth-child(1) > div > div.css-ocqkz7.e1f1d6gn3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(2) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button {{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n'
-
- for i, (role, msg) in enumerate(st.session_state['messages']):
- with chatbox:
- st_message(msg.lstrip('\n'), is_user=(role==default_conversation.roles[0]), key=f'{i}-{msg}')
-
-st.markdown(f'<style>{css}</style>', unsafe_allow_html=True)
-
-st.markdown(
-"""
---------------------
-### User Manual
-
-- **Step 1.** Upload an image here
-""")
-
-st.image("figures/upload_image.png")
-
-st.markdown(
-"""
-- **Step 2.** (Optional) You can draw bounding boxes on the image. Each box you draw creates a corresponding button of the same color.
-""")
-
-st.image("figures/bbox.png", width=512)
-
-st.markdown(
-"""
-- **Step 3.** Ask questions. Insert region tokens in the question by clicking on the `Region-i` button. For example:
-
-> What color is the dog in [REGION-0]?
-
-> What is the relationship between the dog in [REGION-0] and the dog in [REGION-1]?
-
-**Note**: This demo is in its experimental stage, and we are actively working on improvements.
-
-""")
\ No newline at end of file
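
As a worked example of the normalization in `convert_bbox_to_region` above (the exact region tag template is elided in this diff, so only the arithmetic is shown; the numbers are made up):

# Arbitrary example: a 200x150 box at (50, 100) inside a 500x400 image.
bbox_xywh = (50, 100, 200, 150)
image_width, image_height = 500, 400

x1, y1 = bbox_xywh[0], bbox_xywh[1]
x2, y2 = bbox_xywh[0] + bbox_xywh[2], bbox_xywh[1] + bbox_xywh[3]   # (250, 250)

# Normalize to [0, 1] against the image size, then scale to integers on a 0-1000 grid.
coords = [int(v / s * 1000) for v, s in zip((x1, y1, x2, y2),
                                            (image_width, image_height, image_width, image_height))]
print(coords)  # [100, 250, 500, 625]
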
diff --git a/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md b/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md
deleted file mode 100644
index b2d7f59e4797ad8ef5ff596d95fb0b3175dabfdc..0000000000000000000000000000000000000000
--- a/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Resnext101_32x16d_wsl
-emoji: 😻
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go
deleted file mode 100644
index 98bb1c1b69ef6b8e2527feacd43ecba1c5a28a02..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go and /dev/null differ
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py
deleted file mode 100644
index ed97eb5b413a7f8375c3faa2135b0e3f3add230a..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from contextlib import contextmanager
-
-@contextmanager
-def nullcontext(enter_result=None, **kwargs):
- yield enter_result
-
-try:
- from torch.cuda.amp import autocast, GradScaler, custom_fwd, custom_bwd
-except ImportError:
- print('[Warning] Library for automatic mixed precision is not found, AMP is disabled!!')
- GradScaler = nullcontext
- autocast = nullcontext
- custom_fwd = nullcontext
- custom_bwd = nullcontext
\ No newline at end of file
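
A rough usage sketch of the fallback above, assuming only torch: training code can call `autocast()` unconditionally and only use the scaler when real AMP was imported. The model, optimizer, and data are placeholders.

import torch
import torch.nn as nn

try:
    from torch.cuda.amp import autocast, GradScaler
    scaler = GradScaler(enabled=torch.cuda.is_available())
except ImportError:
    from contextlib import nullcontext as autocast   # same no-op idea as the fallback above
    scaler = None

model = nn.Linear(8, 2)                              # placeholder model; the data below is random
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))

with autocast():
    loss = nn.functional.cross_entropy(model(x), y)

if scaler is not None:
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
else:
    loss.backward()
    optimizer.step()
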
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py
deleted file mode 100644
index aa57bea59db95ddae35e0657f723ca3a29ee943b..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-
-import torch
-
-from .base import BaseQuantizer, QuantizedResult
-from .core_vq import ResidualVectorQuantization
-
-
-class ResidualVectorQuantizer(BaseQuantizer):
- """Residual Vector Quantizer.
-
- Args:
- dimension (int): Dimension of the codebooks.
- n_q (int): Number of residual vector quantizers used.
- q_dropout (bool): Random quantizer drop out at train time.
- bins (int): Codebook size.
- decay (float): Decay for exponential moving average over the codebooks.
- kmeans_init (bool): Whether to use kmeans to initialize the codebooks.
- kmeans_iters (int): Number of iterations used for kmeans initialization.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
-            a randomly selected vector from the current batch.
-        orthogonal_reg_weight (float): Orthogonal regularization weight.
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes.
-        orthogonal_reg_max_codes (optional int): Maximum number of codes to consider
-            for orthogonal regularization.
- """
- def __init__(
- self,
- dimension: int = 256,
- n_q: int = 8,
- q_dropout: bool = False,
- bins: int = 1024,
- decay: float = 0.99,
- kmeans_init: bool = True,
- kmeans_iters: int = 10,
- threshold_ema_dead_code: int = 2,
- orthogonal_reg_weight: float = 0.0,
- orthogonal_reg_active_codes_only: bool = False,
- orthogonal_reg_max_codes: tp.Optional[int] = None,
- ):
- super().__init__()
- self.max_n_q = n_q
- self.n_q = n_q
- self.q_dropout = q_dropout
- self.dimension = dimension
- self.bins = bins
- self.decay = decay
- self.kmeans_init = kmeans_init
- self.kmeans_iters = kmeans_iters
- self.threshold_ema_dead_code = threshold_ema_dead_code
- self.orthogonal_reg_weight = orthogonal_reg_weight
- self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only
- self.orthogonal_reg_max_codes = orthogonal_reg_max_codes
- self.vq = ResidualVectorQuantization(
- dim=self.dimension,
- codebook_size=self.bins,
- num_quantizers=self.n_q,
- decay=self.decay,
- kmeans_init=self.kmeans_init,
- kmeans_iters=self.kmeans_iters,
- threshold_ema_dead_code=self.threshold_ema_dead_code,
- orthogonal_reg_weight=self.orthogonal_reg_weight,
- orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only,
- orthogonal_reg_max_codes=self.orthogonal_reg_max_codes,
- channels_last=False
- )
-
- def forward(self, x: torch.Tensor, frame_rate: int):
- n_q = self.n_q
- if self.training and self.q_dropout:
- n_q = int(torch.randint(1, self.n_q + 1, (1,)).item())
- bw_per_q = math.log2(self.bins) * frame_rate / 1000
- quantized, codes, commit_loss = self.vq(x, n_q=n_q)
- codes = codes.transpose(0, 1)
- # codes is [B, K, T], with T frames, K nb of codebooks.
- bw = torch.tensor(n_q * bw_per_q).to(x)
- return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss))
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
-        """Encode a given input tensor into discrete codes.
-        The RVQ encode method sets the appropriate number of quantizers to use
-        and returns indices for each quantizer.
- """
- n_q = self.n_q
- codes = self.vq.encode(x, n_q=n_q)
- codes = codes.transpose(0, 1)
- # codes is [B, K, T], with T frames, K nb of codebooks.
- return codes
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation."""
- # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T].
- codes = codes.transpose(0, 1)
- quantized = self.vq.decode(codes)
- return quantized
-
- @property
- def total_codebooks(self):
- return self.max_n_q
-
- @property
- def num_codebooks(self):
- return self.n_q
-
- def set_num_codebooks(self, n: int):
- assert n > 0 and n <= self.max_n_q
- self.n_q = n
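
A quick numeric check of the bandwidth bookkeeping in `forward` above, using the default bins=1024 and n_q=8 with an assumed frame rate of 50 Hz (the frame rate is an example value, not a default of this class):

import math

bins, n_q, frame_rate = 1024, 8, 50            # 50 Hz is an assumed example frame rate
bits_per_code = math.log2(bins)                # 10 bits per codebook index
bw_per_q = bits_per_code * frame_rate / 1000   # 0.5 kbps contributed by one quantizer
total_bw = n_q * bw_per_q                      # 4.0 kbps for the full residual stack
print(bw_per_q, total_bw)                      # 0.5 4.0
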
diff --git a/spaces/PunGrumpy/text-generation/README.md b/spaces/PunGrumpy/text-generation/README.md
deleted file mode 100644
index cf6a8c1091f47d56b7dda8f7bb088cef8b963459..0000000000000000000000000000000000000000
--- a/spaces/PunGrumpy/text-generation/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Text Generation
-emoji: 🐨
-colorFrom: red
-colorTo: pink
-sdk: docker
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py
deleted file mode 100644
index 276aa79bb81356cdca73af0a5851b448707784a4..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py
+++ /dev/null
@@ -1,35 +0,0 @@
-"""A helper module that injects SecureTransport, on import.
-
-The import should be done as early as possible, to ensure all requests and
-sessions (or whatever) are created after injecting SecureTransport.
-
-Note that we only do the injection on macOS, when the linked OpenSSL is too
-old to handle TLSv1.2.
-"""
-
-import sys
-
-
-def inject_securetransport() -> None:
- # Only relevant on macOS
- if sys.platform != "darwin":
- return
-
- try:
- import ssl
- except ImportError:
- return
-
- # Checks for OpenSSL 1.0.1
- if ssl.OPENSSL_VERSION_NUMBER >= 0x1000100F:
- return
-
- try:
- from pip._vendor.urllib3.contrib import securetransport
- except (ImportError, OSError):
- return
-
- securetransport.inject_into_urllib3()
-
-
-inject_securetransport()
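
The hex constant above follows OpenSSL's packed MNNFFPPS version encoding, where 0x1000100F is the 1.0.1 release (the first with TLSv1.2 support). A small sketch of the same check as a standalone helper; the helper name is made up for illustration.

import ssl

def openssl_older_than_1_0_1(version_number=None):
    # OpenSSL packs its version as 0xMNNFFPPS (major, minor, fix, patch, status);
    # 0x1000100F corresponds to the 1.0.1 release.
    if version_number is None:
        version_number = ssl.OPENSSL_VERSION_NUMBER
    return version_number < 0x1000100F

print(ssl.OPENSSL_VERSION, hex(ssl.OPENSSL_VERSION_NUMBER), openssl_older_than_1_0_1())
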
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py
deleted file mode 100644
index 3aae09e863036b6185cf115047e441b15ea8c5e8..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py
+++ /dev/null
@@ -1,80 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .charsetprober import CharSetProber
-from .codingstatemachine import CodingStateMachine
-from .enums import MachineState, ProbingState
-from .mbcssm import UTF8_SM_MODEL
-
-
-class UTF8Prober(CharSetProber):
- ONE_CHAR_PROB = 0.5
-
- def __init__(self):
- super().__init__()
- self.coding_sm = CodingStateMachine(UTF8_SM_MODEL)
- self._num_mb_chars = None
- self.reset()
-
- def reset(self):
- super().reset()
- self.coding_sm.reset()
- self._num_mb_chars = 0
-
- @property
- def charset_name(self):
- return "utf-8"
-
- @property
- def language(self):
- return ""
-
- def feed(self, byte_str):
- for c in byte_str:
- coding_state = self.coding_sm.next_state(c)
- if coding_state == MachineState.ERROR:
- self._state = ProbingState.NOT_ME
- break
- if coding_state == MachineState.ITS_ME:
- self._state = ProbingState.FOUND_IT
- break
- if coding_state == MachineState.START:
- if self.coding_sm.get_current_charlen() >= 2:
- self._num_mb_chars += 1
-
- if self.state == ProbingState.DETECTING:
- if self.get_confidence() > self.SHORTCUT_THRESHOLD:
- self._state = ProbingState.FOUND_IT
-
- return self.state
-
- def get_confidence(self):
- unlike = 0.99
- if self._num_mb_chars < 6:
- unlike *= self.ONE_CHAR_PROB**self._num_mb_chars
- return 1.0 - unlike
- return unlike
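
The confidence formula above starts near zero, climbs toward 1 as multi-byte sequences are observed, and is pinned at 0.99 once six or more have been seen. A quick standalone tabulation of the same arithmetic:

ONE_CHAR_PROB = 0.5

def utf8_confidence(num_mb_chars):
    # Mirrors UTF8Prober.get_confidence above.
    unlike = 0.99
    if num_mb_chars < 6:
        return 1.0 - unlike * ONE_CHAR_PROB ** num_mb_chars
    return unlike

for n in range(8):
    print(n, utf8_confidence(n))
# Rises from 0.01 (no multi-byte sequences) to ~0.97 at five, then saturates at 0.99.
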
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py
deleted file mode 100644
index 15c7e68088f019802a59e7ae41cc1fe0c7f28f96..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...parallel import is_module_wrapper
-from ..hooks.hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class EMAHook(Hook):
- r"""Exponential Moving Average Hook.
-
-    Use an Exponential Moving Average on all parameters of the model during
-    training. Every parameter has an ema backup, which is updated by the
-    formula below. EMAHook takes priority over EvalHook and CheckpointSaverHook.
-
- .. math::
-
-        X_{\text{ema}, t+1} = (1 - \text{momentum}) \times
-            X_{\text{ema}, t} + \text{momentum} \times X_t
-
- Args:
- momentum (float): The momentum used for updating ema parameter.
- Defaults to 0.0002.
- interval (int): Update ema parameter every interval iteration.
- Defaults to 1.
-        warm_up (int): During the first warm_up steps, a smaller momentum may be
-            used so that the ema parameters update more slowly. Defaults to 100.
- resume_from (str): The checkpoint path. Defaults to None.
- """
-
- def __init__(self,
- momentum=0.0002,
- interval=1,
- warm_up=100,
- resume_from=None):
- assert isinstance(interval, int) and interval > 0
- self.warm_up = warm_up
- self.interval = interval
- assert momentum > 0 and momentum < 1
- self.momentum = momentum**interval
- self.checkpoint = resume_from
-
- def before_run(self, runner):
-        """Make resuming a model with its ema parameters more convenient.
-
-        Register each ema parameter as a ``named_buffer`` on the model.
- """
- model = runner.model
- if is_module_wrapper(model):
- model = model.module
- self.param_ema_buffer = {}
- self.model_parameters = dict(model.named_parameters(recurse=True))
- for name, value in self.model_parameters.items():
- # "." is not allowed in module's buffer name
- buffer_name = f"ema_{name.replace('.', '_')}"
- self.param_ema_buffer[name] = buffer_name
- model.register_buffer(buffer_name, value.data.clone())
- self.model_buffers = dict(model.named_buffers(recurse=True))
- if self.checkpoint is not None:
- runner.resume(self.checkpoint)
-
- def after_train_iter(self, runner):
- """Update ema parameter every self.interval iterations."""
- curr_step = runner.iter
-        # We warm up the momentum considering the instability at the beginning of training
- momentum = min(self.momentum,
- (1 + curr_step) / (self.warm_up + curr_step))
- if curr_step % self.interval != 0:
- return
- for name, parameter in self.model_parameters.items():
- buffer_name = self.param_ema_buffer[name]
- buffer_parameter = self.model_buffers[buffer_name]
- buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data)
-
- def after_train_epoch(self, runner):
- """We load parameter values from ema backup to model before the
- EvalHook."""
- self._swap_ema_parameters()
-
- def before_train_epoch(self, runner):
- """We recover model's parameter from ema backup after last epoch's
- EvalHook."""
- self._swap_ema_parameters()
-
- def _swap_ema_parameters(self):
- """Swap the parameter of model with parameter in ema_buffer."""
- for name, value in self.model_parameters.items():
- temp = value.data.clone()
- ema_buffer = self.model_buffers[self.param_ema_buffer[name]]
- value.data.copy_(ema_buffer.data)
- ema_buffer.data.copy_(temp)
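
A quick numeric sketch of the momentum warm-up in `after_train_iter` above. With the default momentum of 0.0002 the `min` never bites, since 0.0002 < 1/warm_up; the larger momentum used here is an illustrative value chosen so the clamp is visible.

momentum, warm_up = 0.05, 100      # 0.05 is illustrative; the hook's default is 0.0002

for step in (0, 1, 4, 5, 100):
    eff = min(momentum, (1 + step) / (warm_up + step))
    print(step, round(eff, 4))     # 0.01, 0.0198, 0.0481, 0.05, 0.05

# One EMA step with the clamped momentum, matching the buffer update above:
ema, x = 1.0, 0.0
ema = (1 - 0.05) * ema + 0.05 * x  # -> 0.95
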
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py
deleted file mode 100644
index f004dd95d97df16167f932587b3ce73b05b04a37..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from .anchor_free_head import AnchorFreeHead
-from .anchor_head import AnchorHead
-from .atss_head import ATSSHead
-from .cascade_rpn_head import CascadeRPNHead, StageCascadeRPNHead
-from .centripetal_head import CentripetalHead
-from .corner_head import CornerHead
-from .embedding_rpn_head import EmbeddingRPNHead
-from .fcos_head import FCOSHead
-from .fovea_head import FoveaHead
-from .free_anchor_retina_head import FreeAnchorRetinaHead
-from .fsaf_head import FSAFHead
-from .ga_retina_head import GARetinaHead
-from .ga_rpn_head import GARPNHead
-from .gfl_head import GFLHead
-from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead
-from .ld_head import LDHead
-from .nasfcos_head import NASFCOSHead
-from .paa_head import PAAHead
-from .pisa_retinanet_head import PISARetinaHead
-from .pisa_ssd_head import PISASSDHead
-from .reppoints_head import RepPointsHead
-from .retina_head import RetinaHead
-from .retina_sepbn_head import RetinaSepBNHead
-from .rpn_head import RPNHead
-from .sabl_retina_head import SABLRetinaHead
-from .ssd_head import SSDHead
-from .transformer_head import TransformerHead
-from .vfnet_head import VFNetHead
-from .yolact_head import YOLACTHead, YOLACTProtonet, YOLACTSegmHead
-from .yolo_head import YOLOV3Head
-
-__all__ = [
- 'AnchorFreeHead', 'AnchorHead', 'GuidedAnchorHead', 'FeatureAdaption',
- 'RPNHead', 'GARPNHead', 'RetinaHead', 'RetinaSepBNHead', 'GARetinaHead',
- 'SSDHead', 'FCOSHead', 'RepPointsHead', 'FoveaHead',
- 'FreeAnchorRetinaHead', 'ATSSHead', 'FSAFHead', 'NASFCOSHead',
- 'PISARetinaHead', 'PISASSDHead', 'GFLHead', 'CornerHead', 'YOLACTHead',
- 'YOLACTSegmHead', 'YOLACTProtonet', 'YOLOV3Head', 'PAAHead',
- 'SABLRetinaHead', 'CentripetalHead', 'VFNetHead', 'TransformerHead',
- 'StageCascadeRPNHead', 'CascadeRPNHead', 'EmbeddingRPNHead', 'LDHead'
-]
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py
deleted file mode 100644
index d7c0b50f29e882aacb5158b33ead3d4566d0ce0b..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmcv.runner import auto_fp16
-
-from ..builder import NECKS
-from .fpn import FPN
-
-
-@NECKS.register_module()
-class PAFPN(FPN):
- """Path Aggregation Network for Instance Segmentation.
-
-    This is an implementation of the `PAFPN in Path Aggregation Network
-    <https://arxiv.org/abs/1803.01534>`_.
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool): Whether to add conv layers on top of the
- original feature maps. Default: False.
- extra_convs_on_inputs (bool): Whether to apply extra conv on
- the original feature from the backbone. Default: False.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=True,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None):
- super(PAFPN,
- self).__init__(in_channels, out_channels, num_outs, start_level,
- end_level, add_extra_convs, extra_convs_on_inputs,
- relu_before_extra_convs, no_norm_on_lateral,
- conv_cfg, norm_cfg, act_cfg)
- # add extra bottom up pathway
- self.downsample_convs = nn.ModuleList()
- self.pafpn_convs = nn.ModuleList()
- for i in range(self.start_level + 1, self.backbone_end_level):
- d_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- pafpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.downsample_convs.append(d_conv)
- self.pafpn_convs.append(pafpn_conv)
-
- @auto_fp16()
- def forward(self, inputs):
- """Forward function."""
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, mode='nearest')
-
- # build outputs
- # part 1: from original levels
- inter_outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
-
- # part 2: add bottom-up path
- for i in range(0, used_backbone_levels - 1):
- inter_outs[i + 1] += self.downsample_convs[i](inter_outs[i])
-
- outs = []
- outs.append(inter_outs[0])
- outs.extend([
- self.pafpn_convs[i - 1](inter_outs[i])
- for i in range(1, used_backbone_levels)
- ])
-
- # part 3: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- orig = inputs[self.backbone_end_level - 1]
- outs.append(self.fpn_convs[used_backbone_levels](orig))
- elif self.add_extra_convs == 'on_lateral':
- outs.append(self.fpn_convs[used_backbone_levels](
- laterals[-1]))
- elif self.add_extra_convs == 'on_output':
- outs.append(self.fpn_convs[used_backbone_levels](outs[-1]))
- else:
- raise NotImplementedError
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
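
A stripped-down sketch of the two-pass aggregation in `forward` above, using plain torch: bare Conv2d layers stand in for the ConvModule blocks, and the channel counts and spatial sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

in_channels, out_channels = [64, 128, 256], 32
feats = [torch.randn(1, c, s, s) for c, s in zip(in_channels, (64, 32, 16))]

lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels])
down = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=1)
                      for _ in in_channels[1:]])

# Top-down pass: upsample the coarser level and add it into the finer one.
laterals = [conv(f) for conv, f in zip(lateral, feats)]
for i in range(len(laterals) - 1, 0, -1):
    laterals[i - 1] = laterals[i - 1] + F.interpolate(
        laterals[i], size=laterals[i - 1].shape[2:], mode="nearest")
inter = [conv(x) for conv, x in zip(smooth, laterals)]

# Bottom-up pass (the extra PAFPN path): downsample the finer level and add it into the coarser one.
for i in range(len(inter) - 1):
    inter[i + 1] = inter[i + 1] + down[i](inter[i])

print([tuple(t.shape) for t in inter])  # [(1, 32, 64, 64), (1, 32, 32, 32), (1, 32, 16, 16)]
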
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py
deleted file mode 100644
index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py
+++ /dev/null
@@ -1,422 +0,0 @@
-# --------------------------------------------------------
-# UniFormer
-# Copyright (c) 2022 SenseTime X-Lab
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Kunchang Li
-# --------------------------------------------------------
-
-from collections import OrderedDict
-import math
-
-from functools import partial
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from annotator.uniformer.mmcv_custom import load_checkpoint
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CMlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
- self.act = act_layer()
- self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CBlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = nn.BatchNorm2d(dim)
- self.conv1 = nn.Conv2d(dim, dim, 1)
- self.conv2 = nn.Conv2d(dim, dim, 1)
- self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = nn.BatchNorm2d(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x)))))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SABlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- B, N, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.transpose(1, 2).reshape(B, N, H, W)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class SABlock_Windows(nn.Module):
- def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.window_size=window_size
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x.permute(0, 2, 3, 1)
- B, H, W, C = x.shape
- shortcut = x
- x = self.norm1(x)
-
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
-        # remove the padding added before window partition
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.permute(0, 3, 1, 2).reshape(B, C, H, W)
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
- self.norm = nn.LayerNorm(embed_dim)
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x):
- B, _, H, W = x.shape
- x = self.proj(x)
- B, _, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
- return x
-
-
-@BACKBONES.register_module()
-class UniFormer(nn.Module):
- """ Vision Transformer
- A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
- https://arxiv.org/abs/2010.11929
- """
- def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512],
- head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6),
- pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0],
- windows=False, hybrid=False, window_size=14):
- """
- Args:
-            layers (list): number of blocks in each stage
- img_size (int, tuple): input image size
- in_chans (int): number of input channels
- num_classes (int): number of classes for classification head
- embed_dim (int): embedding dimension
- head_dim (int): dimension of attention heads
- mlp_ratio (int): ratio of mlp hidden dim to embedding dim
- qkv_bias (bool): enable bias for qkv if True
- qk_scale (float): override default qk scale of head_dim ** -0.5 if set
- representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
- drop_rate (float): dropout rate
- attn_drop_rate (float): attention dropout rate
- drop_path_rate (float): stochastic depth rate
- norm_layer (nn.Module): normalization layer
- pretrained_path (str): path of pretrained model
- use_checkpoint (bool): whether use checkpoint
- checkpoint_num (list): index for using checkpoint in every stage
- windows (bool): whether use window MHRA
- hybrid (bool): whether use hybrid MHRA
- window_size (int): size of window (>14)
- """
- super().__init__()
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.checkpoint_num = checkpoint_num
- self.windows = windows
- print(f'Use Checkpoint: {self.use_checkpoint}')
- print(f'Checkpoint Number: {self.checkpoint_num}')
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
-
- self.patch_embed1 = PatchEmbed(
- img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0])
- self.patch_embed2 = PatchEmbed(
- img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
- self.patch_embed3 = PatchEmbed(
- img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2])
- self.patch_embed4 = PatchEmbed(
- img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3])
-
- self.pos_drop = nn.Dropout(p=drop_rate)
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule
- num_heads = [dim // head_dim for dim in embed_dim]
- self.blocks1 = nn.ModuleList([
- CBlock(
- dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
- for i in range(layers[0])])
- self.norm1=norm_layer(embed_dim[0])
- self.blocks2 = nn.ModuleList([
- CBlock(
- dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer)
- for i in range(layers[1])])
- self.norm2 = norm_layer(embed_dim[1])
- if self.windows:
- print('Use local window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- elif hybrid:
- print('Use hybrid window for blocks in stage3')
- block3 = []
- for i in range(layers[2]):
- if (i + 1) % 4 == 0:
- block3.append(SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- else:
- block3.append(SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- self.blocks3 = nn.ModuleList(block3)
- else:
- print('Use global window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- self.norm3 = norm_layer(embed_dim[2])
- self.blocks4 = nn.ModuleList([
- SABlock(
- dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer)
- for i in range(layers[3])])
- self.norm4 = norm_layer(embed_dim[3])
-
- # Representation layer
- if representation_size:
- self.num_features = representation_size
- self.pre_logits = nn.Sequential(OrderedDict([
- ('fc', nn.Linear(embed_dim[-1], representation_size)), # embed_dim is a per-stage list, so use the last stage's width
- ('act', nn.Tanh())
- ]))
- else:
- self.pre_logits = nn.Identity()
-
- self.apply(self._init_weights)
- self.init_weights(pretrained=pretrained_path)
-
- def init_weights(self, pretrained):
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger)
- print(f'Load pretrained model from {pretrained}')
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
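- # Each stage: run its blocks (checkpointing the first checkpoint_num[k] of them when use_checkpoint is set),
- # apply the stage LayerNorm in NHWC, and store an NCHW feature map, so one output per stage is returned.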
- out = []
- x = self.patch_embed1(x)
- x = self.pos_drop(x)
- for i, blk in enumerate(self.blocks1):
- if self.use_checkpoint and i < self.checkpoint_num[0]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm1(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed2(x)
- for i, blk in enumerate(self.blocks2):
- if self.use_checkpoint and i < self.checkpoint_num[1]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm2(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed3(x)
- for i, blk in enumerate(self.blocks3):
- if self.use_checkpoint and i < self.checkpoint_num[2]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm3(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed4(x)
- for i, blk in enumerate(self.blocks4):
- if self.use_checkpoint and i < self.checkpoint_num[3]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm4(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- return tuple(out)
-
- def forward(self, x):
- x = self.forward_features(x)
- return x
diff --git a/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md b/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md
deleted file mode 100644
index e49db46d27d14c1512535a66844b7f44667fef13..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Neural Style Transfer Image Stylization
-emoji: 🌍
-colorFrom: red
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Salesforce/EDICT/my_diffusers/commands/env.py b/spaces/Salesforce/EDICT/my_diffusers/commands/env.py
deleted file mode 100644
index 81a878bff6688d3c510b53c60ac9d0e51e4aebcc..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/commands/env.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import platform
-from argparse import ArgumentParser
-
-import huggingface_hub
-
-from .. import __version__ as version
-from ..utils import is_torch_available, is_transformers_available
-from . import BaseDiffusersCLICommand
-
-
-def info_command_factory(_):
- return EnvironmentCommand()
-
-
-class EnvironmentCommand(BaseDiffusersCLICommand):
- @staticmethod
- def register_subcommand(parser: ArgumentParser):
- download_parser = parser.add_parser("env")
- download_parser.set_defaults(func=info_command_factory)
-
- def run(self):
- hub_version = huggingface_hub.__version__
-
- pt_version = "not installed"
- pt_cuda_available = "NA"
- if is_torch_available():
- import torch
-
- pt_version = torch.__version__
- pt_cuda_available = torch.cuda.is_available()
-
- transformers_version = "not installed"
- if is_transformers_available():
- import transformers
-
- transformers_version = transformers.__version__
-
- info = {
- "`diffusers` version": version,
- "Platform": platform.platform(),
- "Python version": platform.python_version(),
- "PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})",
- "Huggingface_hub version": hub_version,
- "Transformers version": transformers_version,
- "Using GPU in script?": "",
- "Using distributed or parallel set-up in script?": "",
- }
-
- print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n")
- print(self.format_dict(info))
-
- return info
-
- @staticmethod
- def format_dict(d):
- return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n"
diff --git a/spaces/Sandiago21/speech-to-speech-translation-german/app.py b/spaces/Sandiago21/speech-to-speech-translation-german/app.py
deleted file mode 100644
index e0c7bc8ac90450eeea216bb5a3333ffe10be347c..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/speech-to-speech-translation-german/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from datasets import load_dataset
-from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline
-
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-# load speech translation checkpoint
-asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2", device=device)
-
-# load text-to-speech checkpoint and speaker embeddings
-model_id = "Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german" # update with your model id
-# pipe = pipeline("automatic-speech-recognition", model=model_id)
-model = SpeechT5ForTextToSpeech.from_pretrained(model_id).to(device)
-processor = SpeechT5Processor.from_pretrained(model_id)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(device)
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0)
-
-replacements = [
- ("Ä", "E"),
- ("Æ", "E"),
- ("Ç", "C"),
- ("É", "E"),
- ("Í", "I"),
- ("Ó", "O"),
- ("Ö", "E"),
- ("Ü", "Y"),
- ("ß", "S"),
- ("à", "a"),
- ("á", "a"),
- ("ã", "a"),
- ("ä", "e"),
- ("å", "a"),
- ("ë", "e"),
- ("í", "i"),
- ("ï", "i"),
- ("ð", "o"),
- ("ñ", "n"),
- ("ò", "o"),
- ("ó", "o"),
- ("ô", "o"),
- ("ö", "u"),
- ("ú", "u"),
- ("ü", "y"),
- ("ý", "y"),
- ("Ā", "A"),
- ("ā", "a"),
- ("ă", "a"),
- ("ą", "a"),
- ("ć", "c"),
- ("Č", "C"),
- ("č", "c"),
- ("ď", "d"),
- ("Đ", "D"),
- ("ę", "e"),
- ("ě", "e"),
- ("ğ", "g"),
- ("İ", "I"),
- ("О", "O"),
- ("Ł", "L"),
- ("ń", "n"),
- ("ň", "n"),
- ("Ō", "O"),
- ("ō", "o"),
- ("ő", "o"),
- ("ř", "r"),
- ("Ś", "S"),
- ("ś", "s"),
- ("Ş", "S"),
- ("ş", "s"),
- ("Š", "S"),
- ("š", "s"),
- ("ū", "u"),
- ("ź", "z"),
- ("Ż", "Z"),
- ("Ž", "Z"),
- ("ǐ", "i"),
- ("ǐ", "i"),
- ("ș", "s"),
- ("ț", "t"),
-]
-
-
-def cleanup_text(text):
- for src, dst in replacements:
- text = text.replace(src, dst)
- return text
-
-
-def transcribe_to_german(audio):
- outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "transcribe", "language": "german"})
- return outputs["text"]
-
-
-def synthesise_from_german(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
- return speech.cpu()
-
-
-def speech_to_speech_translation(audio):
- translated_text = transcribe_to_german(audio)
- synthesised_speech = synthesise_from_german(translated_text)
- synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16)
- return ((16000, synthesised_speech), translated_text)
-
-
-title = "Cascaded STST"
-description = """
-Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in German. The demo uses OpenAI's [Whisper Large v2](https://huggingface.co/openai/whisper-large-v2) model for speech translation, and the [Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german](https://huggingface.co/Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german) checkpoint for text-to-speech, which is based on Microsoft's
-[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model, fine-tuned on a German audio dataset.
-
-"""
-
-demo = gr.Blocks()
-
-mic_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="microphone", type="filepath"),
- outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()],
- title=title,
- description=description,
-)
-
-file_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="upload", type="filepath"),
- outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()],
- examples=[["./example.wav"]],
- title=title,
- description=description,
-)
-
-with demo:
- gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"])
-
-demo.launch()
-
diff --git a/spaces/Sapiensia/MakerDiffusion/README.md b/spaces/Sapiensia/MakerDiffusion/README.md
deleted file mode 100644
index 73a674b4248e0f183def5706750b386cbc39e86b..0000000000000000000000000000000000000000
--- a/spaces/Sapiensia/MakerDiffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Basilisk-AI Maker Diffusion V-4.0
-emoji: 👁
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md
deleted file mode 100644
index 72da57ca6e45604280b8adddcd265749b89d543e..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md
+++ /dev/null
@@ -1,36 +0,0 @@
-## Mycotoxicosis
-
-**Information:** Mycotoxicosis is a disease caused by the consumption of feed or forage contaminated with mycotoxins. Mycotoxins are poisonous substances produced by fungi, which can grow on a variety of crops, including corn, wheat, and hay.
-
-**Symptoms:**
-
-* The symptoms of mycotoxicosis can vary depending on the type of mycotoxin ingested, the amount ingested, and the animal's individual susceptibility. Some common symptoms include:
- * Loss of appetite
- * Weight loss
- * Diarrhea
- * Vomiting
- * Jaundice
- * Impaired reproduction
- * Death
-
-**Remedies:**
-
-* There is no specific treatment for mycotoxicosis. Treatment is usually supportive and may include:
- * Administering activated charcoal to absorb the mycotoxin
- * Providing fluids and electrolytes
- * Treating other underlying conditions
-
-**Causes:**
-
-* Mycotoxicosis is caused by the consumption of feed or forage contaminated with mycotoxins. Mycotoxins are produced by fungi, which can grow on a variety of crops, including corn, wheat, and hay.
-* Mycotoxins can be produced in the field, during storage, or during processing of feed and forage.
-* The risk of mycotoxicosis is increased in warm, humid conditions.
-
-**Prevention:**
-
-* The best way to prevent mycotoxicosis is to:
- * Feed cattle a balanced diet
- * Store feed and forage properly
- * Test feed and forage for mycotoxins
- * Use mycotoxin binders to reduce the absorption of mycotoxins
-
diff --git a/spaces/ServerX/PorcoDiaz/train/data_utils.py b/spaces/ServerX/PorcoDiaz/train/data_utils.py
deleted file mode 100644
index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/train/data_utils.py
+++ /dev/null
@@ -1,512 +0,0 @@
-import os, traceback
-import numpy as np
-import torch
-import torch.utils.data
-
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-
-class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
- """
- 1) loads lists of audio files and their per-utterance feature/label paths
- 2) loads phone features, pitch labels and the speaker id from disk
- 3) computes (and caches) spectrograms from the audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
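- # Estimating length from the file size avoids decoding every wav just to bucket it by duration.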
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- pitch = audiopath_and_text[2]
- pitchf = audiopath_and_text[3]
- dv = audiopath_and_text[4]
-
- phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- # print(123,phone.shape,pitch.shape,spec.shape)
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- # amor
- len_wav = len_min * self.hop_length
-
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
-
- phone = phone[:len_min, :]
- pitch = pitch[:len_min]
- pitchf = pitchf[:len_min]
-
- return (spec, wav, phone, pitch, pitchf, dv)
-
- def get_labels(self, phone, pitch, pitchf):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- pitch = np.load(pitch)
- pitchf = np.load(pitchf)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- # print(234,phone.shape,pitch.shape)
- phone = phone[:n_num, :]
- pitch = pitch[:n_num]
- pitchf = pitchf[:n_num]
- phone = torch.FloatTensor(phone)
- pitch = torch.LongTensor(pitch)
- pitchf = torch.FloatTensor(pitchf)
- return phone, pitch, pitchf
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollateMultiNSFsid:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text and aduio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- ) # (spec, wav, phone, pitch)
- pitch_padded = torch.LongTensor(len(batch), max_phone_len)
- pitchf_padded = torch.FloatTensor(len(batch), max_phone_len)
- phone_padded.zero_()
- pitch_padded.zero_()
- pitchf_padded.zero_()
- # dv = torch.FloatTensor(len(batch), 256)#gin=256
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- pitch = row[3]
- pitch_padded[i, : pitch.size(0)] = pitch
- pitchf = row[4]
- pitchf_padded[i, : pitchf.size(0)] = pitchf
-
- # dv[i] = row[5]
- sid[i] = row[5]
-
- return (
- phone_padded,
- phone_lengths,
- pitch_padded,
- pitchf_padded,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- # dv
- sid,
- )
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads lists of audio files and their per-utterance feature paths
- 2) loads phone features and the speaker id from disk
- 3) computes (and caches) spectrograms from the audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- dv = audiopath_and_text[2]
-
- phone = self.get_labels(phone)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- len_wav = len_min * self.hop_length
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
- phone = phone[:len_min, :]
- return (spec, wav, phone, dv)
-
- def get_labels(self, phone):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- phone = phone[:n_num, :]
- phone = torch.FloatTensor(phone)
- return phone
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text and aduio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- )
- phone_padded.zero_()
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- sid[i] = row[3]
-
- return (
- phone_padded,
- phone_lengths,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- sid,
- )
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included in either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
-
- def __init__(
- self,
- dataset,
- batch_size,
- boundaries,
- num_replicas=None,
- rank=None,
- shuffle=True,
- ):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, -1, -1): #
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (
- total_batch_size - (len_bucket % total_batch_size)
- ) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = (
- ids_bucket
- + ids_bucket * (rem // len_bucket)
- + ids_bucket[: (rem % len_bucket)]
- )
-
- # subsample
- ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [
- bucket[idx]
- for idx in ids_bucket[
- j * self.batch_size : (j + 1) * self.batch_size
- ]
- ]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
diff --git a/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py b/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py
deleted file mode 100644
index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000
--- a/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py
+++ /dev/null
@@ -1,388 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
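- """WaveNet-style block: dilated 1D convolutions with gated (tanh * sigmoid) activations, residual/skip connections, and optional global conditioning through gin_channels."""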
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
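- """Affine coupling layer: the first half of the channels drives a WN encoder that predicts a shift m (and log-scale logs unless mean_only) applied to the second half; reverse=True inverts the transform exactly."""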
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
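- """Coupling flow whose elementwise transform is a piecewise rational-quadratic spline with num_bins bins and linear tails; half of the channels parameterize the spline applied to the other half."""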
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py
deleted file mode 100644
index c4539d1fc7e330bfcde2086562c10f0f03161402..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py
+++ /dev/null
@@ -1,141 +0,0 @@
-"""Tests for tokenutil"""
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-import pytest
-
-from IPython.utils.tokenutil import token_at_cursor, line_at_cursor
-
-def expect_token(expected, cell, cursor_pos):
- token = token_at_cursor(cell, cursor_pos)
- offset = 0
- for line in cell.splitlines():
- if offset + len(line) >= cursor_pos:
- break
- else:
- offset += len(line)+1
- column = cursor_pos - offset
- line_with_cursor = "%s|%s" % (line[:column], line[column:])
- assert token == expected, "Expected %r, got %r in: %r (pos %i)" % (
- expected,
- token,
- line_with_cursor,
- cursor_pos,
- )
-
-
-def test_simple():
- cell = "foo"
- for i in range(len(cell)):
- expect_token("foo", cell, i)
-
-def test_function():
- cell = "foo(a=5, b='10')"
- expected = 'foo'
- # up to `foo(|a=`
- for i in range(cell.find('a=') + 1):
- expect_token("foo", cell, i)
- # find foo after `=`
- for i in [cell.find('=') + 1, cell.rfind('=') + 1]:
- expect_token("foo", cell, i)
- # in between `5,|` and `|b=`
- for i in range(cell.find(','), cell.find('b=')):
- expect_token("foo", cell, i)
-
-def test_multiline():
- cell = '\n'.join([
- 'a = 5',
- 'b = hello("string", there)'
- ])
- expected = 'hello'
- start = cell.index(expected) + 1
- for i in range(start, start + len(expected)):
- expect_token(expected, cell, i)
- expected = 'hello'
- start = cell.index(expected) + 1
- for i in range(start, start + len(expected)):
- expect_token(expected, cell, i)
-
-def test_multiline_token():
- cell = '\n'.join([
- '"""\n\nxxxxxxxxxx\n\n"""',
- '5, """',
- 'docstring',
- 'multiline token',
- '""", [',
- '2, 3, "complicated"]',
- 'b = hello("string", there)'
- ])
- expected = 'hello'
- start = cell.index(expected) + 1
- for i in range(start, start + len(expected)):
- expect_token(expected, cell, i)
- expected = 'hello'
- start = cell.index(expected) + 1
- for i in range(start, start + len(expected)):
- expect_token(expected, cell, i)
-
-def test_nested_call():
- cell = "foo(bar(a=5), b=10)"
- expected = 'foo'
- start = cell.index('bar') + 1
- for i in range(start, start + 3):
- expect_token(expected, cell, i)
- expected = 'bar'
- start = cell.index('a=')
- for i in range(start, start + 3):
- expect_token(expected, cell, i)
- expected = 'foo'
- start = cell.index(')') + 1
- for i in range(start, len(cell)-1):
- expect_token(expected, cell, i)
-
-def test_attrs():
- cell = "a = obj.attr.subattr"
- expected = 'obj'
- idx = cell.find('obj') + 1
- for i in range(idx, idx + 3):
- expect_token(expected, cell, i)
- idx = cell.find('.attr') + 2
- expected = 'obj.attr'
- for i in range(idx, idx + 4):
- expect_token(expected, cell, i)
- idx = cell.find('.subattr') + 2
- expected = 'obj.attr.subattr'
- for i in range(idx, len(cell)):
- expect_token(expected, cell, i)
-
-def test_line_at_cursor():
- cell = ""
- (line, offset) = line_at_cursor(cell, cursor_pos=11)
- assert line == ""
- assert offset == 0
-
- # The position after a newline should be the start of the following line.
- cell = "One\nTwo\n"
- (line, offset) = line_at_cursor(cell, cursor_pos=4)
- assert line == "Two\n"
- assert offset == 4
-
- # The end of a cell should be on the last line
- cell = "pri\npri"
- (line, offset) = line_at_cursor(cell, cursor_pos=7)
- assert line == "pri"
- assert offset == 4
-
-
-@pytest.mark.parametrize(
- "c, token",
- zip(
- list(range(16, 22)) + list(range(22, 28)),
- ["int"] * (22 - 16) + ["map"] * (28 - 22),
- ),
-)
-def test_multiline_statement(c, token):
- cell = """a = (1,
- 3)
-
-int()
-map()
-"""
- expect_token(token, cell, c)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py
deleted file mode 100644
index 690d64e63bc40a6006318cd70535017d41643def..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# ruff: noqa
-from .v5 import *
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql
deleted file mode 100644
index a214bae8d5b0d6482fedd18265d4dfc756d47485..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-CREATE TABLE table1 (
- name TEXT PRIMARY KEY
-);
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py
deleted file mode 100644
index b082e033d49f451f806eae9887026914a9e74413..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py
+++ /dev/null
@@ -1,545 +0,0 @@
-import hashlib
-import hypothesis
-import hypothesis.strategies as st
-from typing import Any, Optional, List, Dict, Union
-from typing_extensions import TypedDict
-import numpy as np
-import numpy.typing as npt
-import chromadb.api.types as types
-import re
-from hypothesis.strategies._internal.strategies import SearchStrategy
-from hypothesis.errors import InvalidDefinition
-from hypothesis.stateful import RuleBasedStateMachine
-
-from dataclasses import dataclass
-
-from chromadb.api.types import Documents, Embeddings, Metadata
-
-# Set the random seed for reproducibility
-np.random.seed(0) # unnecessary, hypothesis does this for us
-
-# See Hypothesis documentation for creating strategies at
-# https://hypothesis.readthedocs.io/en/latest/data.html
-
-# NOTE: Because these strategies are used in state machines, we need to
-# work around an issue with state machines, in which strategies that frequently
-# are marked as invalid (i.e. through the use of `assume` or `.filter`) can cause the
-# state machine tests to fail with a hypothesis.errors.Unsatisfiable.
-
-# Ultimately this is because the entire state machine is run as a single Hypothesis
-# example, which ends up drawing from the same strategies an enormous number of times.
-# Whenever a strategy marks itself as invalid, Hypothesis tries to start the entire
-# state machine run over. See https://github.com/HypothesisWorks/hypothesis/issues/3618
-
-# Because strategy generation is all interrelated, seemingly small changes (especially
-# ones called early in a test) can have an outsized effect. Generating lists with
-# unique=True, or dictionaries with a min size seems especially bad.
-
-# Please make changes to these strategies incrementally, testing to make sure they don't
-# start generating unsatisfiable examples.
-
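-# Shared HNSW index parameters applied to collections generated with with_hnsw_params=True.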
-test_hnsw_config = {
- "hnsw:construction_ef": 128,
- "hnsw:search_ef": 128,
- "hnsw:M": 128,
-}
-
-
-class RecordSet(TypedDict):
- """
- A generated set of embeddings, ids, metadatas, and documents that
- represent what a user would pass to the API.
- """
-
- ids: Union[types.ID, List[types.ID]]
- embeddings: Optional[Union[types.Embeddings, types.Embedding]]
- metadatas: Optional[Union[List[types.Metadata], types.Metadata]]
- documents: Optional[Union[List[types.Document], types.Document]]
-
-
-class NormalizedRecordSet(TypedDict):
- """
- A RecordSet, with all fields normalized to lists.
- """
-
- ids: List[types.ID]
- embeddings: Optional[types.Embeddings]
- metadatas: Optional[List[types.Metadata]]
- documents: Optional[List[types.Document]]
-
-
-class StateMachineRecordSet(TypedDict):
- """
- Represents the internal state of a state machine in hypothesis tests.
- """
-
- ids: List[types.ID]
- embeddings: types.Embeddings
- metadatas: List[Optional[types.Metadata]]
- documents: List[Optional[types.Document]]
-
-
-class Record(TypedDict):
- """
- A single generated record.
- """
-
- id: types.ID
- embedding: Optional[types.Embedding]
- metadata: Optional[types.Metadata]
- document: Optional[types.Document]
-
-
-# TODO: support arbitrary text everywhere so we don't SQL-inject ourselves.
-# TODO: support empty strings everywhere
-sql_alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_"
-safe_text = st.text(alphabet=sql_alphabet, min_size=1)
-
-# Workaround for FastAPI json encoding peculiarities
-# https://github.com/tiangolo/fastapi/blob/8ac8d70d52bb0dd9eb55ba4e22d3e383943da05c/fastapi/encoders.py#L104
-safe_text = safe_text.filter(lambda s: not s.startswith("_sa"))
-
-safe_integers = st.integers(
- min_value=-(2**31), max_value=2**31 - 1
-) # TODO: handle longs
-safe_floats = st.floats(
- allow_infinity=False,
- allow_nan=False,
- allow_subnormal=False,
- min_value=-1e6,
- max_value=1e6,
-) # TODO: handle infinity and NAN
-
-safe_values: List[SearchStrategy[Union[int, float, str]]] = [
- safe_text,
- safe_integers,
- safe_floats,
-]
-
-
-def one_or_both(
- strategy_a: st.SearchStrategy[Any], strategy_b: st.SearchStrategy[Any]
-) -> st.SearchStrategy[Any]:
- return st.one_of(
- st.tuples(strategy_a, strategy_b),
- st.tuples(strategy_a, st.none()),
- st.tuples(st.none(), strategy_b),
- )
-
-
-# Temporarily generate only these to avoid SQL formatting issues.
-legal_id_characters = (
- "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_./+"
-)
-
-float_types = [np.float16, np.float32, np.float64]
-int_types = [np.int16, np.int32, np.int64] # TODO: handle int types
-
-
-@st.composite
-def collection_name(draw: st.DrawFn) -> str:
- _collection_name_re = re.compile(r"^[a-zA-Z][a-zA-Z0-9-]{1,60}[a-zA-Z0-9]$")
- _ipv4_address_re = re.compile(r"^([0-9]{1,3}\.){3}[0-9]{1,3}$")
- _two_periods_re = re.compile(r"\.\.")
-
- name: str = draw(st.from_regex(_collection_name_re))
- hypothesis.assume(not _ipv4_address_re.match(name))
- hypothesis.assume(not _two_periods_re.search(name))
-
- return name
-
-
-collection_metadata = st.one_of(
- st.none(), st.dictionaries(safe_text, st.one_of(*safe_values))
-)
-
-
-# TODO: Use a hypothesis strategy while maintaining embedding uniqueness
-# Or handle duplicate embeddings within a known epsilon
-def create_embeddings(
- dim: int,
- count: int,
- dtype: npt.DTypeLike,
-) -> types.Embeddings:
- embeddings: types.Embeddings = (
- np.random.uniform(
- low=-1.0,
- high=1.0,
- size=(count, dim),
- )
- .astype(dtype)
- .tolist()
- )
-
- return embeddings
-
-
-class hashing_embedding_function(types.EmbeddingFunction):
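- """Deterministic embedding function for tests: SHA-256-hashes each document, pads or truncates the hex digest to dim characters, and maps each hex character to a float in [0, 1]."""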
- def __init__(self, dim: int, dtype: npt.DTypeLike) -> None:
- self.dim = dim
- self.dtype = dtype
-
- def __call__(self, texts: types.Documents) -> types.Embeddings:
- # Hash the texts and convert to hex strings
- hashed_texts = [
- list(hashlib.sha256(text.encode("utf-8")).hexdigest()) for text in texts
- ]
- # Pad with repetition, or truncate the hex strings to the desired dimension
- padded_texts = [
- text * (self.dim // len(text)) + text[: self.dim % len(text)]
- for text in hashed_texts
- ]
-
- # Convert the hex strings to dtype
- embeddings: types.Embeddings = np.array(
- [[int(char, 16) / 15.0 for char in text] for text in padded_texts],
- dtype=self.dtype,
- ).tolist()
-
- return embeddings
-
-
-class not_implemented_embedding_function(types.EmbeddingFunction):
- def __call__(self, texts: Documents) -> Embeddings:
- assert False, "This embedding function is not implemented"
-
-
-def embedding_function_strategy(
- dim: int, dtype: npt.DTypeLike
-) -> st.SearchStrategy[types.EmbeddingFunction]:
- return st.just(hashing_embedding_function(dim, dtype))
-
-
-@dataclass
-class Collection:
- name: str
- metadata: Optional[types.Metadata]
- dimension: int
- dtype: npt.DTypeLike
- known_metadata_keys: types.Metadata
- known_document_keywords: List[str]
- has_documents: bool = False
- has_embeddings: bool = False
- embedding_function: Optional[types.EmbeddingFunction] = None
-
-
-@st.composite
-def collections(
- draw: st.DrawFn,
- add_filterable_data: bool = False,
- with_hnsw_params: bool = False,
- has_embeddings: Optional[bool] = None,
- has_documents: Optional[bool] = None,
-) -> Collection:
- """Strategy to generate a Collection object. If add_filterable_data is True, then known_metadata_keys and known_document_keywords will be populated with consistent data."""
-
- assert not ((has_embeddings is False) and (has_documents is False))
-
- name = draw(collection_name())
- metadata = draw(collection_metadata)
- dimension = draw(st.integers(min_value=2, max_value=2048))
- dtype = draw(st.sampled_from(float_types))
-
- if with_hnsw_params:
- if metadata is None:
- metadata = {}
- metadata.update(test_hnsw_config)
- # Sometimes, select a space at random
- if draw(st.booleans()):
- # TODO: pull the distance functions from a source of truth that lives not
- # in tests once https://github.com/chroma-core/issues/issues/61 lands
- metadata["hnsw:space"] = draw(st.sampled_from(["cosine", "l2", "ip"]))
-
- known_metadata_keys: Dict[str, Union[int, str, float]] = {}
- if add_filterable_data:
- while len(known_metadata_keys) < 5:
- key = draw(safe_text)
- known_metadata_keys[key] = draw(st.one_of(*safe_values))
-
- if has_documents is None:
- has_documents = draw(st.booleans())
- assert has_documents is not None
- if has_documents and add_filterable_data:
- known_document_keywords = draw(st.lists(safe_text, min_size=5, max_size=5))
- else:
- known_document_keywords = []
-
- if not has_documents:
- has_embeddings = True
- else:
- if has_embeddings is None:
- has_embeddings = draw(st.booleans())
- assert has_embeddings is not None
-
- embedding_function = draw(embedding_function_strategy(dimension, dtype))
-
- return Collection(
- name=name,
- metadata=metadata,
- dimension=dimension,
- dtype=dtype,
- known_metadata_keys=known_metadata_keys,
- has_documents=has_documents,
- known_document_keywords=known_document_keywords,
- has_embeddings=has_embeddings,
- embedding_function=embedding_function,
- )
-
-
-@st.composite
-def metadata(draw: st.DrawFn, collection: Collection) -> types.Metadata:
- """Strategy for generating metadata that could be a part of the given collection"""
- # First draw a random dictionary.
- metadata: types.Metadata = draw(st.dictionaries(safe_text, st.one_of(*safe_values)))
- # Then, remove keys that overlap with the known keys for the collection
- # to avoid type errors when comparing.
- if collection.known_metadata_keys:
- for key in collection.known_metadata_keys.keys():
- if key in metadata:
- del metadata[key]
- # Finally, add in some of the known keys for the collection
- sampling_dict: Dict[str, st.SearchStrategy[Union[str, int, float]]] = {
- k: st.just(v) for k, v in collection.known_metadata_keys.items()
- }
- metadata.update(draw(st.fixed_dictionaries({}, optional=sampling_dict)))
- return metadata
-
-
-@st.composite
-def document(draw: st.DrawFn, collection: Collection) -> types.Document:
- """Strategy for generating documents that could be a part of the given collection"""
-
- if collection.known_document_keywords:
- known_words_st = st.sampled_from(collection.known_document_keywords)
- else:
- known_words_st = st.text(min_size=1)
-
- random_words_st = st.text(min_size=1)
- words = draw(st.lists(st.one_of(known_words_st, random_words_st), min_size=1))
- return " ".join(words)
-
-
-@st.composite
-def recordsets(
- draw: st.DrawFn,
- collection_strategy: SearchStrategy[Collection] = collections(),
- id_strategy: SearchStrategy[str] = safe_text,
- min_size: int = 1,
- max_size: int = 50,
-) -> RecordSet:
- collection = draw(collection_strategy)
-
- ids = list(
- draw(st.lists(id_strategy, min_size=min_size, max_size=max_size, unique=True))
- )
-
- embeddings: Optional[Embeddings] = None
- if collection.has_embeddings:
- embeddings = create_embeddings(collection.dimension, len(ids), collection.dtype)
- metadatas = draw(
- st.lists(metadata(collection), min_size=len(ids), max_size=len(ids))
- )
- documents: Optional[Documents] = None
- if collection.has_documents:
- documents = draw(
- st.lists(document(collection), min_size=len(ids), max_size=len(ids))
- )
-
- # in the case where we have a single record, sometimes exercise
- # the code that handles individual values rather than lists.
- # In this case, any field may be a list or a single value.
- if len(ids) == 1:
- single_id: Union[str, List[str]] = ids[0] if draw(st.booleans()) else ids
- single_embedding = (
- embeddings[0]
- if embeddings is not None and draw(st.booleans())
- else embeddings
- )
- single_metadata: Union[Metadata, List[Metadata]] = (
- metadatas[0] if draw(st.booleans()) else metadatas
- )
- single_document = (
- documents[0] if documents is not None and draw(st.booleans()) else documents
- )
- return {
- "ids": single_id,
- "embeddings": single_embedding,
- "metadatas": single_metadata,
- "documents": single_document,
- }
-
- return {
- "ids": ids,
- "embeddings": embeddings,
- "metadatas": metadatas,
- "documents": documents,
- }
-
-
-# This class is mostly cloned from hypothesis.stateful.RuleStrategy,
-# but always runs all the rules, instead of using a FeatureStrategy to
-# enable/disable rules. Disabled rules cause the entire test to be marked invalid and,
-# combined with the complexity of our other strategies, leads to an
-# unacceptably increased incidence of hypothesis.errors.Unsatisfiable.
-class DeterministicRuleStrategy(SearchStrategy): # type: ignore
- def __init__(self, machine: RuleBasedStateMachine) -> None:
- super().__init__() # type: ignore
- self.machine = machine
- self.rules = list(machine.rules()) # type: ignore
-
- # The order is a bit arbitrary. Primarily we're trying to group rules
- # that write to the same location together, and to put rules with no
- # target first as they have less effect on the structure. We order from
- # fewer to more arguments on grounds that it will plausibly need less
- # data. This probably won't work especially well and we could be
- # smarter about it, but it's better than just doing it in definition
- # order.
- self.rules.sort(
- key=lambda rule: (
- sorted(rule.targets),
- len(rule.arguments),
- rule.function.__name__,
- )
- )
-
- def __repr__(self) -> str:
- return "{}(machine={}({{...}}))".format(
- self.__class__.__name__,
- self.machine.__class__.__name__,
- )
-
- def do_draw(self, data): # type: ignore
- if not any(self.is_valid(rule) for rule in self.rules):
- msg = f"No progress can be made from state {self.machine!r}"
- raise InvalidDefinition(msg) from None
-
- rule = data.draw(st.sampled_from([r for r in self.rules if self.is_valid(r)]))
- argdata = data.draw(rule.arguments_strategy)
- return (rule, argdata)
-
- def is_valid(self, rule) -> bool: # type: ignore
- if not all(precond(self.machine) for precond in rule.preconditions):
- return False
-
- for b in rule.bundles:
- bundle = self.machine.bundle(b.name) # type: ignore
- if not bundle:
- return False
- return True
-
-
-@st.composite
-def where_clause(draw: st.DrawFn, collection: Collection) -> types.Where:
- """Generate a filter that could be used in a query against the given collection"""
-
- known_keys = sorted(collection.known_metadata_keys.keys())
-
- key = draw(st.sampled_from(known_keys))
- value = collection.known_metadata_keys[key]
-
- legal_ops: List[Optional[str]] = [None, "$eq", "$ne"]
- if not isinstance(value, str):
- legal_ops.extend(["$gt", "$lt", "$lte", "$gte"])
- if isinstance(value, float):
- # Add or subtract a small number to avoid floating point rounding errors
- value = value + draw(st.sampled_from([1e-6, -1e-6]))
-
- op: types.WhereOperator = draw(st.sampled_from(legal_ops))
-
- if op is None:
- return {key: value}
- else:
- return {key: {op: value}}
-
-
-@st.composite
-def where_doc_clause(draw: st.DrawFn, collection: Collection) -> types.WhereDocument:
- """Generate a where_document filter that could be used against the given collection"""
- if collection.known_document_keywords:
- word = draw(st.sampled_from(collection.known_document_keywords))
- else:
- word = draw(safe_text)
- return {"$contains": word}
-
-
-def binary_operator_clause(
- base_st: SearchStrategy[types.Where],
-) -> SearchStrategy[types.Where]:
- op: SearchStrategy[types.LogicalOperator] = st.sampled_from(["$and", "$or"])
- return st.dictionaries(
- keys=op,
- values=st.lists(base_st, max_size=2, min_size=2),
- min_size=1,
- max_size=1,
- )
-
-
-def binary_document_operator_clause(
- base_st: SearchStrategy[types.WhereDocument],
-) -> SearchStrategy[types.WhereDocument]:
- op: SearchStrategy[types.LogicalOperator] = st.sampled_from(["$and", "$or"])
- return st.dictionaries(
- keys=op,
- values=st.lists(base_st, max_size=2, min_size=2),
- min_size=1,
- max_size=1,
- )
-
-
-@st.composite
-def recursive_where_clause(draw: st.DrawFn, collection: Collection) -> types.Where:
- base_st = where_clause(collection)
- where: types.Where = draw(st.recursive(base_st, binary_operator_clause))
- return where
-
-
-@st.composite
-def recursive_where_doc_clause(
- draw: st.DrawFn, collection: Collection
-) -> types.WhereDocument:
- base_st = where_doc_clause(collection)
- where: types.WhereDocument = draw(
- st.recursive(base_st, binary_document_operator_clause)
- )
- return where
-
-
-class Filter(TypedDict):
- where: Optional[types.Where]
- ids: Optional[Union[str, List[str]]]
- where_document: Optional[types.WhereDocument]
-
-
-@st.composite
-def filters(
- draw: st.DrawFn,
- collection_st: st.SearchStrategy[Collection],
- recordset_st: st.SearchStrategy[RecordSet],
- include_all_ids: bool = False,
-) -> Filter:
- collection = draw(collection_st)
- recordset = draw(recordset_st)
-
- where_clause = draw(st.one_of(st.none(), recursive_where_clause(collection)))
- where_document_clause = draw(
- st.one_of(st.none(), recursive_where_doc_clause(collection))
- )
-
- ids: Optional[Union[List[types.ID], types.ID]]
- # Record sets can be a value instead of a list of values if there is only one record
- if isinstance(recordset["ids"], str):
- ids = [recordset["ids"]]
- else:
- ids = recordset["ids"]
-
- if not include_all_ids:
- ids = draw(st.one_of(st.none(), st.lists(st.sampled_from(ids))))
- if ids is not None:
- # Remove duplicates since hypothesis samples with replacement
- ids = list(set(ids))
-
- # Test both the single value list and the unwrapped single value case
- if ids is not None and len(ids) == 1 and draw(st.booleans()):
- ids = ids[0]
-
- return {"where": where_clause, "where_document": where_document_clause, "ids": ids}
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py
deleted file mode 100644
index a96d570970177c0ab91447d8411e4ec09a9994cb..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py
+++ /dev/null
@@ -1,294 +0,0 @@
-import array
-import logging
-from typing import Sequence, Collection
-
-from clickhouse_connect.driver.insert import InsertContext
-from clickhouse_connect.driver.query import QueryContext
-from clickhouse_connect.driver.types import ByteSource
-from clickhouse_connect.json_impl import any_to_json
-from clickhouse_connect.datatypes.base import ClickHouseType, TypeDef
-from clickhouse_connect.driver.common import must_swap
-from clickhouse_connect.datatypes.registry import get_from_name
-
-
-logger = logging.getLogger(__name__)
-
-
-class Array(ClickHouseType):
- __slots__ = ('element_type',)
- python_type = list
-
- def __init__(self, type_def: TypeDef):
- super().__init__(type_def)
- self.element_type = get_from_name(type_def.values[0])
- self._name_suffix = f'({self.element_type.name})'
-
- def read_column_prefix(self, source: ByteSource):
- return self.element_type.read_column_prefix(source)
-
- def _data_size(self, sample: Sequence) -> int:
- if len(sample) == 0:
- return 8
- total = 0
- for x in sample:
- total += self.element_type.data_size(x)
- return total // len(sample) + 8
-
- # pylint: disable=too-many-locals
- def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext):
- final_type = self.element_type
- depth = 1
- while isinstance(final_type, Array):
- depth += 1
- final_type = final_type.element_type
- level_size = num_rows
- offset_sizes = []
- for _ in range(depth):
- level_offsets = source.read_array('Q', level_size)
- offset_sizes.append(level_offsets)
- level_size = level_offsets[-1] if level_offsets else 0
- if level_size:
- all_values = final_type.read_column_data(source, level_size, ctx)
- else:
- all_values = []
- column = all_values if isinstance(all_values, list) else list(all_values)
- for offset_range in reversed(offset_sizes):
- data = []
- last = 0
- for x in offset_range:
- data.append(column[last: x])
- last = x
- column = data
- return column
-
- def write_column_prefix(self, dest: bytearray):
- self.element_type.write_column_prefix(dest)
-
- def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext):
- final_type = self.element_type
- depth = 1
- while isinstance(final_type, Array):
- depth += 1
- final_type = final_type.element_type
- for _ in range(depth):
- total = 0
- data = []
- offsets = array.array('Q')
- for x in column:
- total += len(x)
- offsets.append(total)
- data.extend(x)
- if must_swap:
- offsets.byteswap()
- dest += offsets.tobytes()
- column = data
- final_type.write_column_data(column, dest, ctx)
-
-
-class Tuple(ClickHouseType):
- _slots = 'element_names', 'element_types'
- python_type = tuple
- valid_formats = 'tuple', 'json', 'native' # native is 'tuple' for unnamed tuples, and dict for named tuples
-
- def __init__(self, type_def: TypeDef):
- super().__init__(type_def)
- self.element_names = type_def.keys
- self.element_types = [get_from_name(name) for name in type_def.values]
- if self.element_names:
- self._name_suffix = f"({', '.join(k + ' ' + str(v) for k, v in zip(type_def.keys, type_def.values))})"
- else:
- self._name_suffix = type_def.arg_str
-
- def _data_size(self, sample: Collection) -> int:
- if len(sample) == 0:
- return 0
- elem_size = 0
- for ix, e_type in enumerate(self.element_types):
- if e_type.byte_size > 0:
- elem_size += e_type.byte_size
- else:
- elem_size += e_type.data_size([x[ix] for x in sample])
- return elem_size
-
- def read_column_prefix(self, source: ByteSource):
- for e_type in self.element_types:
- e_type.read_column_prefix(source)
-
- def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext):
- columns = []
- e_names = self.element_names
- for e_type in self.element_types:
- column = e_type.read_column_data(source, num_rows, ctx)
- columns.append(column)
- if e_names and self.read_format(ctx) != 'tuple':
- dicts = [{} for _ in range(num_rows)]
- for ix, x in enumerate(dicts):
- for y, key in enumerate(e_names):
- x[key] = columns[y][ix]
- if self.read_format(ctx) == 'json':
- to_json = any_to_json
- return [to_json(x) for x in dicts]
- return dicts
- return tuple(zip(*columns))
-
- def write_column_prefix(self, dest: bytearray):
- for e_type in self.element_types:
- e_type.write_column_prefix(dest)
-
- def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext):
- columns = list(zip(*column))
- for e_type, elem_column in zip(self.element_types, columns):
- e_type.write_column_data(elem_column, dest, ctx)
-
-
-class Map(ClickHouseType):
- _slots = 'key_type', 'value_type'
- python_type = dict
-
- def __init__(self, type_def: TypeDef):
- super().__init__(type_def)
- self.key_type = get_from_name(type_def.values[0])
- self.value_type = get_from_name(type_def.values[1])
- self._name_suffix = type_def.arg_str
-
- def _data_size(self, sample: Collection) -> int:
- total = 0
- if len(sample) == 0:
- return 0
- for x in sample:
- total += self.key_type.data_size(x.keys())
- total += self.value_type.data_size(x.values())
- return total // len(sample)
-
- def read_column_prefix(self, source: ByteSource):
- self.key_type.read_column_prefix(source)
- self.value_type.read_column_prefix(source)
-
- # pylint: disable=too-many-locals
- def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext):
- offsets = source.read_array('Q', num_rows)
- total_rows = offsets[-1]
- keys = self.key_type.read_column_data(source, total_rows, ctx)
- values = self.value_type.read_column_data(source, total_rows, ctx)
- all_pairs = tuple(zip(keys, values))
- column = []
- app = column.append
- last = 0
- for offset in offsets:
- app(dict(all_pairs[last: offset]))
- last = offset
- return column
-
- def write_column_prefix(self, dest: bytearray):
- self.key_type.write_column_prefix(dest)
- self.value_type.write_column_prefix(dest)
-
- def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext):
- offsets = array.array('Q')
- keys = []
- values = []
- total = 0
- for v in column:
- total += len(v)
- offsets.append(total)
- keys.extend(v.keys())
- values.extend(v.values())
- if must_swap:
- offsets.byteswap()
- dest += offsets.tobytes()
- self.key_type.write_column_data(keys, dest, ctx)
- self.value_type.write_column_data(values, dest, ctx)
-
-
-class Nested(ClickHouseType):
- __slots__ = 'tuple_array', 'element_names', 'element_types'
- python_type = Sequence[dict]
-
- def __init__(self, type_def):
- super().__init__(type_def)
- self.element_names = type_def.keys
- self.tuple_array = get_from_name(f"Array(Tuple({','.join(type_def.values)}))")
- self.element_types = self.tuple_array.element_type.element_types
- cols = [f'{x[0]} {x[1].name}' for x in zip(type_def.keys, self.element_types)]
- self._name_suffix = f"({', '.join(cols)})"
-
- def _data_size(self, sample: Collection) -> int:
- keys = self.element_names
- array_sample = [[tuple(sub_row[key] for key in keys) for sub_row in row] for row in sample]
- return self.tuple_array.data_size(array_sample)
-
- def read_column_prefix(self, source: ByteSource):
- self.tuple_array.read_column_prefix(source)
-
- def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext):
- keys = self.element_names
- data = self.tuple_array.read_column_data(source, num_rows, ctx)
- return [[dict(zip(keys, x)) for x in row] for row in data]
-
- def write_column_prefix(self, dest: bytearray):
- self.tuple_array.write_column_prefix(dest)
-
- def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext):
- keys = self.element_names
- data = [[tuple(sub_row[key] for key in keys) for sub_row in row] for row in column]
- self.tuple_array.write_column_data(data, dest, ctx)
-
-
-class JSON(ClickHouseType):
- python_type = dict
- # Native is a Python type (primitive, dict, array), string is an actual JSON string
- valid_formats = 'string', 'native'
-
- def write_column_prefix(self, dest: bytearray):
- dest.append(0x01)
-
- def _data_size(self, sample: Collection) -> int:
- if len(sample) == 0:
- return 0
- total = 0
- for x in sample:
- if isinstance(x, str):
- total += len(x)
- elif x:
- total += len(any_to_json(x))
- return total // len(sample) + 1
-
- # pylint: disable=duplicate-code
- def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext):
- app = dest.append
- first = self._first_value(column)
- if isinstance(first, str) or self.write_format(ctx) == 'string':
- for x in column:
- v = x.encode()
- sz = len(v)
- while True:
- b = sz & 0x7f
- sz >>= 7
- if sz == 0:
- app(b)
- break
- app(0x80 | b)
- dest += v
- else:
- to_json = any_to_json
- for x in column:
- v = to_json(x)
- sz = len(v)
- while True:
- b = sz & 0x7f
- sz >>= 7
- if sz == 0:
- app(b)
- break
- app(0x80 | b)
- dest += v
-
-
-class Object(JSON):
- python_type = dict
-
- def __init__(self, type_def):
- if type_def.values[0].lower() != "'json'":
- raise NotImplementedError('Only json Object type is currently supported')
- super().__init__(type_def)
- self._name_suffix = type_def.arg_str
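
For readers new to the ClickHouse native wire format handled by the Array type above: each nesting level is transmitted as an array of cumulative UInt64 offsets followed by the flattened element data. A small stand-alone sketch of that slicing step, with sample data invented for illustration:

from typing import List, Sequence

def split_by_offsets(values: Sequence, offsets: Sequence[int]) -> List[list]:
    # Each offset is the cumulative element count consumed so far, which is
    # exactly how Array.read_column_data slices its flattened column.
    rows, last = [], 0
    for offset in offsets:
        rows.append(list(values[last:offset]))
        last = offset
    return rows

# An Array(Int32) column with three rows: [1, 2], [], [3]
flat_values = [1, 2, 3]
row_offsets = [2, 2, 3]  # cumulative row lengths
print(split_by_offsets(flat_values, row_offsets))  # [[1, 2], [], [3]]
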
diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py
deleted file mode 100644
index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Streaming module API that should be implemented by all Streaming components,
-"""
-
-from contextlib import contextmanager
-import typing as tp
-from torch import nn
-import torch
-
-
-State = tp.Dict[str, torch.Tensor]
-
-
-class StreamingModule(nn.Module):
- """Common API for streaming components.
-
- Each streaming component has a streaming state, which is just a dict[str, Tensor].
- By convention, the first dim of each tensor must be the batch size.
- Don't use dots in the key names, as this would clash with submodules
- (like in state_dict).
-
- If `self._is_streaming` is True, the component should use and remember
- the proper state inside `self._streaming_state`.
-
- To set a streaming component in streaming state, use
-
- with module.streaming():
- ...
-
- This will automatically reset the streaming state when exiting the context manager.
- This also automatically propagates to all streaming children module.
-
-    Some modules might also implement the `StreamingModule.flush` method, although
-    this one is trickier, as all parent modules must be StreamingModule and implement
-    it as well for it to work properly. See `StreamingSequential` below.
- """
- def __init__(self) -> None:
- super().__init__()
- self._streaming_state: State = {}
- self._is_streaming = False
-
- def _apply_named_streaming(self, fn: tp.Any):
- for name, module in self.named_modules():
- if isinstance(module, StreamingModule):
- fn(name, module)
-
- def _set_streaming(self, streaming: bool):
- def _set_streaming(name, module):
- module._is_streaming = streaming
- self._apply_named_streaming(_set_streaming)
-
- @contextmanager
- def streaming(self):
- """Context manager to enter streaming mode. Reset streaming state on exit.
- """
- self._set_streaming(True)
- try:
- yield
- finally:
- self._set_streaming(False)
- self.reset_streaming()
-
- def reset_streaming(self):
- """Reset the streaming state.
- """
- def _reset(name: str, module: StreamingModule):
- module._streaming_state.clear()
-
- self._apply_named_streaming(_reset)
-
- def get_streaming_state(self) -> State:
- """Return the streaming state, including that of sub-modules.
- """
- state: State = {}
-
- def _add(name: str, module: StreamingModule):
- if name:
- name += "."
- for key, value in module._streaming_state.items():
- state[name + key] = value
-
- self._apply_named_streaming(_add)
- return state
-
- def set_streaming_state(self, state: State):
- """Set the streaming state, including that of sub-modules.
- """
- state = dict(state)
-
- def _set(name: str, module: StreamingModule):
- if name:
- name += "."
- module._streaming_state.clear()
- for key, value in list(state.items()):
- # complexity is not ideal here, but probably fine.
- if key.startswith(name):
- local_key = key[len(name):]
- if '.' not in local_key:
- module._streaming_state[local_key] = value
- del state[key]
-
- self._apply_named_streaming(_set)
- assert len(state) == 0, list(state.keys())
-
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- """Flush any remaining outputs that were waiting for completion.
- Typically, for convolutions, this will add the final padding
- and process the last buffer.
-
- This should take an optional argument `x`, which will be provided
- if a module before this one in the streaming pipeline has already
-        spit out a flushed buffer.
- """
- if x is None:
- return None
- else:
- return self(x)
-
-
-class StreamingSequential(StreamingModule, nn.Sequential):
- """A streaming compatible alternative of `nn.Sequential`.
- """
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- for module in self:
- if isinstance(module, StreamingModule):
- x = module.flush(x)
- elif x is not None:
- x = module(x)
- return x
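
A short usage sketch for the streaming API documented above. RunningSum is an invented toy module; the import assumes the StreamingModule class from this file, which ships as audiocraft.modules.streaming, is available.

import torch
from audiocraft.modules.streaming import StreamingModule

class RunningSum(StreamingModule):
    """Toy module that accumulates its input while in streaming mode."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self._is_streaming:
            total = self._streaming_state.get("sum", torch.zeros_like(x)) + x
            self._streaming_state["sum"] = total  # first dim stays the batch size
            return total
        return x

module = RunningSum()
with module.streaming():                   # state is reset automatically on exit
    print(module(torch.ones(2, 4)).sum())  # tensor(8.)
    print(module(torch.ones(2, 4)).sum())  # tensor(16.), state carried across calls
print(module(torch.ones(2, 4)).sum())      # tensor(8.), stateless outside the context
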
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py
deleted file mode 100644
index f8f8eb11b95838d2b61de5fa249a318877182c01..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/meta_arch/mask_former_head.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import logging
-from copy import deepcopy
-from typing import Callable, Dict, List, Optional, Tuple, Union
-
-import fvcore.nn.weight_init as weight_init
-from torch import nn
-from torch.nn import functional as F
-
-from annotator.oneformer.detectron2.config import configurable
-from annotator.oneformer.detectron2.layers import Conv2d, ShapeSpec, get_norm
-from annotator.oneformer.detectron2.modeling import SEM_SEG_HEADS_REGISTRY
-from ..pixel_decoder.fpn import build_pixel_decoder
-from ..transformer_decoder.oneformer_transformer_decoder import build_transformer_decoder
-
-@SEM_SEG_HEADS_REGISTRY.register()
-class OneFormerHead(nn.Module):
-
- _version = 2
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- version = local_metadata.get("version", None)
- if version is None or version < 2:
- # Do not warn if train from scratch
- scratch = True
- logger = logging.getLogger(__name__)
- for k in list(state_dict.keys()):
- newk = k
- if "sem_seg_head" in k and not k.startswith(prefix + "predictor"):
- newk = k.replace(prefix, prefix + "pixel_decoder.")
- # logger.debug(f"{k} ==> {newk}")
- if newk != k:
- state_dict[newk] = state_dict[k]
- del state_dict[k]
- scratch = False
-
- if not scratch:
- logger.warning(
- f"Weight format of {self.__class__.__name__} have changed! "
- "Please upgrade your models. Applying automatic conversion now ..."
- )
-
- @configurable
- def __init__(
- self,
- input_shape: Dict[str, ShapeSpec],
- *,
- num_classes: int,
- pixel_decoder: nn.Module,
- loss_weight: float = 1.0,
- ignore_value: int = -1,
- # extra parameters
- transformer_predictor: nn.Module,
- transformer_in_feature: str,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- input_shape: shapes (channels and stride) of the input features
- num_classes: number of classes to predict
- pixel_decoder: the pixel decoder module
- loss_weight: loss weight
- ignore_value: category id to be ignored during training.
- transformer_predictor: the transformer decoder that makes prediction
- transformer_in_feature: input feature name to the transformer_predictor
- """
- super().__init__()
- input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride)
- self.in_features = [k for k, v in input_shape]
- feature_strides = [v.stride for k, v in input_shape]
- feature_channels = [v.channels for k, v in input_shape]
-
- self.ignore_value = ignore_value
- self.common_stride = 4
- self.loss_weight = loss_weight
-
- self.pixel_decoder = pixel_decoder
- self.predictor = transformer_predictor
- self.transformer_in_feature = transformer_in_feature
-
- self.num_classes = num_classes
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- # figure out in_channels to transformer predictor
- if cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "transformer_encoder":
- transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
- elif cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "pixel_embedding":
- transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
- elif cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "multi_scale_pixel_decoder":
- transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
- else:
- transformer_predictor_in_channels = input_shape[cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE].channels
-
- return {
- "input_shape": {
- k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES
- },
- "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
- "pixel_decoder": build_pixel_decoder(cfg, input_shape),
- "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT,
- "transformer_in_feature": cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE,
- "transformer_predictor": build_transformer_decoder(
- cfg,
- transformer_predictor_in_channels,
- mask_classification=True,
- ),
- }
-
- def forward(self, features, tasks, mask=None):
- return self.layers(features, tasks, mask)
-
- def layers(self, features, tasks, mask=None):
- mask_features, transformer_encoder_features, multi_scale_features, _, _ = self.pixel_decoder.forward_features(features)
-
- if self.transformer_in_feature == "multi_scale_pixel_decoder":
- predictions = self.predictor(multi_scale_features, mask_features, tasks, mask)
- else:
- if self.transformer_in_feature == "transformer_encoder":
- assert (
- transformer_encoder_features is not None
- ), "Please use the TransformerEncoderPixelDecoder."
- predictions = self.predictor(transformer_encoder_features, mask_features, mask)
- elif self.transformer_in_feature == "pixel_embedding":
- predictions = self.predictor(mask_features, mask_features, mask)
- else:
- predictions = self.predictor(features[self.transformer_in_feature], mask_features, mask)
- return predictions
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
deleted file mode 100644
index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-from .decode_head import BaseDecodeHead
-
-
-class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta):
- """Base class for cascade decode head used in
- :class:`CascadeEncoderDecoder."""
-
- def __init__(self, *args, **kwargs):
- super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs)
-
- @abstractmethod
- def forward(self, inputs, prev_output):
- """Placeholder of forward function."""
- pass
-
- def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg,
- train_cfg):
- """Forward function for training.
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
- train_cfg (dict): The training config.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- seg_logits = self.forward(inputs, prev_output)
- losses = self.losses(seg_logits, gt_semantic_seg)
-
- return losses
-
- def forward_test(self, inputs, prev_output, img_metas, test_cfg):
- """Forward function for testing.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- test_cfg (dict): The testing config.
-
- Returns:
- Tensor: Output segmentation map.
- """
- return self.forward(inputs, prev_output)
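
As a rough, framework-free illustration of the cascade contract documented above, where each head refines the previous head's output using the shared features, here is a toy sketch; the heads and numbers are invented and do not come from mmseg.

from abc import ABC, abstractmethod
from typing import List, Optional

class ToyCascadeHead(ABC):
    @abstractmethod
    def forward(self, inputs: List[float], prev_output: Optional[float]) -> float:
        ...

class SumHead(ToyCascadeHead):
    def forward(self, inputs: List[float], prev_output: Optional[float]) -> float:
        # Refine the previous prediction using the shared features.
        return sum(inputs) + (prev_output or 0.0)

features = [1.0, 2.0, 3.0]
prediction: Optional[float] = None
for head in (SumHead(), SumHead()):  # loosely how CascadeEncoderDecoder chains heads
    prediction = head.forward(features, prediction)
print(prediction)  # 12.0
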
diff --git a/spaces/TEnngal/bingo/src/pages/api/blob.ts b/spaces/TEnngal/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py
deleted file mode 100644
index 30a528e668f8e8bcbde9b466c95a2a34bffbef8f..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py
+++ /dev/null
@@ -1,170 +0,0 @@
-"""
- pygments.formatters.groff
- ~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for groff output.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import math
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-__all__ = ['GroffFormatter']
-
-
-class GroffFormatter(Formatter):
- """
- Format tokens with groff escapes to change their color and font style.
-
- .. versionadded:: 2.11
-
- Additional options accepted:
-
- `style`
- The style to use, can be a string or a Style subclass (default:
- ``'default'``).
-
- `monospaced`
- If set to true, monospace font will be used (default: ``true``).
-
- `linenos`
- If set to true, print the line numbers (default: ``false``).
-
- `wrap`
- Wrap lines to the specified number of characters. Disabled if set to 0
- (default: ``0``).
- """
-
- name = 'groff'
- aliases = ['groff','troff','roff']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
-
- self.monospaced = get_bool_opt(options, 'monospaced', True)
- self.linenos = get_bool_opt(options, 'linenos', False)
- self._lineno = 0
- self.wrap = get_int_opt(options, 'wrap', 0)
- self._linelen = 0
-
- self.styles = {}
- self._make_styles()
-
-
- def _make_styles(self):
- regular = '\\f[CR]' if self.monospaced else '\\f[R]'
- bold = '\\f[CB]' if self.monospaced else '\\f[B]'
- italic = '\\f[CI]' if self.monospaced else '\\f[I]'
-
- for ttype, ndef in self.style:
- start = end = ''
- if ndef['color']:
- start += '\\m[%s]' % ndef['color']
- end = '\\m[]' + end
- if ndef['bold']:
- start += bold
- end = regular + end
- if ndef['italic']:
- start += italic
- end = regular + end
- if ndef['bgcolor']:
- start += '\\M[%s]' % ndef['bgcolor']
- end = '\\M[]' + end
-
- self.styles[ttype] = start, end
-
-
- def _define_colors(self, outfile):
- colors = set()
- for _, ndef in self.style:
- if ndef['color'] is not None:
- colors.add(ndef['color'])
-
- for color in sorted(colors):
- outfile.write('.defcolor ' + color + ' rgb #' + color + '\n')
-
-
- def _write_lineno(self, outfile):
- self._lineno += 1
- outfile.write("%s% 4d " % (self._lineno != 1 and '\n' or '', self._lineno))
-
-
- def _wrap_line(self, line):
- length = len(line.rstrip('\n'))
- space = ' ' if self.linenos else ''
- newline = ''
-
- if length > self.wrap:
- for i in range(0, math.floor(length / self.wrap)):
- chunk = line[i*self.wrap:i*self.wrap+self.wrap]
- newline += (chunk + '\n' + space)
- remainder = length % self.wrap
- if remainder > 0:
- newline += line[-remainder-1:]
- self._linelen = remainder
- elif self._linelen + length > self.wrap:
- newline = ('\n' + space) + line
- self._linelen = length
- else:
- newline = line
- self._linelen += length
-
- return newline
-
-
- def _escape_chars(self, text):
- text = text.replace('\\', '\\[u005C]'). \
- replace('.', '\\[char46]'). \
- replace('\'', '\\[u0027]'). \
- replace('`', '\\[u0060]'). \
- replace('~', '\\[u007E]')
- copy = text
-
- for char in copy:
- if len(char) != len(char.encode()):
- uni = char.encode('unicode_escape') \
- .decode()[1:] \
- .replace('x', 'u00') \
- .upper()
- text = text.replace(char, '\\[u' + uni[1:] + ']')
-
- return text
-
-
- def format_unencoded(self, tokensource, outfile):
- self._define_colors(outfile)
-
- outfile.write('.nf\n\\f[CR]\n')
-
- if self.linenos:
- self._write_lineno(outfile)
-
- for ttype, value in tokensource:
- while ttype not in self.styles:
- ttype = ttype.parent
- start, end = self.styles[ttype]
-
- for line in value.splitlines(True):
- if self.wrap > 0:
- line = self._wrap_line(line)
-
- if start and end:
- text = self._escape_chars(line.rstrip('\n'))
- if text != '':
- outfile.write(''.join((start, text, end)))
- else:
- outfile.write(self._escape_chars(line.rstrip('\n')))
-
- if line.endswith('\n'):
- if self.linenos:
- self._write_lineno(outfile)
- self._linelen = 0
- else:
- outfile.write('\n')
- self._linelen = 0
-
- outfile.write('\n.fi')
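
A minimal usage sketch for the formatter above, run through the standard Pygments pipeline. It is written against the public pygments package; the file deleted here is the same module vendored under pip._vendor.

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters.groff import GroffFormatter

source = "print('hello, groff')\n"
# linenos and wrap are the options documented in the class docstring above.
groff_output = highlight(source, PythonLexer(), GroffFormatter(linenos=True, wrap=72))
print(groff_output)  # begins with ".nf" and ends with ".fi", ready for groff/troff
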
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py
deleted file mode 100644
index 66365e6536080bd9372d2a7a58b8ffa3447fec34..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import re
-import sys
-from contextlib import suppress
-from typing import Iterable, NamedTuple, Optional
-
-from .color import Color
-from .style import Style
-from .text import Text
-
-re_ansi = re.compile(
- r"""
-(?:\x1b\](.*?)\x1b\\)|
-(?:\x1b([(@-Z\\-_]|\[[0-?]*[ -/]*[@-~]))
-""",
- re.VERBOSE,
-)
-
-
-class _AnsiToken(NamedTuple):
- """Result of ansi tokenized string."""
-
- plain: str = ""
- sgr: Optional[str] = ""
- osc: Optional[str] = ""
-
-
-def _ansi_tokenize(ansi_text: str) -> Iterable[_AnsiToken]:
- """Tokenize a string in to plain text and ANSI codes.
-
- Args:
- ansi_text (str): A String containing ANSI codes.
-
- Yields:
- AnsiToken: A named tuple of (plain, sgr, osc)
- """
-
- position = 0
- sgr: Optional[str]
- osc: Optional[str]
- for match in re_ansi.finditer(ansi_text):
- start, end = match.span(0)
- osc, sgr = match.groups()
- if start > position:
- yield _AnsiToken(ansi_text[position:start])
- if sgr:
- if sgr == "(":
- position = end + 1
- continue
- if sgr.endswith("m"):
- yield _AnsiToken("", sgr[1:-1], osc)
- else:
- yield _AnsiToken("", sgr, osc)
- position = end
- if position < len(ansi_text):
- yield _AnsiToken(ansi_text[position:])
-
-
-SGR_STYLE_MAP = {
- 1: "bold",
- 2: "dim",
- 3: "italic",
- 4: "underline",
- 5: "blink",
- 6: "blink2",
- 7: "reverse",
- 8: "conceal",
- 9: "strike",
- 21: "underline2",
- 22: "not dim not bold",
- 23: "not italic",
- 24: "not underline",
- 25: "not blink",
- 26: "not blink2",
- 27: "not reverse",
- 28: "not conceal",
- 29: "not strike",
- 30: "color(0)",
- 31: "color(1)",
- 32: "color(2)",
- 33: "color(3)",
- 34: "color(4)",
- 35: "color(5)",
- 36: "color(6)",
- 37: "color(7)",
- 39: "default",
- 40: "on color(0)",
- 41: "on color(1)",
- 42: "on color(2)",
- 43: "on color(3)",
- 44: "on color(4)",
- 45: "on color(5)",
- 46: "on color(6)",
- 47: "on color(7)",
- 49: "on default",
- 51: "frame",
- 52: "encircle",
- 53: "overline",
- 54: "not frame not encircle",
- 55: "not overline",
- 90: "color(8)",
- 91: "color(9)",
- 92: "color(10)",
- 93: "color(11)",
- 94: "color(12)",
- 95: "color(13)",
- 96: "color(14)",
- 97: "color(15)",
- 100: "on color(8)",
- 101: "on color(9)",
- 102: "on color(10)",
- 103: "on color(11)",
- 104: "on color(12)",
- 105: "on color(13)",
- 106: "on color(14)",
- 107: "on color(15)",
-}
-
-
-class AnsiDecoder:
- """Translate ANSI code in to styled Text."""
-
- def __init__(self) -> None:
- self.style = Style.null()
-
- def decode(self, terminal_text: str) -> Iterable[Text]:
- """Decode ANSI codes in an iterable of lines.
-
- Args:
- lines (Iterable[str]): An iterable of lines of terminal output.
-
- Yields:
- Text: Marked up Text.
- """
- for line in terminal_text.splitlines():
- yield self.decode_line(line)
-
- def decode_line(self, line: str) -> Text:
- """Decode a line containing ansi codes.
-
- Args:
- line (str): A line of terminal output.
-
- Returns:
- Text: A Text instance marked up according to ansi codes.
- """
- from_ansi = Color.from_ansi
- from_rgb = Color.from_rgb
- _Style = Style
- text = Text()
- append = text.append
- line = line.rsplit("\r", 1)[-1]
- for plain_text, sgr, osc in _ansi_tokenize(line):
- if plain_text:
- append(plain_text, self.style or None)
- elif osc is not None:
- if osc.startswith("8;"):
- _params, semicolon, link = osc[2:].partition(";")
- if semicolon:
- self.style = self.style.update_link(link or None)
- elif sgr is not None:
- # Translate in to semi-colon separated codes
- # Ignore invalid codes, because we want to be lenient
- codes = [
- min(255, int(_code) if _code else 0)
- for _code in sgr.split(";")
- if _code.isdigit() or _code == ""
- ]
- iter_codes = iter(codes)
- for code in iter_codes:
- if code == 0:
- # reset
- self.style = _Style.null()
- elif code in SGR_STYLE_MAP:
- # styles
- self.style += _Style.parse(SGR_STYLE_MAP[code])
- elif code == 38:
- # Foreground
- with suppress(StopIteration):
- color_type = next(iter_codes)
- if color_type == 5:
- self.style += _Style.from_color(
- from_ansi(next(iter_codes))
- )
- elif color_type == 2:
- self.style += _Style.from_color(
- from_rgb(
- next(iter_codes),
- next(iter_codes),
- next(iter_codes),
- )
- )
- elif code == 48:
- # Background
- with suppress(StopIteration):
- color_type = next(iter_codes)
- if color_type == 5:
- self.style += _Style.from_color(
- None, from_ansi(next(iter_codes))
- )
- elif color_type == 2:
- self.style += _Style.from_color(
- None,
- from_rgb(
- next(iter_codes),
- next(iter_codes),
- next(iter_codes),
- ),
- )
-
- return text
-
-
-if sys.platform != "win32" and __name__ == "__main__": # pragma: no cover
- import io
- import os
- import pty
- import sys
-
- decoder = AnsiDecoder()
-
- stdout = io.BytesIO()
-
- def read(fd: int) -> bytes:
- data = os.read(fd, 1024)
- stdout.write(data)
- return data
-
- pty.spawn(sys.argv[1:], read)
-
- from .console import Console
-
- console = Console(record=True)
-
- stdout_result = stdout.getvalue().decode("utf-8")
- print(stdout_result)
-
- for line in decoder.decode(stdout_result):
- console.print(line)
-
- console.save_html("stdout.html")
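
A small usage sketch for AnsiDecoder, written against the public rich package rather than pip's vendored copy of the same module; the captured ANSI string is invented.

from rich.ansi import AnsiDecoder
from rich.console import Console

decoder = AnsiDecoder()
console = Console()

captured = "\x1b[1;31mbold red\x1b[0m normal\n\x1b[38;5;42mpalette green\x1b[0m\n"
for text in decoder.decode(captured):  # yields one styled Text object per line
    console.print(text)
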
diff --git a/spaces/TechnoByte/soft-improved/theme_dropdown.py b/spaces/TechnoByte/soft-improved/theme_dropdown.py
deleted file mode 100644
index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000
--- a/spaces/TechnoByte/soft-improved/theme_dropdown.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import pathlib
-
-from gradio.themes.utils import ThemeAsset
-
-
-def create_theme_dropdown():
- import gradio as gr
-
- asset_path = pathlib.Path(__file__).parent / "themes"
- themes = []
- for theme_asset in os.listdir(str(asset_path)):
- themes.append(
- (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset)))
- )
-
- def make_else_if(theme_asset):
- return f"""
- else if (theme == '{str(theme_asset[0].version)}') {{
- var theme_css = `{theme_asset[1]._get_theme_css()}`
- }}"""
-
- head, tail = themes[0], themes[1:]
- if_statement = f"""
- if (theme == "{str(head[0].version)}") {{
- var theme_css = `{head[1]._get_theme_css()}`
- }} {" ".join(make_else_if(t) for t in tail)}
- """
-
- latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[
- ::-1
- ]
- latest_to_oldest = [str(t.version) for t in latest_to_oldest]
-
- component = gr.Dropdown(
- choices=latest_to_oldest,
- value=latest_to_oldest[0],
- render=False,
- label="Select Version",
- ).style(container=False)
-
- return (
- component,
- f"""
- (theme) => {{
- if (!document.querySelector('.theme-css')) {{
- var theme_elem = document.createElement('style');
- theme_elem.classList.add('theme-css');
- document.head.appendChild(theme_elem);
- }} else {{
- var theme_elem = document.querySelector('.theme-css');
- }}
- {if_statement}
- theme_elem.innerHTML = theme_css;
- }}
- """,
- )
diff --git a/spaces/TencentARC/Caption-Anything/README.md b/spaces/TencentARC/Caption-Anything/README.md
deleted file mode 100644
index 5cf7a4f5679a7d7037957243442d7aba615993f9..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/Caption-Anything/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Caption Anything
-emoji: 📚
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.26.0
-python_version: 3.8.9
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py b/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py
deleted file mode 100644
index 401574fd656a308a56018a1f9fc3c4ad366cb5e0..0000000000000000000000000000000000000000
--- a/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import gradio as gr
-from gradio.inputs import Textbox, Slider
-
-import requests
-
-# Template
-title = "A conversation with some NPC in a Tavern 🍻"
-description = ""
-article = """
-<p>If you liked don't forget to 💖 the project 🥰</p>
-<p>Parameters:</p>
-<ul>
-<li>message: what you want to say to the NPC.</li>
-<li>npc_name: name of the NPC.</li>
-<li>npc_prompt: prompt of the NPC, we can modify it to see if results are better.</li>
-<li>top_p: control how deterministic the model is in generating a response.</li>
-<li>temperature: (sampling temperature) higher values means the model will take more risks.</li>
-<li>max_new_tokens: Max number of tokens in generation.</li>
-</ul>
-"""
-theme="huggingface"
-
-
-# Builds the prompt from what previously happened
-def build_prompt(conversation, context, interlocutor_names):
- prompt = context + "\n"
- for player_msg, npc_msg in conversation:
- line = "\n- " + interlocutor_names[0] + ":" + player_msg
- prompt += line
- line = "\n- " + interlocutor_names[1] + ":" + npc_msg
- prompt += line
- prompt += ""
- return prompt
-
-# Recognize what the model said, if it used the correct format
-def clean_chat_output(txt, prompt, interlocutor_names):
- delimiter = "\n- "+interlocutor_names[0]
- output = txt.replace(prompt, '')
- output = output[:output.find(delimiter)]
- return output
-
-# GPT-J-6B API
-API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
-def query(payload):
- response = requests.post(API_URL, json=payload)
- return response.json()
-
-def chat(message, npc_name, initial_prompt, top_p, temperature, max_new_tokens, history=[]):
- interlocutor_names = ["Player", npc_name]
-
- print("message", message)
- print("npc_name", npc_name)
- print("initial_prompt", initial_prompt)
- print("top_p", top_p)
- print("temperature", temperature)
- print("max_new_tokens", max_new_tokens)
- print("history", history)
- response = "Test"
- history.append((message, ""))
- conversation = history
-
- # Build the prompt
- prompt = build_prompt(conversation, initial_prompt, interlocutor_names)
-
- # Build JSON
- json_req = {"inputs": prompt,
- "parameters":
- {
- "top_p": top_p,
- "temperature": temperature,
- "max_new_tokens": max_new_tokens,
- "return_full_text": False
- }}
-
- # Get the output
- output = query(json_req)
- output = output[0]['generated_text']
- print("output", output)
-
- answer = clean_chat_output(output, prompt, interlocutor_names)
- response = answer
- print("response", answer)
- history[-1] = (message, response)
- return history, history
-
-
-#io = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-
-iface = gr.Interface(fn=chat,
-inputs=[Textbox(label="message", placeholder="Hello!"),
- Textbox(label="npc_name", placeholder="Antoine"),
- Textbox(label="initial_prompt", placeholder="The following is a conversation with Antoine, a guard for Northfall that's drinking in the Tavern."),
- Slider(minimum=0.5, maximum=1, step=0.05, default=0.9, label="top_p"),
- Slider(minimum=0.5, maximum=1.5, step=0.1, default=1.1, label="temperature"),
- Slider(minimum=20, maximum=250, step=10, default=50, label="max_new_tokens"),
- "state"],
- outputs=["chatbot","state"],
- #examples = [["Hello!", "", , 0.9, 1.1, 50, iface.state]],
- allow_screenshot=True,
- allow_flagging=True,
- title=title,
- article=article,
- theme=theme)
-
-if __name__ == "__main__":
- iface.launch()
\ No newline at end of file
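
For reference, a stand-alone copy of the prompt-building logic above, together with a call showing the text layout the app would send to the GPT-J endpoint; the names and messages are invented.

def build_prompt(conversation, context, interlocutor_names):
    # Same layout as the deleted app.py: context first, then alternating lines.
    prompt = context + "\n"
    for player_msg, npc_msg in conversation:
        prompt += "\n- " + interlocutor_names[0] + ":" + player_msg
        prompt += "\n- " + interlocutor_names[1] + ":" + npc_msg
    return prompt

history = [("Hello!", "Well met, traveler."), ("Any news from the north?", "")]
print(build_prompt(
    history,
    "The following is a conversation with Antoine, a guard for Northfall that's drinking in the Tavern.",
    ["Player", "Antoine"],
))
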
diff --git a/spaces/TushDeMort/yolo/utils/torch_utils.py b/spaces/TushDeMort/yolo/utils/torch_utils.py
deleted file mode 100644
index 1e631b555508457a4944c11a479176463719c0e8..0000000000000000000000000000000000000000
--- a/spaces/TushDeMort/yolo/utils/torch_utils.py
+++ /dev/null
@@ -1,374 +0,0 @@
-# YOLOR PyTorch utils
-
-import datetime
-import logging
-import math
-import os
-import platform
-import subprocess
-import time
-from contextlib import contextmanager
-from copy import deepcopy
-from pathlib import Path
-
-import torch
-import torch.backends.cudnn as cudnn
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-try:
- import thop # for FLOPS computation
-except ImportError:
- thop = None
-logger = logging.getLogger(__name__)
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- """
- Decorator to make all processes in distributed training wait for each local_master to do something.
- """
- if local_rank not in [-1, 0]:
- torch.distributed.barrier()
- yield
- if local_rank == 0:
- torch.distributed.barrier()
-
-
-def init_torch_seeds(seed=0):
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
- torch.manual_seed(seed)
- if seed == 0: # slower, more reproducible
- cudnn.benchmark, cudnn.deterministic = False, True
- else: # faster, less reproducible
- cudnn.benchmark, cudnn.deterministic = True, False
-
-
-def date_modified(path=__file__):
- # return human-readable file modification date, i.e. '2021-3-26'
- t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
- return f'{t.year}-{t.month}-{t.day}'
-
-
-def git_describe(path=Path(__file__).parent): # path must be a directory
- # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
- s = f'git -C {path} describe --tags --long --always'
- try:
- return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
- except subprocess.CalledProcessError as e:
- return '' # not a git repository
-
-
-def select_device(device='', batch_size=None):
- # device = 'cpu' or '0' or '0,1,2,3'
- s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
- cpu = device.lower() == 'cpu'
- if cpu:
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
- elif device: # non-cpu device requested
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
- assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
-
- cuda = not cpu and torch.cuda.is_available()
- if cuda:
- n = torch.cuda.device_count()
- if n > 1 and batch_size: # check that batch_size is compatible with device_count
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
- space = ' ' * len(s)
- for i, d in enumerate(device.split(',') if device else range(n)):
- p = torch.cuda.get_device_properties(i)
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
- else:
- s += 'CPU\n'
-
- logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
- return torch.device('cuda:0' if cuda else 'cpu')
-
-
-def time_synchronized():
- # pytorch-accurate time
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
-
-
-def profile(x, ops, n=100, device=None):
- # profile a pytorch module or list of modules. Example usage:
- # x = torch.randn(16, 3, 640, 640) # input
- # m1 = lambda x: x * torch.sigmoid(x)
- # m2 = nn.SiLU()
- # profile(x, [m1, m2], n=100) # profile speed over 100 iterations
-
- device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- x = x.to(device)
- x.requires_grad = True
- print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
- print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}")
- for m in ops if isinstance(ops, list) else [ops]:
- m = m.to(device) if hasattr(m, 'to') else m # device
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type
- dtf, dtb, t = 0., 0., [0., 0., 0.] # dt forward, backward
- try:
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS
- except:
- flops = 0
-
- for _ in range(n):
- t[0] = time_synchronized()
- y = m(x)
- t[1] = time_synchronized()
- try:
- _ = y.sum().backward()
- t[2] = time_synchronized()
- except: # no backward method
- t[2] = float('nan')
- dtf += (t[1] - t[0]) * 1000 / n # ms per op forward
- dtb += (t[2] - t[1]) * 1000 / n # ms per op backward
-
- s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
- s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
- p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
- print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}')
-
-
-def is_parallel(model):
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
-
-
-def intersect_dicts(da, db, exclude=()):
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
- return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
- m.inplace = True
-
-
-def find_modules(model, mclass=nn.Conv2d):
- # Finds layer indices matching module class 'mclass'
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
-
-
-def sparsity(model):
- # Return global model sparsity
- a, b = 0., 0.
- for p in model.parameters():
- a += p.numel()
- b += (p == 0).sum()
- return b / a
-
-
-def prune(model, amount=0.3):
- # Prune model to requested global sparsity
- import torch.nn.utils.prune as prune
- print('Pruning model... ', end='')
- for name, m in model.named_modules():
- if isinstance(m, nn.Conv2d):
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
- prune.remove(m, 'weight') # make permanent
- print(' %.3g global sparsity' % sparsity(model))
-
-
-def fuse_conv_and_bn(conv, bn):
- # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
- fusedconv = nn.Conv2d(conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- groups=conv.groups,
- bias=True).requires_grad_(False).to(conv.weight.device)
-
- # prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # prepare spatial bias
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
-
- return fusedconv
-
-
-def model_info(model, verbose=False, img_size=640):
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
- if verbose:
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
- for i, (name, p) in enumerate(model.named_parameters()):
- name = name.replace('module_list.', '')
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
-
- try: # FLOPS
- from thop import profile
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
- img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
- flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
- fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS
- except (ImportError, Exception):
- fs = ''
-
- logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
-
-
-def load_classifier(name='resnet101', n=2):
- # Loads a pretrained model reshaped to n-class output
- model = torchvision.models.__dict__[name](pretrained=True)
-
- # ResNet model properties
- # input_size = [3, 224, 224]
- # input_space = 'RGB'
- # input_range = [0, 1]
- # mean = [0.485, 0.456, 0.406]
- # std = [0.229, 0.224, 0.225]
-
- # Reshape output to n classes
- filters = model.fc.weight.shape[1]
- model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
- model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
- model.fc.out_features = n
- return model
-
-
-def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
- # scales img(bs,3,y,x) by ratio constrained to gs-multiple
- if ratio == 1.0:
- return img
- else:
- h, w = img.shape[2:]
- s = (int(h * ratio), int(w * ratio)) # new size
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
- if not same_shape: # pad/crop img
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
-
-
-def copy_attr(a, b, include=(), exclude=()):
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
- for k, v in b.__dict__.items():
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
- continue
- else:
- setattr(a, k, v)
-
-
-class ModelEMA:
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
- Keep a moving average of everything in the model state_dict (parameters and buffers).
- This is intended to allow functionality like
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- A smoothed version of the weights is necessary for some training schemes to perform well.
- This class is sensitive where it is initialized in the sequence of model init,
- GPU assignment and distributed training wrappers.
- """
-
- def __init__(self, model, decay=0.9999, updates=0):
- # Create EMA
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
- # if next(model.parameters()).device.type != 'cpu':
- # self.ema.half() # FP16 EMA
- self.updates = updates # number of EMA updates
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
- for p in self.ema.parameters():
- p.requires_grad_(False)
-
- def update(self, model):
- # Update EMA parameters
- with torch.no_grad():
- self.updates += 1
- d = self.decay(self.updates)
-
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
- for k, v in self.ema.state_dict().items():
- if v.dtype.is_floating_point:
- v *= d
- v += (1. - d) * msd[k].detach()
-
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
- # Update EMA attributes
- copy_attr(self.ema, model, include, exclude)
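-
- # Hedged usage sketch (the optimizer/dataloader/loss names below are placeholders,
- # not objects defined in this file):
- #   ema = ModelEMA(model)
- #   for imgs, targets in dataloader:
- #       loss = compute_loss(model(imgs), targets)
- #       loss.backward(); optimizer.step(); optimizer.zero_grad()
- #       ema.update(model)      # fold the current weights into the moving average
- #   ema.update_attr(model)     # copy non-tensor attributes onto ema.ema before saving/eval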
-
-
-class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
- def _check_input_dim(self, input):
- # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc.
- # is this method, which is overridden by each sub-class.
- # The original goal of this method was tensor sanity checks.
- # If you're ok bypassing those sanity checks (e.g. if you trust your inference
- # to provide inputs with the right dimensions), then you can just use this method
- # for easy conversion from SyncBatchNorm
- # (unfortunately, SyncBatchNorm does not store the original class - if it did
- # we could return the one that was originally created)
- return
-
-def revert_sync_batchnorm(module):
- # this is very similar to the function that it is trying to revert:
- # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679
- module_output = module
- if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm):
- new_cls = BatchNormXd
- module_output = BatchNormXd(module.num_features,
- module.eps, module.momentum,
- module.affine,
- module.track_running_stats)
- if module.affine:
- with torch.no_grad():
- module_output.weight = module.weight
- module_output.bias = module.bias
- module_output.running_mean = module.running_mean
- module_output.running_var = module.running_var
- module_output.num_batches_tracked = module.num_batches_tracked
- if hasattr(module, "qconfig"):
- module_output.qconfig = module.qconfig
- for name, child in module.named_children():
- module_output.add_module(name, revert_sync_batchnorm(child))
- del module
- return module_output
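-
-# Hedged usage sketch: convert a SyncBatchNorm-wrapped model back to plain batch norm
-# for single-device inference or tracing (`model` is a placeholder name):
-#   model = revert_sync_batchnorm(model)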
-
-
-class TracedModel(nn.Module):
-
- def __init__(self, model=None, device=None, img_size=(640,640)):
- super(TracedModel, self).__init__()
-
- print(" Convert model to Traced-model... ")
- self.stride = model.stride
- self.names = model.names
- self.model = model
-
- self.model = revert_sync_batchnorm(self.model)
- self.model.to('cpu')
- self.model.eval()
-
- self.detect_layer = self.model.model[-1]
- self.model.traced = True
-
- rand_example = torch.rand(1, 3, *img_size) if isinstance(img_size, (list, tuple)) else torch.rand(1, 3, img_size, img_size)
-
- traced_script_module = torch.jit.trace(self.model, rand_example, strict=False)
- #traced_script_module = torch.jit.script(self.model)
- traced_script_module.save("traced_model.pt")
- print(" traced_script_module saved! ")
- self.model = traced_script_module
- self.model.to(device)
- self.detect_layer.to(device)
- print(" model is traced! \n")
-
- def forward(self, x, augment=False, profile=False):
- out = self.model(x)
- out = self.detect_layer(out)
- return out
\ No newline at end of file
diff --git a/spaces/UzNutq/README/README.md b/spaces/UzNutq/README/README.md
deleted file mode 100644
index 67bde24474b2ddeb20470f34662efd907766da3e..0000000000000000000000000000000000000000
--- a/spaces/UzNutq/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 💻
-colorFrom: yellow
-colorTo: blue
-sdk: static
-pinned: true
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/Venafi/Vikram-Explorer/README.md b/spaces/Venafi/Vikram-Explorer/README.md
deleted file mode 100644
index e0d49ac46a1c41b31bcb504432018f8144e4c561..0000000000000000000000000000000000000000
--- a/spaces/Venafi/Vikram-Explorer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Vikram Explorer Project
-emoji: 🚀
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py
deleted file mode 100644
index d24df76b6a6ccfc9b244f13a51bfc124b398a271..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import os
-from ..typing import sha256, Dict, get_type_hints
-
-url = None
-model = None
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- return
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py b/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py
deleted file mode 100644
index 3d3140c216b85a14a33d9b1fa09eb6d19465f741..0000000000000000000000000000000000000000
--- a/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py
+++ /dev/null
@@ -1,475 +0,0 @@
-from functools import partial
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-from einops import rearrange, repeat
-try:
- import xformers
- import xformers.ops
- XFORMERS_IS_AVAILBLE = True
-except Exception:
- XFORMERS_IS_AVAILBLE = False
-from lvdm.common import (
- checkpoint,
- exists,
- default,
-)
-from lvdm.basics import (
- zero_module,
-)
-
-class RelativePosition(nn.Module):
- """ https://github.com/evelinehong/Transformer_Relative_Position_PyTorch/blob/master/relative_position.py """
-
- def __init__(self, num_units, max_relative_position):
- super().__init__()
- self.num_units = num_units
- self.max_relative_position = max_relative_position
- self.embeddings_table = nn.Parameter(torch.Tensor(max_relative_position * 2 + 1, num_units))
- nn.init.xavier_uniform_(self.embeddings_table)
-
- def forward(self, length_q, length_k):
- device = self.embeddings_table.device
- range_vec_q = torch.arange(length_q, device=device)
- range_vec_k = torch.arange(length_k, device=device)
- distance_mat = range_vec_k[None, :] - range_vec_q[:, None]
- distance_mat_clipped = torch.clamp(distance_mat, -self.max_relative_position, self.max_relative_position)
- final_mat = distance_mat_clipped + self.max_relative_position
- final_mat = final_mat.long()
- embeddings = self.embeddings_table[final_mat]
- return embeddings
-
-
-class CrossAttention(nn.Module):
-
- def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.,
- relative_position=False, temporal_length=None, img_cross_attention=False):
- super().__init__()
- inner_dim = dim_head * heads
- context_dim = default(context_dim, query_dim)
-
- self.scale = dim_head**-0.5
- self.heads = heads
- self.dim_head = dim_head
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))
-
- self.image_cross_attention_scale = 1.0
- self.text_context_len = 77
- self.img_cross_attention = img_cross_attention
- if self.img_cross_attention:
- self.to_k_ip = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v_ip = nn.Linear(context_dim, inner_dim, bias=False)
-
- self.relative_position = relative_position
- if self.relative_position:
- assert(temporal_length is not None)
- self.relative_position_k = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)
- self.relative_position_v = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)
- else:
- ## only used for spatial attention, while NOT for temporal attention
- if XFORMERS_IS_AVAILBLE and temporal_length is None:
- self.forward = self.efficient_forward
-
- def forward(self, x, context=None, mask=None):
- h = self.heads
-
- q = self.to_q(x)
- context = default(context, x)
- ## considering image token additionally
- if context is not None and self.img_cross_attention:
- context, context_img = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:]
- k = self.to_k(context)
- v = self.to_v(context)
- k_ip = self.to_k_ip(context_img)
- v_ip = self.to_v_ip(context_img)
- else:
- k = self.to_k(context)
- v = self.to_v(context)
-
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
- sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale
- if self.relative_position:
- len_q, len_k, len_v = q.shape[1], k.shape[1], v.shape[1]
- k2 = self.relative_position_k(len_q, len_k)
- sim2 = einsum('b t d, t s d -> b t s', q, k2) * self.scale # TODO check
- sim += sim2
- del k
-
- if exists(mask):
- ## feasible for causal attention mask only
- max_neg_value = -torch.finfo(sim.dtype).max
- mask = repeat(mask, 'b i j -> (b h) i j', h=h)
- sim.masked_fill_(~(mask>0.5), max_neg_value)
-
- # attention, what we cannot get enough of
- sim = sim.softmax(dim=-1)
- out = torch.einsum('b i j, b j d -> b i d', sim, v)
- if self.relative_position:
- v2 = self.relative_position_v(len_q, len_v)
- out2 = einsum('b t s, t s d -> b t d', sim, v2) # TODO check
- out += out2
- out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
-
- ## considering image token additionally
- if context is not None and self.img_cross_attention:
- k_ip, v_ip = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (k_ip, v_ip))
- sim_ip = torch.einsum('b i d, b j d -> b i j', q, k_ip) * self.scale
- del k_ip
- sim_ip = sim_ip.softmax(dim=-1)
- out_ip = torch.einsum('b i j, b j d -> b i d', sim_ip, v_ip)
- out_ip = rearrange(out_ip, '(b h) n d -> b n (h d)', h=h)
- out = out + self.image_cross_attention_scale * out_ip
- del q
-
- return self.to_out(out)
-
- def efficient_forward(self, x, context=None, mask=None):
- q = self.to_q(x)
- context = default(context, x)
-
- ## considering image token additionally
- if context is not None and self.img_cross_attention:
- context, context_img = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:]
- k = self.to_k(context)
- v = self.to_v(context)
- k_ip = self.to_k_ip(context_img)
- v_ip = self.to_v_ip(context_img)
- else:
- k = self.to_k(context)
- v = self.to_v(context)
-
- b, _, _ = q.shape
- q, k, v = map(
- lambda t: t.unsqueeze(3)
- .reshape(b, t.shape[1], self.heads, self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b * self.heads, t.shape[1], self.dim_head)
- .contiguous(),
- (q, k, v),
- )
- # actually compute the attention, what we cannot get enough of
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)
-
- ## considering image token additionally
- if context is not None and self.img_cross_attention:
- k_ip, v_ip = map(
- lambda t: t.unsqueeze(3)
- .reshape(b, t.shape[1], self.heads, self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b * self.heads, t.shape[1], self.dim_head)
- .contiguous(),
- (k_ip, v_ip),
- )
- out_ip = xformers.ops.memory_efficient_attention(q, k_ip, v_ip, attn_bias=None, op=None)
- out_ip = (
- out_ip.unsqueeze(0)
- .reshape(b, self.heads, out.shape[1], self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b, out.shape[1], self.heads * self.dim_head)
- )
-
- if exists(mask):
- raise NotImplementedError
- out = (
- out.unsqueeze(0)
- .reshape(b, self.heads, out.shape[1], self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b, out.shape[1], self.heads * self.dim_head)
- )
- if context is not None and self.img_cross_attention:
- out = out + self.image_cross_attention_scale * out_ip
- return self.to_out(out)
-
-
-class BasicTransformerBlock(nn.Module):
-
- def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,
- disable_self_attn=False, attention_cls=None, img_cross_attention=False):
- super().__init__()
- attn_cls = CrossAttention if attention_cls is None else attention_cls
- self.disable_self_attn = disable_self_attn
- self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,
- context_dim=context_dim if self.disable_self_attn else None)
- self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
- self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, dropout=dropout,
- img_cross_attention=img_cross_attention)
- self.norm1 = nn.LayerNorm(dim)
- self.norm2 = nn.LayerNorm(dim)
- self.norm3 = nn.LayerNorm(dim)
- self.checkpoint = checkpoint
-
- def forward(self, x, context=None, mask=None):
- ## implementation tricks: because checkpointing doesn't support non-tensor (e.g. None or scalar) arguments
- input_tuple = (x,) ## should not be (x), otherwise *input_tuple will decouple x into multiple arguments
- if context is not None:
- input_tuple = (x, context)
- if mask is not None:
- forward_mask = partial(self._forward, mask=mask)
- return checkpoint(forward_mask, (x,), self.parameters(), self.checkpoint)
- if context is not None and mask is not None:
- input_tuple = (x, context, mask)
- return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)
-
- def _forward(self, x, context=None, mask=None):
- x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None, mask=mask) + x
- x = self.attn2(self.norm2(x), context=context, mask=mask) + x
- x = self.ff(self.norm3(x)) + x
- return x
-
-
-class SpatialTransformer(nn.Module):
- """
- Transformer block for image-like data in spatial axis.
- First, project the input (aka embedding)
- and reshape to b, t, d.
- Then apply standard transformer action.
- Finally, reshape to image
- NEW: use_linear for more efficiency instead of the 1x1 convs
- """
-
- def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None,
- use_checkpoint=True, disable_self_attn=False, use_linear=False, img_cross_attention=False):
- super().__init__()
- self.in_channels = in_channels
- inner_dim = n_heads * d_head
- self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
- if not use_linear:
- self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)
- else:
- self.proj_in = nn.Linear(in_channels, inner_dim)
-
- self.transformer_blocks = nn.ModuleList([
- BasicTransformerBlock(
- inner_dim,
- n_heads,
- d_head,
- dropout=dropout,
- context_dim=context_dim,
- img_cross_attention=img_cross_attention,
- disable_self_attn=disable_self_attn,
- checkpoint=use_checkpoint) for d in range(depth)
- ])
- if not use_linear:
- self.proj_out = zero_module(nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0))
- else:
- self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))
- self.use_linear = use_linear
-
-
- def forward(self, x, context=None):
- b, c, h, w = x.shape
- x_in = x
- x = self.norm(x)
- if not self.use_linear:
- x = self.proj_in(x)
- x = rearrange(x, 'b c h w -> b (h w) c').contiguous()
- if self.use_linear:
- x = self.proj_in(x)
- for i, block in enumerate(self.transformer_blocks):
- x = block(x, context=context)
- if self.use_linear:
- x = self.proj_out(x)
- x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()
- if not self.use_linear:
- x = self.proj_out(x)
- return x + x_in
-
-
-class TemporalTransformer(nn.Module):
- """
- Transformer block for image-like data in temporal axis.
- First, reshape to b, t, d.
- Then apply standard transformer action.
- Finally, reshape to image
- """
- def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None,
- use_checkpoint=True, use_linear=False, only_self_att=True, causal_attention=False,
- relative_position=False, temporal_length=None):
- super().__init__()
- self.only_self_att = only_self_att
- self.relative_position = relative_position
- self.causal_attention = causal_attention
- self.in_channels = in_channels
- inner_dim = n_heads * d_head
- self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
- if not use_linear:
- self.proj_in = nn.Conv1d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)
- else:
- self.proj_in = nn.Linear(in_channels, inner_dim)
-
- if relative_position:
- assert(temporal_length is not None)
- attention_cls = partial(CrossAttention, relative_position=True, temporal_length=temporal_length)
- else:
- attention_cls = None
- if self.causal_attention:
- assert(temporal_length is not None)
- self.mask = torch.tril(torch.ones([1, temporal_length, temporal_length]))
-
- if self.only_self_att:
- context_dim = None
- self.transformer_blocks = nn.ModuleList([
- BasicTransformerBlock(
- inner_dim,
- n_heads,
- d_head,
- dropout=dropout,
- context_dim=context_dim,
- attention_cls=attention_cls,
- checkpoint=use_checkpoint) for d in range(depth)
- ])
- if not use_linear:
- self.proj_out = zero_module(nn.Conv1d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0))
- else:
- self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))
- self.use_linear = use_linear
-
- def forward(self, x, context=None):
- b, c, t, h, w = x.shape
- x_in = x
- x = self.norm(x)
- x = rearrange(x, 'b c t h w -> (b h w) c t').contiguous()
- if not self.use_linear:
- x = self.proj_in(x)
- x = rearrange(x, 'bhw c t -> bhw t c').contiguous()
- if self.use_linear:
- x = self.proj_in(x)
-
- if self.causal_attention:
- mask = self.mask.to(x.device)
- mask = repeat(mask, 'l i j -> (l bhw) i j', bhw=b*h*w)
- else:
- mask = None
-
- if self.only_self_att:
- ## note: if no context is given, cross-attention defaults to self-attention
- for i, block in enumerate(self.transformer_blocks):
- x = block(x, mask=mask)
- x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous()
- else:
- x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous()
- context = rearrange(context, '(b t) l con -> b t l con', t=t).contiguous()
- for i, block in enumerate(self.transformer_blocks):
- # process each batch element one by one (some backends cannot handle a dimension larger than 65,535)
- for j in range(b):
- context_j = repeat(
- context[j],
- 't l con -> (t r) l con', r=(h * w) // t, t=t).contiguous()
- ## note: the causal mask is not applied in the cross-attention case
- x[j] = block(x[j], context=context_j)
-
- if self.use_linear:
- x = self.proj_out(x)
- x = rearrange(x, 'b (h w) t c -> b c t h w', h=h, w=w).contiguous()
- if not self.use_linear:
- x = rearrange(x, 'b hw t c -> (b hw) c t').contiguous()
- x = self.proj_out(x)
- x = rearrange(x, '(b h w) c t -> b c t h w', b=b, h=h, w=w).contiguous()
-
- return x + x_in
-
-
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-class LinearAttention(nn.Module):
- def __init__(self, dim, heads=4, dim_head=32):
- super().__init__()
- self.heads = heads
- hidden_dim = dim_head * heads
- self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False)
- self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
- def forward(self, x):
- b, c, h, w = x.shape
- qkv = self.to_qkv(x)
- q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3)
- k = k.softmax(dim=-1)
- context = torch.einsum('bhdn,bhen->bhde', k, v)
- out = torch.einsum('bhde,bhdn->bhen', context, q)
- out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
- return self.to_out(out)
-
-
-class SpatialSelfAttention(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = rearrange(q, 'b c h w -> b (h w) c')
- k = rearrange(k, 'b c h w -> b c (h w)')
- w_ = torch.einsum('bij,bjk->bik', q, k)
-
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = rearrange(v, 'b c h w -> b c (h w)')
- w_ = rearrange(w_, 'b i j -> b j i')
- h_ = torch.einsum('bij,bjk->bik', v, w_)
- h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
- h_ = self.proj_out(h_)
-
- return x+h_
diff --git a/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py b/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py
deleted file mode 100644
index ce285299a1281a93ff24a7226c101b5dd9ba75b9..0000000000000000000000000000000000000000
--- a/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py
+++ /dev/null
@@ -1,160 +0,0 @@
-from PIL import Image
-
-import torch
-from transformers import StoppingCriteria, StoppingCriteriaList
-
-import dataclasses
-from enum import auto, Enum
-from typing import List, Any
-
-
-class SeparatorStyle(Enum):
- """Different separator style."""
- SINGLE = auto()
- TWO = auto()
-
-
-@dataclasses.dataclass
-class Conversation:
- """A class that keeps all conversation history."""
- system: str
- roles: List[str]
- messages: List[List[str]]
- offset: int
- # system_img: List[Image.Image] = []
- sep_style: SeparatorStyle = SeparatorStyle.SINGLE
- sep: str = "###"
- sep2: str = None
-
- skip_next: bool = False
- conv_id: Any = None
-
- def get_prompt(self):
- if self.sep_style == SeparatorStyle.SINGLE:
- ret = self.system + self.sep
- for role, message in self.messages:
- if message:
- #ret += role + ": " + message + self.sep
- ret += role + ":" + message + self.sep
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == SeparatorStyle.TWO:
- seps = [self.sep, self.sep2]
- ret = self.system + seps[0]
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + ": " + message[0] + seps[i % 2] if isinstance(message, list) else role + ": " + message + seps[i % 2]
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == "7132":
- seps = [self.sep, self.sep2]
- ret = self.system
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + ": " + message[0] + seps[i % 2] if isinstance(message, list) else role + ": " + message + seps[i % 2]
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == "raw":
- seps = [self.sep, self.sep2]
- ret = self.system
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + message + seps[i % 2]
- else:
- ret += role
- return ret
-
- else:
- raise ValueError(f"Invalid style: {self.sep_style}")
-
- def append_message(self, role, message):
- self.messages.append([role, message])
-
- def to_gradio_chatbot(self):
- ret = []
- for i, (role, msg) in enumerate(self.messages[self.offset:]):
- if i % 2 == 0:
- if type(msg) is tuple or type(msg) is list:
- import base64
- from io import BytesIO
- msg, image = msg
-
- max_hw, min_hw = max(image.size), min(image.size)
- aspect_ratio = max_hw / min_hw
- max_len, min_len = 800, 400
- shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
- longest_edge = int(shortest_edge * aspect_ratio)
- W, H = image.size
- if H > W:
- H, W = longest_edge, shortest_edge
- else:
- H, W = shortest_edge, longest_edge
- image = image.resize((W, H))
- # image = image.resize((224, 224))
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_b64_str = base64.b64encode(buffered.getvalue()).decode()
- img_str = f'<img src="data:image/jpeg;base64,{img_b64_str}" alt="user upload image" />'
- msg = msg.replace('<image>', img_str)
- ret.append([msg, None])
- else:
- ret[-1][-1] = msg
- return ret
-
- def copy(self):
- return Conversation(
- system=self.system,
- # system_img=self.system_img,
- roles=self.roles,
- messages=[[x, y] for x, y in self.messages],
- offset=self.offset,
- sep_style=self.sep_style,
- sep=self.sep,
- sep2=self.sep2,
- conv_id=self.conv_id)
-
- def dict(self):
- return {
- "system": self.system,
- # "system_img": self.system_img,
- "roles": self.roles,
- "messages": self.messages,
- "offset": self.offset,
- "sep": self.sep,
- "sep2": self.sep2,
- "conv_id": self.conv_id,
- }
-
-
-class StoppingCriteriaSub(StoppingCriteria):
-
- def __init__(self, stops=[], encounters=1):
- super().__init__()
- self.stops = stops
-
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
- for stop in self.stops:
- if torch.all((stop == input_ids[0][-len(stop):])).item():
- return True
-
- return False
-
-
-meta = """meta instruction
-You are an AI assistant whose name is 浦语.
-- 浦语 is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
-- 浦语 can understand and communicate fluently in the language chosen by the user such as English and 中文.
-conversation
-"""
-CONV_VISION_7132_v2 = Conversation(
- system=meta,
- roles=(" <|User|>", " <|Bot|>"),
- messages=(),
- offset=0,
- sep_style="7132",
- sep="",
- sep2="",
-)
diff --git a/spaces/Yan233th/so-vits-svc-models/resample.py b/spaces/Yan233th/so-vits-svc-models/resample.py
deleted file mode 100644
index f84119cd239b49d260ed1d9e367206adcc3aa03d..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/resample.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- # speakers 's5', 'p280', 'p315' are excluded
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=None)
- wav, _ = librosa.effects.trim(wav, top_db=20)
- peak = np.abs(wav).max()
- if peak > 1.0:
- wav = 0.98 * wav / peak
- wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
- wav2 /= max(wav2.max(), -wav2.min())
- save_name = wav_name
- save_path2 = os.path.join(args.out_dir2, speaker, save_name)
- wavfile.write(
- save_path2,
- args.sr2,
- (wav2 * np.iinfo(np.int16).max).astype(np.int16)
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr2", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
- parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir")
- args = parser.parse_args()
- num_processes = cpu_count() - 2 if cpu_count() > 4 else 1
- pool = Pool(processes=num_processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
diff --git a/spaces/Yiqin/ChatVID/model/fastchat/conversation.py b/spaces/Yiqin/ChatVID/model/fastchat/conversation.py
deleted file mode 100644
index 6d5555dfe30df5c0193ffd7edf0a0e03f51b78ed..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/fastchat/conversation.py
+++ /dev/null
@@ -1,289 +0,0 @@
-"""
-Conversation prompt template.
-
-Now we support
-- Vicuna
-- Koala
-- OpenAssistant/oasst-sft-1-pythia-12b
-- StabilityAI/stablelm-tuned-alpha-7b
-- databricks/dolly-v2-12b
-- THUDM/chatglm-6b
-- project-baize/baize-lora-7B
-- Alpaca/LLaMa
-"""
-
-import dataclasses
-from enum import auto, Enum
-from typing import List, Tuple, Any
-
-
-class SeparatorStyle(Enum):
- """Different separator style."""
-
- SINGLE = auto()
- TWO = auto()
- DOLLY = auto()
- OASST_PYTHIA = auto()
- BAIZE = auto()
-
-
-@dataclasses.dataclass
-class Conversation:
- """A class that keeps all conversation history."""
-
- system: str
- roles: List[str]
- messages: List[List[str]]
- offset: int
- sep_style: SeparatorStyle = SeparatorStyle.SINGLE
- sep: str = "###"
- sep2: str = None
-
- # Used for gradio server
- skip_next: bool = False
- conv_id: Any = None
-
- def get_prompt(self):
- if self.sep_style == SeparatorStyle.SINGLE:
- ret = self.system
- for role, message in self.messages:
- if message:
- ret += self.sep + " " + role + ": " + message
- else:
- ret += self.sep + " " + role + ":"
- return ret
- elif self.sep_style == SeparatorStyle.TWO:
- seps = [self.sep, self.sep2]
- ret = self.system + seps[0]
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + ": " + message + seps[i % 2]
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == SeparatorStyle.DOLLY:
- seps = [self.sep, self.sep2]
- ret = self.system
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + ":\n" + message + seps[i % 2]
- if i % 2 == 1:
- ret += "\n\n"
- else:
- ret += role + ":\n"
- return ret
- elif self.sep_style == SeparatorStyle.OASST_PYTHIA:
- ret = self.system
- for role, message in self.messages:
- if message:
- ret += role + message + self.sep
- else:
- ret += role
- return ret
- elif self.sep_style == SeparatorStyle.BAIZE:
- ret = self.system
- for role, message in self.messages:
- if message:
- ret += "\n" + role + message
- else:
- ret += "\n" + role
- return ret
- else:
- raise ValueError(f"Invalid style: {self.sep_style}")
-
- def append_message(self, role, message):
- self.messages.append([role, message])
-
- def to_gradio_chatbot(self):
- ret = []
- for i, (role, msg) in enumerate(self.messages[self.offset :]):
- if i % 2 == 0:
- ret.append([msg, None])
- else:
- ret[-1][-1] = msg
- return ret
-
- def copy(self):
- return Conversation(
- system=self.system,
- roles=self.roles,
- messages=[[x, y] for x, y in self.messages],
- offset=self.offset,
- sep_style=self.sep_style,
- sep=self.sep,
- sep2=self.sep2,
- conv_id=self.conv_id,
- )
-
- def dict(self):
- return {
- "system": self.system,
- "roles": self.roles,
- "messages": self.messages,
- "offset": self.offset,
- "sep": self.sep,
- "sep2": self.sep2,
- "conv_id": self.conv_id,
- }
-
-
-conv_one_shot = Conversation(
- system="A chat between a curious human and an artificial intelligence assistant. "
- "The assistant gives helpful, detailed, and polite answers to the human's questions.",
- roles=("Human", "Assistant"),
- messages=(
- (
- "Human",
- "What are the key differences between renewable and non-renewable energy sources?",
- ),
- (
- "Assistant",
- "Renewable energy sources are those that can be replenished naturally in a relatively "
- "short amount of time, such as solar, wind, hydro, geothermal, and biomass. "
- "Non-renewable energy sources, on the other hand, are finite and will eventually be "
- "depleted, such as coal, oil, and natural gas. Here are some key differences between "
- "renewable and non-renewable energy sources:\n"
- "1. Availability: Renewable energy sources are virtually inexhaustible, while non-renewable "
- "energy sources are finite and will eventually run out.\n"
- "2. Environmental impact: Renewable energy sources have a much lower environmental impact "
- "than non-renewable sources, which can lead to air and water pollution, greenhouse gas emissions, "
- "and other negative effects.\n"
- "3. Cost: Renewable energy sources can be more expensive to initially set up, but they typically "
- "have lower operational costs than non-renewable sources.\n"
- "4. Reliability: Renewable energy sources are often more reliable and can be used in more remote "
- "locations than non-renewable sources.\n"
- "5. Flexibility: Renewable energy sources are often more flexible and can be adapted to different "
- "situations and needs, while non-renewable sources are more rigid and inflexible.\n"
- "6. Sustainability: Renewable energy sources are more sustainable over the long term, while "
- "non-renewable sources are not, and their depletion can lead to economic and social instability.",
- ),
- ),
- offset=2,
- sep_style=SeparatorStyle.SINGLE,
- sep="###",
-)
-
-
-conv_vicuna_v1_1 = Conversation(
- system="A chat between a curious user and an artificial intelligence assistant. "
- "The assistant gives helpful, detailed, and polite answers to the user's questions.",
- roles=("USER", "ASSISTANT"),
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.TWO,
- sep=" ",
- sep2="",
-)
-
-
-conv_koala_v1 = Conversation(
- system="BEGINNING OF CONVERSATION:",
- roles=("USER", "GPT"),
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.TWO,
- sep=" ",
- sep2="",
-)
-
-conv_dolly = Conversation(
- system="Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n",
- roles=("### Instruction", "### Response"),
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.DOLLY,
- sep="\n\n",
- sep2="### End",
-)
-
-conv_oasst = Conversation(
- system="",
- roles=("<|prompter|>", "<|assistant|>"),
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.OASST_PYTHIA,
- sep="<|endoftext|>",
-)
-
-conv_stablelm = Conversation(
- system="""<|SYSTEM|># StableLM Tuned (Alpha version)
-- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
-- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
-- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
-- StableLM will refuse to participate in anything that could harm a human.
-""",
- roles=("<|USER|>", "<|ASSISTANT|>"),
- messages=(),
- offset=0,
- sep_style=SeparatorStyle.OASST_PYTHIA,
- sep="",
-)
-
-conv_baize = Conversation(
- system="The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.",
- roles=("[|Human|]", "[|AI|]"),
- messages=(
- ("[|Human|]", "Hello!"),
- ("[|AI|]", "Hi!"),
- ),
- offset=2,
- sep_style=SeparatorStyle.BAIZE,
- sep="[|Human|]",
-)
-
-
-conv_templates = {
- "conv_one_shot": conv_one_shot,
- "vicuna_v1.1": conv_vicuna_v1_1,
- "koala_v1": conv_koala_v1,
- "dolly": conv_dolly,
- "oasst": conv_oasst,
- "baize": conv_baize,
-}
-
-
-def get_default_conv_template(model_name):
- model_name = model_name.lower()
- if "vicuna" in model_name or "output" in model_name:
- return conv_vicuna_v1_1
- elif "koala" in model_name:
- return conv_koala_v1
- elif "dolly-v2" in model_name:
- return conv_dolly
- elif "oasst" in model_name and "pythia" in model_name:
- return conv_oasst
- elif "baize" in model_name:
- return conv_baize
- elif "stablelm" in model_name:
- return conv_stablelm
- return conv_one_shot
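-
-# Hedged usage sketch (the model name below is illustrative):
-#   conv = get_default_conv_template("vicuna-7b").copy()
-#   conv.append_message(conv.roles[0], "Hello!")
-#   conv.append_message(conv.roles[1], None)
-#   prompt = conv.get_prompt()  # system prompt + " USER: Hello! ASSISTANT:"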
-
-
-def compute_skip_echo_len(model_name, conv, prompt):
- model_name = model_name.lower()
- if "chatglm" in model_name:
- skip_echo_len = len(conv.messages[-2][1]) + 1
- elif "dolly-v2" in model_name:
- special_toks = ["### Instruction:", "### Response:", "### End"]
- skip_echo_len = len(prompt)
- for tok in special_toks:
- skip_echo_len -= prompt.count(tok) * len(tok)
- elif "oasst" in model_name and "pythia" in model_name:
- special_toks = ["<|prompter|>", "<|assistant|>", "<|endoftext|>"]
- skip_echo_len = len(prompt)
- for tok in special_toks:
- skip_echo_len -= prompt.count(tok) * len(tok)
- elif "stablelm" in model_name:
- special_toks = ["<|SYSTEM|>", "<|USER|>", "<|ASSISTANT|>"]
- skip_echo_len = len(prompt)
- for tok in special_toks:
- skip_echo_len -= prompt.count(tok) * len(tok)
- elif "baize" in model_name:
- skip_echo_len = len(prompt)
- else:
- skip_echo_len = len(prompt) + 1 - prompt.count("</s>") * 3
- return skip_echo_len
-
-
-default_conversation = conv_one_shot
-
-
-if __name__ == "__main__":
- print(default_conversation.get_prompt())
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py
deleted file mode 100644
index 6b9637f3ad41e3ba513636219e49371296d9ab9f..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py
-# Modified by Xingyi Zhou
-# The original code is under Apache-2.0 License
-import numpy as np
-from PIL import Image
-
-from detectron2.data.transforms.augmentation import Augmentation
-from .custom_transform import EfficientDetResizeCropTransform
-
-__all__ = [
- "EfficientDetResizeCrop",
-]
-
-
-class EfficientDetResizeCrop(Augmentation):
- """
- Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge.
- If `max_size` is reached, then downscale so that the longer edge does not exceed max_size.
- """
-
- def __init__(
- self, size, scale, interp=Image.BILINEAR
- ):
- """
- """
- super().__init__()
- self.target_size = (size, size)
- self.scale = scale
- self.interp = interp
-
- def get_transform(self, img):
- # Select a random scale factor.
- scale_factor = np.random.uniform(*self.scale)
- scaled_target_height = scale_factor * self.target_size[0]
- scaled_target_width = scale_factor * self.target_size[1]
- # Recompute the accurate scale_factor using rounded scaled image size.
- width, height = img.shape[1], img.shape[0]
- img_scale_y = scaled_target_height / height
- img_scale_x = scaled_target_width / width
- img_scale = min(img_scale_y, img_scale_x)
-
- # Select non-zero random offset (x, y) if scaled image is larger than target size
- scaled_h = int(height * img_scale)
- scaled_w = int(width * img_scale)
- offset_y = scaled_h - self.target_size[0]
- offset_x = scaled_w - self.target_size[1]
- offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
- offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
- return EfficientDetResizeCropTransform(
- scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
diff --git a/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md b/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md
deleted file mode 100644
index b88b9501b4e0a29f012c5325e509e4935d920e04..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Working With Platform Instances
-
-DataHub's metadata model for Datasets supports a three-part key currently:
-- Data Platform (e.g. urn:li:dataPlatform:mysql)
-- Name (e.g. db.schema.name)
-- Env or Fabric (e.g. DEV, PROD, etc.)
-
-This naming scheme unfortunately does not allow for easy representation of the multiplicity of platforms (or technologies) that might be deployed at an organization within the same environment or fabric. For example, an organization might have multiple Redshift instances in Production and would want to see all the data assets located in those instances inside the DataHub metadata repository.
-
-As part of the `v0.8.24+` releases, we are unlocking the first phase of supporting Platform Instances in the metadata model. This is done via two main additions:
-- The `dataPlatformInstance` aspect that has been added to Datasets which allows datasets to be associated to an instance of a platform
-- Enhancements to all ingestion sources that allow them to attach a platform instance to the recipe, changing the generated urns from the `urn:li:dataset:(urn:li:dataPlatform:<platform>,<table_name>,ENV)` format to the `urn:li:dataset:(urn:li:dataPlatform:<platform>,<platform_instance>.<table_name>,ENV)` format. Sources that produce lineage to datasets in other platforms (e.g. Looker, Superset etc.) also have specific configuration additions that allow the recipe author to specify the mapping between a platform and the instance name that it should be mapped to.
-
-
-
-## Naming Platform Instances
-
-When configuring a platform instance, choose an instance name that is understandable and will be stable for the foreseeable future. e.g. `core_warehouse` or `finance_redshift` are allowed names, as are pure guids like `a37dc708-c512-4fe4-9829-401cd60ed789`. Remember that whatever instance name you choose, you will need to specify it in more than one recipe to ensure that the identifiers produced by different sources will line up.
-
-## Enabling Platform Instances
-
-Read the Ingestion source specific guides for how to enable platform instances in each of them.
-The general pattern is to add an additional optional configuration parameter called `platform_instance`.
-
-e.g. here is how you would configure a recipe to ingest a mysql instance that you want to call `core_finance`
-```yaml
-source:
- type: mysql
- config:
- # Coordinates
- host_port: localhost:3306
- platform_instance: core_finance
- database: dbname
-
- # Credentials
- username: root
- password: example
-
-sink:
- # sink configs
-```
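-
-As an illustrative sketch (the table name and environment below are placeholders), datasets ingested with this recipe would then get instance-qualified urns such as `urn:li:dataset:(urn:li:dataPlatform:mysql,core_finance.dbname.my_table,PROD)` rather than `urn:li:dataset:(urn:li:dataPlatform:mysql,dbname.my_table,PROD)`.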
-
-
-##
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py
deleted file mode 100644
index 267f4b3c56630acd85f9bdc630b7be09abab0aba..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py
+++ /dev/null
@@ -1,336 +0,0 @@
-# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa
-
-from os import path as osp
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _pair
-from torch.onnx.operators import shape_as_tensor
-
-
-def bilinear_grid_sample(im, grid, align_corners=False):
- """Given an input and a flow-field grid, computes the output using input
- values and pixel locations from grid. Only the bilinear interpolation
- method is supported for sampling the input pixels.
-
- Args:
- im (torch.Tensor): Input feature map, shape (N, C, H, W)
- grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2)
- align_corners (bool): If set to True, the extrema (-1 and 1) are
- considered as referring to the center points of the input’s
- corner pixels. If set to False, they are instead considered as
- referring to the corner points of the input’s corner pixels,
- making the sampling more resolution agnostic.
- Returns:
- torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg)
- """
- n, c, h, w = im.shape
- gn, gh, gw, _ = grid.shape
- assert n == gn
-
- x = grid[:, :, :, 0]
- y = grid[:, :, :, 1]
-
- if align_corners:
- x = ((x + 1) / 2) * (w - 1)
- y = ((y + 1) / 2) * (h - 1)
- else:
- x = ((x + 1) * w - 1) / 2
- y = ((y + 1) * h - 1) / 2
-
- x = x.view(n, -1)
- y = y.view(n, -1)
-
- x0 = torch.floor(x).long()
- y0 = torch.floor(y).long()
- x1 = x0 + 1
- y1 = y0 + 1
-
- wa = ((x1 - x) * (y1 - y)).unsqueeze(1)
- wb = ((x1 - x) * (y - y0)).unsqueeze(1)
- wc = ((x - x0) * (y1 - y)).unsqueeze(1)
- wd = ((x - x0) * (y - y0)).unsqueeze(1)
-
- # Apply default for grid_sample function zero padding
- im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0)
- padded_h = h + 2
- padded_w = w + 2
- # save points positions after padding
- x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1
-
- # Clip coordinates to padded image size
- x0 = torch.where(x0 < 0, torch.tensor(0), x0)
- x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0)
- x1 = torch.where(x1 < 0, torch.tensor(0), x1)
- x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1)
- y0 = torch.where(y0 < 0, torch.tensor(0), y0)
- y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0)
- y1 = torch.where(y1 < 0, torch.tensor(0), y1)
- y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1)
-
- im_padded = im_padded.view(n, c, -1)
-
- x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)
-
- Ia = torch.gather(im_padded, 2, x0_y0)
- Ib = torch.gather(im_padded, 2, x0_y1)
- Ic = torch.gather(im_padded, 2, x1_y0)
- Id = torch.gather(im_padded, 2, x1_y1)
-
- return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw)
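-
-# Hedged equivalence sketch (shapes are illustrative): this pure-PyTorch sampler is meant
-# to match F.grid_sample with mode='bilinear' and padding_mode='zeros'.
-#   im = torch.randn(2, 3, 8, 8)
-#   grid = torch.rand(2, 4, 4, 2) * 2 - 1
-#   ref = F.grid_sample(im, grid, mode='bilinear', padding_mode='zeros', align_corners=False)
-#   out = bilinear_grid_sample(im, grid, align_corners=False)  # expected close to ref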
-
-
-def is_in_onnx_export_without_custom_ops():
- from annotator.uniformer.mmcv.ops import get_onnxruntime_op_path
- ort_custom_op_path = get_onnxruntime_op_path()
- return torch.onnx.is_in_onnx_export(
- ) and not osp.exists(ort_custom_op_path)
-
-
-def normalize(grid):
- """Normalize input grid from [-1, 1] to [0, 1]
- Args:
- grid (Tensor): The grid to be normalize, range [-1, 1].
- Returns:
- Tensor: Normalized grid, range [0, 1].
- """
-
- return (grid + 1.0) / 2.0
-
-
-def denormalize(grid):
- """Denormalize input grid from range [0, 1] to [-1, 1]
- Args:
- grid (Tensor): The grid to be denormalize, range [0, 1].
- Returns:
- Tensor: Denormalized grid, range [-1, 1].
- """
-
- return grid * 2.0 - 1.0
-
-
-def generate_grid(num_grid, size, device):
- """Generate regular square grid of points in [0, 1] x [0, 1] coordinate
- space.
-
- Args:
- num_grid (int): The number of grids to sample, one for each region.
- size (tuple(int, int)): The side size of the regular grid.
- device (torch.device): Desired device of returned tensor.
-
- Returns:
- (torch.Tensor): A tensor of shape (num_grid, size[0]*size[1], 2) that
- contains coordinates for the regular grids.
- """
-
- affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device)
- grid = F.affine_grid(
- affine_trans, torch.Size((1, 1, *size)), align_corners=False)
- grid = normalize(grid)
- return grid.view(1, -1, 2).expand(num_grid, -1, -1)
-
-
-def rel_roi_point_to_abs_img_point(rois, rel_roi_points):
- """Convert roi based relative point coordinates to image based absolute
- point coordinates.
-
- Args:
- rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5)
- rel_roi_points (Tensor): Point coordinates inside RoI, relative to
- RoI, location, range (0, 1), shape (N, P, 2)
- Returns:
- Tensor: Image based absolute point coordinates, shape (N, P, 2)
- """
-
- with torch.no_grad():
- assert rel_roi_points.size(0) == rois.size(0)
- assert rois.dim() == 2
- assert rel_roi_points.dim() == 3
- assert rel_roi_points.size(2) == 2
- # remove batch idx
- if rois.size(1) == 5:
- rois = rois[:, 1:]
- abs_img_points = rel_roi_points.clone()
- # To avoid an error during exporting to onnx use independent
- # variables instead inplace computation
- xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0])
- ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1])
- xs += rois[:, None, 0]
- ys += rois[:, None, 1]
- abs_img_points = torch.stack([xs, ys], dim=2)
- return abs_img_points
-
-
-def get_shape_from_feature_map(x):
- """Get spatial resolution of input feature map considering exporting to
- onnx mode.
-
- Args:
- x (torch.Tensor): Input tensor, shape (N, C, H, W)
- Returns:
- torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2)
- """
- if torch.onnx.is_in_onnx_export():
- img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to(
- x.device).float()
- else:
- img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to(
- x.device).float()
- return img_shape
-
-
-def abs_img_point_to_rel_img_point(abs_img_points, img, spatial_scale=1.):
- """Convert image based absolute point coordinates to image based relative
- coordinates for sampling.
-
- Args:
- abs_img_points (Tensor): Image based absolute point coordinates,
- shape (N, P, 2)
- img (tuple/Tensor): (height, width) of image or feature map.
- spatial_scale (float): Scale points by this factor. Default: 1.
-
- Returns:
- Tensor: Image based relative point coordinates for sampling,
- shape (N, P, 2)
- """
-
- assert (isinstance(img, tuple) and len(img) == 2) or \
- (isinstance(img, torch.Tensor) and len(img.shape) == 4)
-
- if isinstance(img, tuple):
- h, w = img
- scale = torch.tensor([w, h],
- dtype=torch.float,
- device=abs_img_points.device)
- scale = scale.view(1, 1, 2)
- else:
- scale = get_shape_from_feature_map(img)
-
- return abs_img_points / scale * spatial_scale
-
-
-def rel_roi_point_to_rel_img_point(rois,
- rel_roi_points,
- img,
- spatial_scale=1.):
- """Convert roi based relative point coordinates to image based absolute
- point coordinates.
-
- Args:
- rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5)
- rel_roi_points (Tensor): Point coordinates inside RoI, relative to
- RoI, location, range (0, 1), shape (N, P, 2)
- img (tuple/Tensor): (height, width) of image or feature map.
- spatial_scale (float): Scale points by this factor. Default: 1.
-
- Returns:
- Tensor: Image based relative point coordinates for sampling,
- shape (N, P, 2)
- """
-
- abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points)
- rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img,
- spatial_scale)
-
- return rel_img_point
-
-
-def point_sample(input, points, align_corners=False, **kwargs):
- """A wrapper around :func:`grid_sample` to support 3D point_coords tensors
- Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to
- lie inside ``[0, 1] x [0, 1]`` square.
-
- Args:
- input (Tensor): Feature map, shape (N, C, H, W).
- points (Tensor): Image based absolute point coordinates (normalized),
- range [0, 1] x [0, 1], shape (N, P, 2) or (N, Hgrid, Wgrid, 2).
- align_corners (bool): Whether align_corners. Default: False
-
- Returns:
- Tensor: Features of `point` on `input`, shape (N, C, P) or
- (N, C, Hgrid, Wgrid).
- """
-
- add_dim = False
- if points.dim() == 3:
- add_dim = True
- points = points.unsqueeze(2)
- if is_in_onnx_export_without_custom_ops():
- # If custom ops for onnx runtime not compiled use python
- # implementation of grid_sample function to make onnx graph
- # with supported nodes
- output = bilinear_grid_sample(
- input, denormalize(points), align_corners=align_corners)
- else:
- output = F.grid_sample(
- input, denormalize(points), align_corners=align_corners, **kwargs)
- if add_dim:
- output = output.squeeze(3)
- return output
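-
-# Hedged usage sketch (shapes are illustrative): sample C-dimensional features at P
-# normalized point coordinates.
-#   feats = torch.randn(2, 256, 32, 32)   # (N, C, H, W)
-#   pts = torch.rand(2, 100, 2)           # (N, P, 2), values in [0, 1] x [0, 1]
-#   sampled = point_sample(feats, pts)    # -> (2, 256, 100), i.e. (N, C, P)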
-
-
-class SimpleRoIAlign(nn.Module):
-
- def __init__(self, output_size, spatial_scale, aligned=True):
- """Simple RoI align in PointRend, faster than standard RoIAlign.
-
- Args:
- output_size (tuple[int]): h, w
- spatial_scale (float): scale the input boxes by this number
- aligned (bool): if False, use the legacy implementation in
- MMDetection, align_corners=True will be used in F.grid_sample.
- If True, align the results more perfectly.
- """
-
- super(SimpleRoIAlign, self).__init__()
- self.output_size = _pair(output_size)
- self.spatial_scale = float(spatial_scale)
- # to be consistent with other RoI ops
- self.use_torchvision = False
- self.aligned = aligned
-
- def forward(self, features, rois):
- num_imgs = features.size(0)
- num_rois = rois.size(0)
- rel_roi_points = generate_grid(
- num_rois, self.output_size, device=rois.device)
-
- if torch.onnx.is_in_onnx_export():
- rel_img_points = rel_roi_point_to_rel_img_point(
- rois, rel_roi_points, features, self.spatial_scale)
- rel_img_points = rel_img_points.reshape(num_imgs, -1,
- *rel_img_points.shape[1:])
- point_feats = point_sample(
- features, rel_img_points, align_corners=not self.aligned)
- point_feats = point_feats.transpose(1, 2)
- else:
- point_feats = []
- for batch_ind in range(num_imgs):
- # unravel batch dim
- feat = features[batch_ind].unsqueeze(0)
- inds = (rois[:, 0].long() == batch_ind)
- if inds.any():
- rel_img_points = rel_roi_point_to_rel_img_point(
- rois[inds], rel_roi_points[inds], feat,
- self.spatial_scale).unsqueeze(0)
- point_feat = point_sample(
- feat, rel_img_points, align_corners=not self.aligned)
- point_feat = point_feat.squeeze(0).transpose(0, 1)
- point_feats.append(point_feat)
-
- point_feats = torch.cat(point_feats, dim=0)
-
- channels = features.size(1)
- roi_feats = point_feats.reshape(num_rois, channels, *self.output_size)
-
- return roi_feats
-
- def __repr__(self):
- format_str = self.__class__.__name__
- format_str += '(output_size={}, spatial_scale={}'.format(
- self.output_size, self.spatial_scale)
- return format_str
diff --git a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh b/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh
deleted file mode 100644
index e2a85230b2745cedb2c98a34ed303082bb1ec48a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-WORKSPACE=${1:-"./workspaces/bytesep"} # Default workspace directory
-
-echo "WORKSPACE=${WORKSPACE}"
-
-# Users can modify the following config file.
-INDEXES_CONFIG_YAML="scripts/2_create_indexes/vctk-musdb18/configs/speech-accompaniment,sr=44100,chn=2.yaml"
-
-# Create indexes for training.
-python3 bytesep/dataset_creation/create_indexes/create_indexes.py \
- --workspace=$WORKSPACE \
- --config_yaml=$INDEXES_CONFIG_YAML
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py
deleted file mode 100644
index 292f2a92287bfd201815748465727b76d9a5008e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py
+++ /dev/null
@@ -1,163 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-"""Distributed process launcher.
-
-This code is modified from https://github.com/pytorch/pytorch/blob/v1.3.0/torch/distributed/launch.py.
-
-"""
-import os
-import subprocess
-import sys
-
-from argparse import ArgumentParser
-from argparse import REMAINDER
-
-
-def parse_args():
- """Parse arguments."""
- parser = ArgumentParser(
- description="PyTorch distributed training launch "
- "helper utilty that will spawn up "
- "multiple distributed processes"
- )
-
- # Optional arguments for the launch helper
- parser.add_argument(
- "--nnodes",
- type=int,
- default=1,
- help="The number of nodes to use for distributed " "training",
- )
- parser.add_argument(
- "--node_rank",
- type=int,
- default=0,
- help="The rank of the node for multi-node distributed " "training",
- )
- parser.add_argument(
- "--nproc_per_node",
- type=int,
- default=1,
- help="The number of processes to launch on each node, "
- "for GPU training, this is recommended to be set "
- "to the number of GPUs in your system so that "
- "each process can be bound to a single GPU.",
- )
- parser.add_argument(
- "--master_addr",
- default="127.0.0.1",
- type=str,
- help="Master node (rank 0)'s address, should be either "
- "the IP address or the hostname of node 0, for "
- "single node multi-proc training, the "
- "--master_addr can simply be 127.0.0.1",
- )
- parser.add_argument(
- "--master_port",
- default=29500,
- type=int,
- help="Master node (rank 0)'s free port that needs to "
- "be used for communciation during distributed "
- "training",
- )
- parser.add_argument(
- "--use_env",
- default=False,
- action="store_true",
- help="Use environment variable to pass "
- "'local rank'. For legacy reasons, the default value is False. "
- "If set to True, the script will not pass "
- "--local_rank as argument, and will instead set LOCAL_RANK.",
- )
- parser.add_argument(
- "-m",
- "--module",
- default=False,
- action="store_true",
- help="Changes each process to interpret the launch script "
- "as a python module, executing with the same behavior as"
- "'python -m'.",
- )
- parser.add_argument(
- "-c",
- "--command",
- default=False,
- action="store_true",
- help="Changes each process to interpret the launch script " "as a command.",
- )
-
- # positional
- parser.add_argument(
- "training_script",
- type=str,
- help="The full path to the single GPU training "
- "program/script/command to be launched in parallel, "
- "followed by all the arguments for the "
- "training script",
- )
-
- # rest from the training program
- parser.add_argument("training_script_args", nargs=REMAINDER)
- return parser.parse_args()
-
-
-def main():
- """Launch distributed processes."""
- args = parse_args()
-
- # world size in terms of number of processes
- dist_world_size = args.nproc_per_node * args.nnodes
-
- # set PyTorch distributed related environmental variables
- current_env = os.environ.copy()
- current_env["MASTER_ADDR"] = args.master_addr
- current_env["MASTER_PORT"] = str(args.master_port)
- current_env["WORLD_SIZE"] = str(dist_world_size)
-
- processes = []
-
- if "OMP_NUM_THREADS" not in os.environ and args.nproc_per_node > 1:
- current_env["OMP_NUM_THREADS"] = str(1)
- print(
- "*****************************************\n"
- "Setting OMP_NUM_THREADS environment variable for each process "
-            "to be {} by default, to avoid your system being overloaded, "
- "please further tune the variable for optimal performance in "
- "your application as needed. \n"
- "*****************************************".format(
- current_env["OMP_NUM_THREADS"]
- )
- )
-
- for local_rank in range(0, args.nproc_per_node):
- # each process's rank
- dist_rank = args.nproc_per_node * args.node_rank + local_rank
- current_env["RANK"] = str(dist_rank)
- current_env["LOCAL_RANK"] = str(local_rank)
-
- # spawn the processes
- if args.command:
- cmd = [args.training_script]
- else:
- cmd = [sys.executable, "-u"]
- if args.module:
- cmd.append("-m")
- cmd.append(args.training_script)
-
- if not args.use_env:
- cmd.append("--local_rank={}".format(local_rank))
-
- cmd.extend(args.training_script_args)
-
- process = subprocess.Popen(cmd, env=current_env)
- processes.append(process)
-
- for process in processes:
- process.wait()
- if process.returncode != 0:
- raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
-
-
-if __name__ == "__main__":
- main()
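
The launcher above exports MASTER_ADDR, MASTER_PORT, WORLD_SIZE, RANK and LOCAL_RANK before spawning one process per GPU. Below is a minimal sketch (not part of the deleted file) of a training entry point that consumes those variables; the script name and backend choice are assumptions for illustration.

```python
# Minimal sketch of a training entry point for the launcher above. It reads
# the environment the launcher prepares (MASTER_ADDR, MASTER_PORT, WORLD_SIZE,
# RANK, LOCAL_RANK); the "gloo" backend is an assumption.
import argparse
import os

import torch.distributed as dist


def main():
    parser = argparse.ArgumentParser()
    # The launcher appends --local_rank unless --use_env is given, in which
    # case LOCAL_RANK is read from the environment instead.
    parser.add_argument("--local_rank", type=int,
                        default=int(os.environ.get("LOCAL_RANK", 0)))
    args = parser.parse_args()

    # RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT come from the launcher.
    dist.init_process_group(backend="gloo", init_method="env://")
    print(f"rank {dist.get_rank()}/{dist.get_world_size()}, "
          f"local rank {args.local_rank}")

    # ... build the model and wrap it in DistributedDataParallel here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Invoked roughly as `python launch.py --nproc_per_node=2 my_train.py`, each spawned copy of the (hypothetical) `my_train.py` then sees its own RANK and LOCAL_RANK.
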
diff --git a/spaces/akhaliq/lama/bin/make_checkpoint.py b/spaces/akhaliq/lama/bin/make_checkpoint.py
deleted file mode 100644
index 322147483915bef758770ae931e705e56083fa8d..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/bin/make_checkpoint.py
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import shutil
-
-import torch
-
-
-def get_checkpoint_files(s):
- s = s.strip()
- if ',' in s:
- return [get_checkpoint_files(chunk) for chunk in s.split(',')]
- return 'last.ckpt' if s == 'last' else f'{s}.ckpt'
-
-
-def main(args):
- checkpoint_fnames = get_checkpoint_files(args.epochs)
- if isinstance(checkpoint_fnames, str):
- checkpoint_fnames = [checkpoint_fnames]
- assert len(checkpoint_fnames) >= 1
-
- checkpoint_path = os.path.join(args.indir, 'models', checkpoint_fnames[0])
- checkpoint = torch.load(checkpoint_path, map_location='cpu')
- del checkpoint['optimizer_states']
-
- if len(checkpoint_fnames) > 1:
- for fname in checkpoint_fnames[1:]:
- print('sum', fname)
- sum_tensors_cnt = 0
- other_cp = torch.load(os.path.join(args.indir, 'models', fname), map_location='cpu')
- for k in checkpoint['state_dict'].keys():
- if checkpoint['state_dict'][k].dtype is torch.float:
- checkpoint['state_dict'][k].data.add_(other_cp['state_dict'][k].data)
- sum_tensors_cnt += 1
- print('summed', sum_tensors_cnt, 'tensors')
-
- for k in checkpoint['state_dict'].keys():
- if checkpoint['state_dict'][k].dtype is torch.float:
- checkpoint['state_dict'][k].data.mul_(1 / float(len(checkpoint_fnames)))
-
- state_dict = checkpoint['state_dict']
-
- if not args.leave_discriminators:
- for k in list(state_dict.keys()):
- if k.startswith('discriminator.'):
- del state_dict[k]
-
- if not args.leave_losses:
- for k in list(state_dict.keys()):
- if k.startswith('loss_'):
- del state_dict[k]
-
- out_checkpoint_path = os.path.join(args.outdir, 'models', 'best.ckpt')
- os.makedirs(os.path.dirname(out_checkpoint_path), exist_ok=True)
-
- torch.save(checkpoint, out_checkpoint_path)
-
- shutil.copy2(os.path.join(args.indir, 'config.yaml'),
- os.path.join(args.outdir, 'config.yaml'))
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('indir',
- help='Path to directory with output of training '
-                             '(i.e. the directory that has samples, models, config.yaml and train.log)')
- aparser.add_argument('outdir',
- help='Where to put minimal checkpoint, which can be consumed by "bin/predict.py"')
- aparser.add_argument('--epochs', type=str, default='last',
- help='Which checkpoint to take. '
- 'Can be "last" or integer - number of epoch')
- aparser.add_argument('--leave-discriminators', action='store_true',
- help='If enabled, the state of discriminators will not be removed from the checkpoint')
- aparser.add_argument('--leave-losses', action='store_true',
- help='If enabled, weights of nn-based losses (e.g. perceptual) will not be removed')
-
- main(aparser.parse_args())
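
The core trick in the deleted script is checkpoint averaging: the float tensors of several checkpoints are summed and scaled by 1/N before discriminator and loss weights are stripped. A standalone sketch of just that averaging step follows; the file names are illustrative, and the layout (a top-level "state_dict" key) is assumed to match the checkpoints handled above.

```python
# Standalone sketch of checkpoint averaging, under the assumption that each
# checkpoint stores its weights under a "state_dict" key.
import torch


def average_checkpoints(paths):
    """Average the float tensors of several PyTorch Lightning-style checkpoints."""
    base = torch.load(paths[0], map_location="cpu")
    state = base["state_dict"]
    for path in paths[1:]:
        other = torch.load(path, map_location="cpu")["state_dict"]
        for key, tensor in state.items():
            if tensor.dtype is torch.float:
                tensor.data.add_(other[key].data)
    for tensor in state.values():
        if tensor.dtype is torch.float:
            tensor.data.mul_(1.0 / len(paths))
    return base


if __name__ == "__main__":
    # File names are placeholders for this example.
    averaged = average_checkpoints(["epoch10.ckpt", "epoch11.ckpt", "last.ckpt"])
    torch.save(averaged, "averaged.ckpt")
```
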
diff --git a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/__init__.py b/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/akhaliq/supermarionation/README.md b/spaces/akhaliq/supermarionation/README.md
deleted file mode 100644
index d29e178c9e1d52ff9f6314426a783e72253c1668..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/supermarionation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Supermarionation
-emoji: 🐨
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py
deleted file mode 100644
index 5812cef0b5924db9af2da77f0abe4e63decee4cf..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py
+++ /dev/null
@@ -1,107 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Communicator client code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .enums import ProbingState
-from .charsetprober import CharSetProber
-
-
-class CharSetGroupProber(CharSetProber):
- def __init__(self, lang_filter=None):
- super(CharSetGroupProber, self).__init__(lang_filter=lang_filter)
- self._active_num = 0
- self.probers = []
- self._best_guess_prober = None
-
- def reset(self):
- super(CharSetGroupProber, self).reset()
- self._active_num = 0
- for prober in self.probers:
- if prober:
- prober.reset()
- prober.active = True
- self._active_num += 1
- self._best_guess_prober = None
-
- @property
- def charset_name(self):
- if not self._best_guess_prober:
- self.get_confidence()
- if not self._best_guess_prober:
- return None
- return self._best_guess_prober.charset_name
-
- @property
- def language(self):
- if not self._best_guess_prober:
- self.get_confidence()
- if not self._best_guess_prober:
- return None
- return self._best_guess_prober.language
-
- def feed(self, byte_str):
- for prober in self.probers:
- if not prober:
- continue
- if not prober.active:
- continue
- state = prober.feed(byte_str)
- if not state:
- continue
- if state == ProbingState.FOUND_IT:
- self._best_guess_prober = prober
- self._state = ProbingState.FOUND_IT
- return self.state
- elif state == ProbingState.NOT_ME:
- prober.active = False
- self._active_num -= 1
- if self._active_num <= 0:
- self._state = ProbingState.NOT_ME
- return self.state
- return self.state
-
- def get_confidence(self):
- state = self.state
- if state == ProbingState.FOUND_IT:
- return 0.99
- elif state == ProbingState.NOT_ME:
- return 0.01
- best_conf = 0.0
- self._best_guess_prober = None
- for prober in self.probers:
- if not prober:
- continue
- if not prober.active:
- self.logger.debug('%s not active', prober.charset_name)
- continue
- conf = prober.get_confidence()
- self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, conf)
- if best_conf < conf:
- best_conf = conf
- self._best_guess_prober = prober
- if not self._best_guess_prober:
- return 0.0
- return best_conf
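
CharSetGroupProber fans each chunk of bytes out to its child probers, deactivates those that answer NOT_ME, and reports the charset of whichever active prober has the highest confidence. In practice this machinery is reached through chardet's public API; a hedged usage sketch:

```python
# Usage sketch for the group prober above via chardet's public entry points.
import chardet
from chardet.universaldetector import UniversalDetector

sample = ("Привет, мир! Это проверка определения кодировки. " * 3).encode("windows-1251")

# One-shot detection; the result carries encoding, confidence and language.
print(chardet.detect(sample))

# Incremental feeding, closer to the feed()/get_confidence() loop above.
detector = UniversalDetector()
for chunk in (sample[:40], sample[40:]):
    detector.feed(chunk)
detector.close()
print(detector.result)
```
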
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py
deleted file mode 100644
index 3237d5abf60122e0cea5463ff34f2256b11b5a81..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py
+++ /dev/null
@@ -1,310 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-Metadata about languages used by our model training code for our
-SingleByteCharSetProbers. Could be used for other things in the future.
-
-This code is based on the language metadata from the uchardet project.
-"""
-from __future__ import absolute_import, print_function
-
-from string import ascii_letters
-
-
-# TODO: Add Ukrainian (KOI8-U)
-
-class Language(object):
- """Metadata about a language useful for training models
-
- :ivar name: The human name for the language, in English.
- :type name: str
- :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise,
- or use another catalog as a last resort.
- :type iso_code: str
- :ivar use_ascii: Whether or not ASCII letters should be included in trained
- models.
- :type use_ascii: bool
- :ivar charsets: The charsets we want to support and create data for.
- :type charsets: list of str
- :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is
- `True`, you only need to add those not in the ASCII set.
- :type alphabet: str
- :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling
- Wikipedia for training data.
- :type wiki_start_pages: list of str
- """
- def __init__(self, name=None, iso_code=None, use_ascii=True, charsets=None,
- alphabet=None, wiki_start_pages=None):
- super(Language, self).__init__()
- self.name = name
- self.iso_code = iso_code
- self.use_ascii = use_ascii
- self.charsets = charsets
- if self.use_ascii:
- if alphabet:
- alphabet += ascii_letters
- else:
- alphabet = ascii_letters
- elif not alphabet:
- raise ValueError('Must supply alphabet if use_ascii is False')
- self.alphabet = ''.join(sorted(set(alphabet))) if alphabet else None
- self.wiki_start_pages = wiki_start_pages
-
- def __repr__(self):
- return '{}({})'.format(self.__class__.__name__,
- ', '.join('{}={!r}'.format(k, v)
- for k, v in self.__dict__.items()
- if not k.startswith('_')))
-
-
-LANGUAGES = {'Arabic': Language(name='Arabic',
- iso_code='ar',
- use_ascii=False,
- # We only support encodings that use isolated
- # forms, because the current recommendation is
- # that the rendering system handles presentation
- # forms. This means we purposefully skip IBM864.
- charsets=['ISO-8859-6', 'WINDOWS-1256',
- 'CP720', 'CP864'],
- alphabet=u'ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ',
- wiki_start_pages=[u'الصفحة_الرئيسية']),
- 'Belarusian': Language(name='Belarusian',
- iso_code='be',
- use_ascii=False,
- charsets=['ISO-8859-5', 'WINDOWS-1251',
- 'IBM866', 'MacCyrillic'],
- alphabet=(u'АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯ'
- u'абвгдеёжзійклмнопрстуўфхцчшыьэюяʼ'),
- wiki_start_pages=[u'Галоўная_старонка']),
- 'Bulgarian': Language(name='Bulgarian',
- iso_code='bg',
- use_ascii=False,
- charsets=['ISO-8859-5', 'WINDOWS-1251',
- 'IBM855'],
- alphabet=(u'АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯ'
- u'абвгдежзийклмнопрстуфхцчшщъьюя'),
- wiki_start_pages=[u'Начална_страница']),
- 'Czech': Language(name='Czech',
- iso_code='cz',
- use_ascii=True,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=u'áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ',
- wiki_start_pages=[u'Hlavní_strana']),
- 'Danish': Language(name='Danish',
- iso_code='da',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'æøåÆØÅ',
- wiki_start_pages=[u'Forside']),
- 'German': Language(name='German',
- iso_code='de',
- use_ascii=True,
- charsets=['ISO-8859-1', 'WINDOWS-1252'],
- alphabet=u'äöüßÄÖÜ',
- wiki_start_pages=[u'Wikipedia:Hauptseite']),
- 'Greek': Language(name='Greek',
- iso_code='el',
- use_ascii=False,
- charsets=['ISO-8859-7', 'WINDOWS-1253'],
- alphabet=(u'αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώ'
- u'ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ'),
- wiki_start_pages=[u'Πύλη:Κύρια']),
- 'English': Language(name='English',
- iso_code='en',
- use_ascii=True,
- charsets=['ISO-8859-1', 'WINDOWS-1252'],
- wiki_start_pages=[u'Main_Page']),
- 'Esperanto': Language(name='Esperanto',
- iso_code='eo',
- # Q, W, X, and Y not used at all
- use_ascii=False,
- charsets=['ISO-8859-3'],
- alphabet=(u'abcĉdefgĝhĥijĵklmnoprsŝtuŭvz'
- u'ABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ'),
- wiki_start_pages=[u'Vikipedio:Ĉefpaĝo']),
- 'Spanish': Language(name='Spanish',
- iso_code='es',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'ñáéíóúüÑÁÉÍÓÚÜ',
- wiki_start_pages=[u'Wikipedia:Portada']),
- 'Estonian': Language(name='Estonian',
- iso_code='et',
- use_ascii=False,
- charsets=['ISO-8859-4', 'ISO-8859-13',
- 'WINDOWS-1257'],
- # C, F, Š, Q, W, X, Y, Z, Ž are only for
- # loanwords
- alphabet=(u'ABDEGHIJKLMNOPRSTUVÕÄÖÜ'
- u'abdeghijklmnoprstuvõäöü'),
- wiki_start_pages=[u'Esileht']),
- 'Finnish': Language(name='Finnish',
- iso_code='fi',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'ÅÄÖŠŽåäöšž',
- wiki_start_pages=[u'Wikipedia:Etusivu']),
- 'French': Language(name='French',
- iso_code='fr',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ',
- wiki_start_pages=[u'Wikipédia:Accueil_principal',
- u'Bœuf (animal)']),
- 'Hebrew': Language(name='Hebrew',
- iso_code='he',
- use_ascii=False,
- charsets=['ISO-8859-8', 'WINDOWS-1255'],
- alphabet=u'אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ',
- wiki_start_pages=[u'עמוד_ראשי']),
- 'Croatian': Language(name='Croatian',
- iso_code='hr',
- # Q, W, X, Y are only used for foreign words.
- use_ascii=False,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=(u'abcčćdđefghijklmnoprsštuvzž'
- u'ABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ'),
- wiki_start_pages=[u'Glavna_stranica']),
- 'Hungarian': Language(name='Hungarian',
- iso_code='hu',
- # Q, W, X, Y are only used for foreign words.
- use_ascii=False,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=(u'abcdefghijklmnoprstuvzáéíóöőúüű'
- u'ABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ'),
- wiki_start_pages=[u'Kezdőlap']),
- 'Italian': Language(name='Italian',
- iso_code='it',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'ÀÈÉÌÒÓÙàèéìòóù',
- wiki_start_pages=[u'Pagina_principale']),
- 'Lithuanian': Language(name='Lithuanian',
- iso_code='lt',
- use_ascii=False,
- charsets=['ISO-8859-13', 'WINDOWS-1257',
- 'ISO-8859-4'],
- # Q, W, and X not used at all
- alphabet=(u'AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽ'
- u'aąbcčdeęėfghiįyjklmnoprsštuųūvzž'),
- wiki_start_pages=[u'Pagrindinis_puslapis']),
- 'Latvian': Language(name='Latvian',
- iso_code='lv',
- use_ascii=False,
- charsets=['ISO-8859-13', 'WINDOWS-1257',
- 'ISO-8859-4'],
- # Q, W, X, Y are only for loanwords
- alphabet=(u'AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽ'
- u'aābcčdeēfgģhiījkķlļmnņoprsštuūvzž'),
- wiki_start_pages=[u'Sākumlapa']),
- 'Macedonian': Language(name='Macedonian',
- iso_code='mk',
- use_ascii=False,
- charsets=['ISO-8859-5', 'WINDOWS-1251',
- 'MacCyrillic', 'IBM855'],
- alphabet=(u'АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШ'
- u'абвгдѓежзѕијклљмнњопрстќуфхцчџш'),
- wiki_start_pages=[u'Главна_страница']),
- 'Dutch': Language(name='Dutch',
- iso_code='nl',
- use_ascii=True,
- charsets=['ISO-8859-1', 'WINDOWS-1252'],
- wiki_start_pages=[u'Hoofdpagina']),
- 'Polish': Language(name='Polish',
- iso_code='pl',
- # Q and X are only used for foreign words.
- use_ascii=False,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=(u'AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻ'
- u'aąbcćdeęfghijklłmnńoóprsśtuwyzźż'),
- wiki_start_pages=[u'Wikipedia:Strona_główna']),
- 'Portuguese': Language(name='Portuguese',
- iso_code='pt',
- use_ascii=True,
- charsets=['ISO-8859-1', 'ISO-8859-15',
- 'WINDOWS-1252'],
- alphabet=u'ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú',
- wiki_start_pages=[u'Wikipédia:Página_principal']),
- 'Romanian': Language(name='Romanian',
- iso_code='ro',
- use_ascii=True,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=u'ăâîșțĂÂÎȘȚ',
- wiki_start_pages=[u'Pagina_principală']),
- 'Russian': Language(name='Russian',
- iso_code='ru',
- use_ascii=False,
- charsets=['ISO-8859-5', 'WINDOWS-1251',
- 'KOI8-R', 'MacCyrillic', 'IBM866',
- 'IBM855'],
- alphabet=(u'абвгдеёжзийклмнопрстуфхцчшщъыьэюя'
- u'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ'),
- wiki_start_pages=[u'Заглавная_страница']),
- 'Slovak': Language(name='Slovak',
- iso_code='sk',
- use_ascii=True,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=u'áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ',
- wiki_start_pages=[u'Hlavná_stránka']),
- 'Slovene': Language(name='Slovene',
- iso_code='sl',
- # Q, W, X, Y are only used for foreign words.
- use_ascii=False,
- charsets=['ISO-8859-2', 'WINDOWS-1250'],
- alphabet=(u'abcčdefghijklmnoprsštuvzž'
- u'ABCČDEFGHIJKLMNOPRSŠTUVZŽ'),
- wiki_start_pages=[u'Glavna_stran']),
- # Serbian can be written in both Latin and Cyrillic, but there's no
- # simple way to get the Latin alphabet pages from Wikipedia through
- # the API, so for now we just support Cyrillic.
- 'Serbian': Language(name='Serbian',
- iso_code='sr',
- alphabet=(u'АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШ'
- u'абвгдђежзијклљмнњопрстћуфхцчџш'),
- charsets=['ISO-8859-5', 'WINDOWS-1251',
- 'MacCyrillic', 'IBM855'],
- wiki_start_pages=[u'Главна_страна']),
- 'Thai': Language(name='Thai',
- iso_code='th',
- use_ascii=False,
- charsets=['ISO-8859-11', 'TIS-620', 'CP874'],
- alphabet=u'กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛',
- wiki_start_pages=[u'หน้าหลัก']),
- 'Turkish': Language(name='Turkish',
- iso_code='tr',
- # Q, W, and X are not used by Turkish
- use_ascii=False,
- charsets=['ISO-8859-3', 'ISO-8859-9',
- 'WINDOWS-1254'],
- alphabet=(u'abcçdefgğhıijklmnoöprsştuüvyzâîû'
- u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ'),
- wiki_start_pages=[u'Ana_Sayfa']),
- 'Vietnamese': Language(name='Vietnamese',
- iso_code='vi',
- use_ascii=False,
- # Windows-1258 is the only common 8-bit
- # Vietnamese encoding supported by Python.
- # From Wikipedia:
- # For systems that lack support for Unicode,
- # dozens of 8-bit Vietnamese code pages are
- # available.[1] The most common are VISCII
- # (TCVN 5712:1993), VPS, and Windows-1258.[3]
- # Where ASCII is required, such as when
- # ensuring readability in plain text e-mail,
- # Vietnamese letters are often encoded
- # according to Vietnamese Quoted-Readable
- # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4]
- # though usage of either variable-width
- # scheme has declined dramatically following
- # the adoption of Unicode on the World Wide
- # Web.
- charsets=['WINDOWS-1258'],
- alphabet=(u'aăâbcdđeêghiklmnoôơpqrstuưvxy'
- u'AĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY'),
- wiki_start_pages=[u'Chữ_Quốc_ngữ']),
- }
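
Each Language record bundles the alphabet, candidate charsets and Wikipedia seed pages used to train the single-byte probers. As a hedged illustration of how such metadata can be consumed, the sketch below scores how well a text matches a language's letters; the Icelandic alphabet is invented for the example and is not an entry in the real table.

```python
# Illustrative use of alphabet metadata: fraction of a text's letters covered
# by a language's alphabet. The Icelandic entry is a made-up example.
from string import ascii_letters

ICELANDIC_LETTERS = set("áðéíóúýþæöÁÐÉÍÓÚÝÞÆÖ" + ascii_letters)


def alphabet_coverage(text, alphabet):
    """Fraction of alphabetic characters in `text` that belong to `alphabet`."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    return sum(ch in alphabet for ch in letters) / len(letters)


print(alphabet_coverage("Halló heimur", ICELANDIC_LETTERS))  # 1.0
print(alphabet_coverage("Привет мир", ICELANDIC_LETTERS))    # 0.0
```
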
diff --git a/spaces/allandclive/Uganda_MMS/vits/transforms.py b/spaces/allandclive/Uganda_MMS/vits/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/allandclive/Uganda_MMS/vits/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
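
The module implements the monotonic rational-quadratic spline used as an invertible coupling transform in VITS-style flows: the forward pass returns the warped values plus log|det J|, and the inverse pass solves the per-bin quadratic to undo it. A quick numerical check, assuming the file above is importable as `vits.transforms`:

```python
# Numerical sanity check: forward then inverse recovers the input, and the
# two log|det| terms cancel. Assumes the deleted module is on the path.
import torch

from vits.transforms import piecewise_rational_quadratic_transform

torch.manual_seed(0)
batch, num_bins = 4, 10
x = torch.rand(batch) * 2 - 1                       # inputs inside (-1, 1)
widths = torch.randn(batch, num_bins)
heights = torch.randn(batch, num_bins)
derivatives = torch.randn(batch, num_bins - 1)      # interior knots only ('linear' tails)

y, logdet = piecewise_rational_quadratic_transform(
    x, widths, heights, derivatives, inverse=False, tails="linear", tail_bound=1.0)
x_rec, logdet_inv = piecewise_rational_quadratic_transform(
    y, widths, heights, derivatives, inverse=True, tails="linear", tail_bound=1.0)

print(torch.allclose(x, x_rec, atol=1e-4))             # True: the map is invertible
print(torch.allclose(logdet, -logdet_inv, atol=1e-4))  # log-determinants cancel
```
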
diff --git a/spaces/allknowingroger/Image-Models-Test96/app.py b/spaces/allknowingroger/Image-Models-Test96/app.py
deleted file mode 100644
index 2d1754152087dab970148115f78f9ef9256bb20e..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test96/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "Jbddai/lora-trained-xl-colab_potatohead",
- "GodSpeed15/my-pet-dog",
- "MakAttack/653bbca65b1b03cb7810faff",
- "LinoyTsaban/lora-trained-xl-colab-cam-0.0001-1000-4-text-encoder",
- "Jbddai/lora-trained-xl-colab_gieskanne",
- "craigdsouza/my-uig-racecar",
- "MakAttack/653cc69ec6b4bef9fcd3f9c9",
- "kycocotree/lora-trained-xl",
- "ThanhMai/lora-trained-xl-colab",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)  # model_functions is keyed by int indices
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-    #         gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; comma-separated words work better; click the Improve button to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
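
The Space builds one gr.Interface.load wrapper per hosted diffusion model and wires a single prompt box to all of them, plus some timer plumbing for cancellation. A minimal sketch of the same fan-out pattern, stripped of the timers and assuming a Gradio 3.x environment where a loaded Interface can be passed to click() as a callable (as the code above already does):

```python
# Minimal fan-out sketch: one prompt, several hosted text-to-image models.
# Model names are copied from the list above; everything else is illustrative.
import gradio as gr

MODELS = [
    "Jbddai/lora-trained-xl-colab_potatohead",
    "GodSpeed15/my-pet-dog",
]

# One hosted-model wrapper per entry; each maps a text prompt to an image.
model_fns = [gr.Interface.load(f"models/{name}") for name in MODELS]

with gr.Blocks(title="SD Models (sketch)") as demo:
    prompt = gr.Textbox(label="Prompt")
    run = gr.Button("Run", variant="primary")
    outputs = [gr.Image(label=name) for name in MODELS]
    for fn, image in zip(model_fns, outputs):
        run.click(fn, inputs=[prompt], outputs=[image])

demo.launch()
```
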
diff --git a/spaces/andresgtn/sidewalk-semantic-segmentation/README.md b/spaces/andresgtn/sidewalk-semantic-segmentation/README.md
deleted file mode 100644
index 2700d6e4f163cab1543e6ffb799bca9f99e3d046..0000000000000000000000000000000000000000
--- a/spaces/andresgtn/sidewalk-semantic-segmentation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sidewalk Semantic Segmentation
-emoji: 🌍
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py
deleted file mode 100644
index 2c0986317eb576a14ec774205c88fdee3cc6c0b3..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from setuptools import find_packages, setup
-
-setup(
- name="segment_anything",
- version="1.0",
- install_requires=[],
- packages=find_packages(exclude="notebooks"),
- extras_require={
- "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"],
- "dev": ["flake8", "isort", "black", "mypy"],
- },
-)
diff --git a/spaces/arsalagrey/audio-classification-vue/index.js b/spaces/arsalagrey/audio-classification-vue/index.js
deleted file mode 100644
index 6f8537e805a0397fd139cd3926fd42d19898eb77..0000000000000000000000000000000000000000
--- a/spaces/arsalagrey/audio-classification-vue/index.js
+++ /dev/null
@@ -1,81 +0,0 @@
-const { createApp, ref, onMounted, computed, watch } = Vue;
-import { HfInference } from "https://cdn.skypack.dev/@huggingface/inference@latest";
-
-const app = createApp({
- setup() {
- const token = ref(localStorage.getItem("token") || "");
- const models = ref(["MIT/ast-finetuned-audioset-10-10-0.4593"]);
- const selectedAudio = ref("airplane-landing.mp3");
- const selectedModel = ref("");
- const loading = ref(false);
- const didErrorOccur = ref(false)
- const audioFiles = ref(['airplane-landing.mp3',
- 'alien-spaceship.mp3',
- 'hard_shoes.mp3',
- 'labrador-barking.mp3',
- 'old-car-engine.mp3',
- 'tolling-bell.mp3']);
- const classificationLabels = ref([])
-
-
- const statusMessage = computed(() => {
- if (loading.value) return "Loading..."
- return "Ready"
- })
-
- const run = async () => {
- reset()
- loading.value = true;
- try {
- const hf = new HfInference(token.value);
- const audioData = await (await fetch(`sounds/${selectedAudio.value}`)).arrayBuffer()
- const result = await hf.audioClassification({
- data: audioData,
- model: selectedModel.value
- });
- console.log(result)
- classificationLabels.value = result
- loading.value = false;
- } catch (e) {
- console.error(e);
- loading.value = false;
- didErrorOccur.value = true
- }
- };
- const reset = () => {
- didErrorOccur.value = false
- loading.value = false
- classificationLabels.value = []
- }
-
- watch(selectedAudio, () => {
- reset()
- })
-
- watch(selectedModel, () => {
- reset()
- })
-
- onMounted(async () => {
- const localStorageToken = localStorage.getItem("token")
- if (localStorageToken) {
- token.value = localStorageToken;
- }
- selectedModel.value = models.value[0]
- });
-
- return {
- token,
- run,
- audioFiles,
- selectedAudio,
- models,
- selectedModel,
- loading,
- statusMessage,
- classificationLabels
- };
- },
-});
-
-app.mount("#app");
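
The Vue app streams a bundled audio file to the Hugging Face Inference API through HfInference.audioClassification and renders the returned labels. A hedged Python equivalent using huggingface_hub's InferenceClient (the token and audio path are placeholders):

```python
# Hedged Python counterpart of the browser-side call above.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxx")  # personal access token (placeholder)

labels = client.audio_classification(
    "sounds/labrador-barking.mp3",
    model="MIT/ast-finetuned-audioset-10-10-0.4593",
)
for item in labels:
    print(item)  # each entry carries a predicted label and a score
```
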
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py
deleted file mode 100644
index 4d1cd1374afe8d5f0b9e87ed81db25d7e4032af9..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os
-from dataclasses import dataclass, field
-from typing import Dict
-
-from TTS.tts.configs.shared_configs import BaseTTSConfig
-from TTS.tts.layers.bark.model import GPTConfig
-from TTS.tts.layers.bark.model_fine import FineGPTConfig
-from TTS.tts.models.bark import BarkAudioConfig
-from TTS.utils.generic_utils import get_user_data_dir
-
-
-@dataclass
-class BarkConfig(BaseTTSConfig):
- """Bark TTS configuration
-
- Args:
- model (str): model name that registers the model.
- audio (BarkAudioConfig): audio configuration. Defaults to BarkAudioConfig().
- num_chars (int): number of characters in the alphabet. Defaults to 0.
- semantic_config (GPTConfig): semantic configuration. Defaults to GPTConfig().
- fine_config (FineGPTConfig): fine configuration. Defaults to FineGPTConfig().
- coarse_config (GPTConfig): coarse configuration. Defaults to GPTConfig().
- CONTEXT_WINDOW_SIZE (int): GPT context window size. Defaults to 1024.
- SEMANTIC_RATE_HZ (float): semantic tokens rate in Hz. Defaults to 49.9.
- SEMANTIC_VOCAB_SIZE (int): semantic vocabulary size. Defaults to 10_000.
- CODEBOOK_SIZE (int): encodec codebook size. Defaults to 1024.
- N_COARSE_CODEBOOKS (int): number of coarse codebooks. Defaults to 2.
- N_FINE_CODEBOOKS (int): number of fine codebooks. Defaults to 8.
- COARSE_RATE_HZ (int): coarse tokens rate in Hz. Defaults to 75.
- SAMPLE_RATE (int): sample rate. Defaults to 24_000.
- USE_SMALLER_MODELS (bool): use smaller models. Defaults to False.
- TEXT_ENCODING_OFFSET (int): text encoding offset. Defaults to 10_048.
- SEMANTIC_PAD_TOKEN (int): semantic pad token. Defaults to 10_000.
- TEXT_PAD_TOKEN ([type]): text pad token. Defaults to 10_048.
- TEXT_EOS_TOKEN ([type]): text end of sentence token. Defaults to 10_049.
- TEXT_SOS_TOKEN ([type]): text start of sentence token. Defaults to 10_050.
- SEMANTIC_INFER_TOKEN (int): semantic infer token. Defaults to 10_051.
- COARSE_SEMANTIC_PAD_TOKEN (int): coarse semantic pad token. Defaults to 12_048.
- COARSE_INFER_TOKEN (int): coarse infer token. Defaults to 12_050.
- REMOTE_BASE_URL ([type]): remote base url. Defaults to "https://huggingface.co/erogol/bark/tree".
- REMOTE_MODEL_PATHS (Dict): remote model paths. Defaults to None.
- LOCAL_MODEL_PATHS (Dict): local model paths. Defaults to None.
- SMALL_REMOTE_MODEL_PATHS (Dict): small remote model paths. Defaults to None.
- CACHE_DIR (str): local cache directory. Defaults to get_user_data_dir().
-        DEF_SPEAKER_DIR (str): default speaker directory to store speaker values for voice cloning. Defaults to get_user_data_dir().
- """
-
- model: str = "bark"
- audio: BarkAudioConfig = field(default_factory=BarkAudioConfig)
- num_chars: int = 0
- semantic_config: GPTConfig = field(default_factory=GPTConfig)
- fine_config: FineGPTConfig = field(default_factory=FineGPTConfig)
- coarse_config: GPTConfig = field(default_factory=GPTConfig)
- CONTEXT_WINDOW_SIZE: int = 1024
- SEMANTIC_RATE_HZ: float = 49.9
- SEMANTIC_VOCAB_SIZE: int = 10_000
- CODEBOOK_SIZE: int = 1024
- N_COARSE_CODEBOOKS: int = 2
- N_FINE_CODEBOOKS: int = 8
- COARSE_RATE_HZ: int = 75
- SAMPLE_RATE: int = 24_000
- USE_SMALLER_MODELS: bool = False
-
- TEXT_ENCODING_OFFSET: int = 10_048
- SEMANTIC_PAD_TOKEN: int = 10_000
- TEXT_PAD_TOKEN: int = 129_595
- SEMANTIC_INFER_TOKEN: int = 129_599
- COARSE_SEMANTIC_PAD_TOKEN: int = 12_048
- COARSE_INFER_TOKEN: int = 12_050
-
- REMOTE_BASE_URL = "https://huggingface.co/erogol/bark/tree/main/"
- REMOTE_MODEL_PATHS: Dict = None
- LOCAL_MODEL_PATHS: Dict = None
- SMALL_REMOTE_MODEL_PATHS: Dict = None
- CACHE_DIR: str = str(get_user_data_dir("tts/suno/bark_v0"))
- DEF_SPEAKER_DIR: str = str(get_user_data_dir("tts/bark_v0/speakers"))
-
- def __post_init__(self):
- self.REMOTE_MODEL_PATHS = {
- "text": {
- "path": os.path.join(self.REMOTE_BASE_URL, "text_2.pt"),
- "checksum": "54afa89d65e318d4f5f80e8e8799026a",
- },
- "coarse": {
- "path": os.path.join(self.REMOTE_BASE_URL, "coarse_2.pt"),
- "checksum": "8a98094e5e3a255a5c9c0ab7efe8fd28",
- },
- "fine": {
- "path": os.path.join(self.REMOTE_BASE_URL, "fine_2.pt"),
- "checksum": "59d184ed44e3650774a2f0503a48a97b",
- },
- }
- self.LOCAL_MODEL_PATHS = {
- "text": os.path.join(self.CACHE_DIR, "text_2.pt"),
- "coarse": os.path.join(self.CACHE_DIR, "coarse_2.pt"),
- "fine": os.path.join(self.CACHE_DIR, "fine_2.pt"),
- "hubert_tokenizer": os.path.join(self.CACHE_DIR, "tokenizer.pth"),
- "hubert": os.path.join(self.CACHE_DIR, "hubert.pt"),
- }
- self.SMALL_REMOTE_MODEL_PATHS = {
- "text": {"path": os.path.join(self.REMOTE_BASE_URL, "text.pt")},
- "coarse": {"path": os.path.join(self.REMOTE_BASE_URL, "coarse.pt")},
- "fine": {"path": os.path.join(self.REMOTE_BASE_URL, "fine.pt")},
- }
- self.sample_rate = self.SAMPLE_RATE # pylint: disable=attribute-defined-outside-init
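
BarkConfig mostly pins vocabulary sizes, special tokens and the remote/local checkpoint paths that __post_init__ fills in. A hedged sketch of how such a config is typically consumed in Coqui TTS follows; the checkpoint directory and text are placeholders, and the exact loading calls should be checked against the installed TTS version.

```python
# Hedged sketch, roughly following the Coqui TTS Bark example; paths and text
# are placeholders, not values from the file above.
from TTS.tts.configs.bark_config import BarkConfig
from TTS.tts.models.bark import Bark

config = BarkConfig()  # defaults as defined above; __post_init__ fills in paths
model = Bark.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="path/to/bark_checkpoints/", eval=True)

# Synthesize with a random speaker.
output = model.synthesize(
    "A short test sentence for Bark.",
    config,
    speaker_id="random",
    voice_dirs=None,
)
```
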
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py
deleted file mode 100644
index 4beda291aa15398024b5b16cd6bf12b88898a0a9..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py
+++ /dev/null
@@ -1,127 +0,0 @@
-from torch import nn
-
-
-class ZeroTemporalPad(nn.Module):
-    """Pad sequences to equal length in the temporal dimension"""
-
- def __init__(self, kernel_size, dilation):
- super().__init__()
- total_pad = dilation * (kernel_size - 1)
- begin = total_pad // 2
- end = total_pad - begin
- self.pad_layer = nn.ZeroPad2d((0, 0, begin, end))
-
- def forward(self, x):
- return self.pad_layer(x)
-
-
-class Conv1dBN(nn.Module):
- """1d convolutional with batch norm.
- conv1d -> relu -> BN blocks.
-
- Note:
-        Batch normalization is applied after ReLU, following the original implementation.
-
- Args:
- in_channels (int): number of input channels.
- out_channels (int): number of output channels.
- kernel_size (int): kernel size for convolutional filters.
- dilation (int): dilation for convolution layers.
- """
-
- def __init__(self, in_channels, out_channels, kernel_size, dilation):
- super().__init__()
- padding = dilation * (kernel_size - 1)
- pad_s = padding // 2
- pad_e = padding - pad_s
- self.conv1d = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation)
- self.pad = nn.ZeroPad2d((pad_s, pad_e, 0, 0)) # uneven left and right padding
- self.norm = nn.BatchNorm1d(out_channels)
-
- def forward(self, x):
- o = self.conv1d(x)
- o = self.pad(o)
- o = nn.functional.relu(o)
- o = self.norm(o)
- return o
-
-
-class Conv1dBNBlock(nn.Module):
- """1d convolutional block with batch norm. It is a set of conv1d -> relu -> BN blocks.
-
- Args:
- in_channels (int): number of input channels.
- out_channels (int): number of output channels.
- hidden_channels (int): number of inner convolution channels.
- kernel_size (int): kernel size for convolutional filters.
- dilation (int): dilation for convolution layers.
- num_conv_blocks (int, optional): number of convolutional blocks. Defaults to 2.
- """
-
- def __init__(self, in_channels, out_channels, hidden_channels, kernel_size, dilation, num_conv_blocks=2):
- super().__init__()
- self.conv_bn_blocks = []
- for idx in range(num_conv_blocks):
- layer = Conv1dBN(
- in_channels if idx == 0 else hidden_channels,
- out_channels if idx == (num_conv_blocks - 1) else hidden_channels,
- kernel_size,
- dilation,
- )
- self.conv_bn_blocks.append(layer)
- self.conv_bn_blocks = nn.Sequential(*self.conv_bn_blocks)
-
- def forward(self, x):
- """
- Shapes:
- x: (B, D, T)
- """
- return self.conv_bn_blocks(x)
-
-
-class ResidualConv1dBNBlock(nn.Module):
- """Residual Convolutional Blocks with BN
- Each block has 'num_conv_block' conv layers and 'num_res_blocks' such blocks are connected
- with residual connections.
-
- conv_block = (conv1d -> relu -> bn) x 'num_conv_blocks'
-    residual_conv_block = (x -> conv_block -> + ->) x 'num_res_blocks'
- ' - - - - - - - - - ^
- Args:
- in_channels (int): number of input channels.
- out_channels (int): number of output channels.
- hidden_channels (int): number of inner convolution channels.
- kernel_size (int): kernel size for convolutional filters.
- dilations (list): dilations for each convolution layer.
- num_res_blocks (int, optional): number of residual blocks. Defaults to 13.
- num_conv_blocks (int, optional): number of convolutional blocks in each residual block. Defaults to 2.
- """
-
- def __init__(
- self, in_channels, out_channels, hidden_channels, kernel_size, dilations, num_res_blocks=13, num_conv_blocks=2
- ):
- super().__init__()
- assert len(dilations) == num_res_blocks
- self.res_blocks = nn.ModuleList()
- for idx, dilation in enumerate(dilations):
- block = Conv1dBNBlock(
- in_channels if idx == 0 else hidden_channels,
- out_channels if (idx + 1) == len(dilations) else hidden_channels,
- hidden_channels,
- kernel_size,
- dilation,
- num_conv_blocks,
- )
- self.res_blocks.append(block)
-
- def forward(self, x, x_mask=None):
- if x_mask is None:
- x_mask = 1.0
- o = x * x_mask
- for block in self.res_blocks:
- res = o
- o = block(o)
- o = o + res
- if x_mask is not None:
- o = o * x_mask
- return o
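
ResidualConv1dBNBlock stacks num_res_blocks groups of (conv1d → ReLU → BN) layers with additive skip connections; note that the residual additions only line up when the input, hidden and output channel counts agree, which is how upstream callers use it. A shape-check sketch, assuming the module is importable under its upstream path:

```python
# Shape check for the residual stack above; channel counts are kept equal so
# the additive skips match.
import torch

from TTS.tts.layers.generic.res_conv_bn import ResidualConv1dBNBlock

block = ResidualConv1dBNBlock(
    in_channels=80,
    out_channels=80,
    hidden_channels=80,
    kernel_size=3,
    dilations=[1, 2, 4] * 4 + [1],   # 13 dilation values, one per residual block
    num_res_blocks=13,
    num_conv_blocks=2,
)

x = torch.randn(2, 80, 100)      # (batch, channels, time)
x_mask = torch.ones(2, 1, 100)   # all frames valid
y = block(x, x_mask)
print(y.shape)                   # torch.Size([2, 80, 100])
```
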
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py
deleted file mode 100644
index 79c871ac79f7d6f096fcd77269781e3a6a2a9fb5..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py
+++ /dev/null
@@ -1,293 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-"""
-Ciphertext Block Chaining (CBC) mode.
-"""
-
-__all__ = ['CbcMode']
-
-from Crypto.Util.py3compat import _copy_bytes
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
- create_string_buffer, get_raw_buffer,
- SmartPointer, c_size_t, c_uint8_ptr,
- is_writeable_buffer)
-
-from Crypto.Random import get_random_bytes
-
-raw_cbc_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_cbc", """
- int CBC_start_operation(void *cipher,
- const uint8_t iv[],
- size_t iv_len,
- void **pResult);
- int CBC_encrypt(void *cbcState,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int CBC_decrypt(void *cbcState,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int CBC_stop_operation(void *state);
- """
- )
-
-
-class CbcMode(object):
- """*Cipher-Block Chaining (CBC)*.
-
- Each of the ciphertext blocks depends on the current
- and all previous plaintext blocks.
-
- An Initialization Vector (*IV*) is required.
-
- See `NIST SP800-38A`_ , Section 6.2 .
-
- .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
-
- :undocumented: __init__
- """
-
- def __init__(self, block_cipher, iv):
- """Create a new block cipher, configured in CBC mode.
-
- :Parameters:
- block_cipher : C pointer
- A smart pointer to the low-level block cipher instance.
-
- iv : bytes/bytearray/memoryview
- The initialization vector to use for encryption or decryption.
- It is as long as the cipher block.
-
- **The IV must be unpredictable**. Ideally it is picked randomly.
-
- Reusing the *IV* for encryptions performed with the same key
- compromises confidentiality.
- """
-
- self._state = VoidPointer()
- result = raw_cbc_lib.CBC_start_operation(block_cipher.get(),
- c_uint8_ptr(iv),
- c_size_t(len(iv)),
- self._state.address_of())
- if result:
- raise ValueError("Error %d while instantiating the CBC mode"
- % result)
-
- # Ensure that object disposal of this Python object will (eventually)
- # free the memory allocated by the raw library for the cipher mode
- self._state = SmartPointer(self._state.get(),
- raw_cbc_lib.CBC_stop_operation)
-
-        # Memory allocated for the underlying block cipher is now owned
- # by the cipher mode
- block_cipher.release()
-
- self.block_size = len(iv)
- """The block size of the underlying cipher, in bytes."""
-
- self.iv = _copy_bytes(None, None, iv)
- """The Initialization Vector originally used to create the object.
- The value does not change."""
-
- self.IV = self.iv
- """Alias for `iv`"""
-
- self._next = [ self.encrypt, self.decrypt ]
-
- def encrypt(self, plaintext, output=None):
- """Encrypt data with the key and the parameters set at initialization.
-
- A cipher object is stateful: once you have encrypted a message
- you cannot encrypt (or decrypt) another message using the same
- object.
-
- The data to encrypt can be broken up in two or
- more pieces and `encrypt` can be called multiple times.
-
- That is, the statement:
-
- >>> c.encrypt(a) + c.encrypt(b)
-
- is equivalent to:
-
- >>> c.encrypt(a+b)
-
- That also means that you cannot reuse an object for encrypting
- or decrypting other data with the same key.
-
- This function does not add any padding to the plaintext.
-
- :Parameters:
- plaintext : bytes/bytearray/memoryview
- The piece of data to encrypt.
-            Its length must be a multiple of the cipher block size.
- :Keywords:
- output : bytearray/memoryview
- The location where the ciphertext must be written to.
- If ``None``, the ciphertext is returned.
- :Return:
- If ``output`` is ``None``, the ciphertext is returned as ``bytes``.
- Otherwise, ``None``.
- """
-
- if self.encrypt not in self._next:
- raise TypeError("encrypt() cannot be called after decrypt()")
- self._next = [ self.encrypt ]
-
- if output is None:
- ciphertext = create_string_buffer(len(plaintext))
- else:
- ciphertext = output
-
- if not is_writeable_buffer(output):
- raise TypeError("output must be a bytearray or a writeable memoryview")
-
- if len(plaintext) != len(output):
- raise ValueError("output must have the same length as the input"
- " (%d bytes)" % len(plaintext))
-
- result = raw_cbc_lib.CBC_encrypt(self._state.get(),
- c_uint8_ptr(plaintext),
- c_uint8_ptr(ciphertext),
- c_size_t(len(plaintext)))
- if result:
- if result == 3:
- raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size)
- raise ValueError("Error %d while encrypting in CBC mode" % result)
-
- if output is None:
- return get_raw_buffer(ciphertext)
- else:
- return None
-
- def decrypt(self, ciphertext, output=None):
- """Decrypt data with the key and the parameters set at initialization.
-
- A cipher object is stateful: once you have decrypted a message
- you cannot decrypt (or encrypt) another message with the same
- object.
-
- The data to decrypt can be broken up in two or
- more pieces and `decrypt` can be called multiple times.
-
- That is, the statement:
-
- >>> c.decrypt(a) + c.decrypt(b)
-
- is equivalent to:
-
- >>> c.decrypt(a+b)
-
- This function does not remove any padding from the plaintext.
-
- :Parameters:
- ciphertext : bytes/bytearray/memoryview
- The piece of data to decrypt.
-            Its length must be a multiple of the cipher block size.
- :Keywords:
- output : bytearray/memoryview
- The location where the plaintext must be written to.
- If ``None``, the plaintext is returned.
- :Return:
- If ``output`` is ``None``, the plaintext is returned as ``bytes``.
- Otherwise, ``None``.
- """
-
- if self.decrypt not in self._next:
- raise TypeError("decrypt() cannot be called after encrypt()")
- self._next = [ self.decrypt ]
-
- if output is None:
- plaintext = create_string_buffer(len(ciphertext))
- else:
- plaintext = output
-
- if not is_writeable_buffer(output):
- raise TypeError("output must be a bytearray or a writeable memoryview")
-
- if len(ciphertext) != len(output):
- raise ValueError("output must have the same length as the input"
- " (%d bytes)" % len(plaintext))
-
- result = raw_cbc_lib.CBC_decrypt(self._state.get(),
- c_uint8_ptr(ciphertext),
- c_uint8_ptr(plaintext),
- c_size_t(len(ciphertext)))
- if result:
- if result == 3:
- raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size)
- raise ValueError("Error %d while decrypting in CBC mode" % result)
-
- if output is None:
- return get_raw_buffer(plaintext)
- else:
- return None
-
-
-def _create_cbc_cipher(factory, **kwargs):
- """Instantiate a cipher object that performs CBC encryption/decryption.
-
- :Parameters:
- factory : module
- The underlying block cipher, a module from ``Crypto.Cipher``.
-
- :Keywords:
- iv : bytes/bytearray/memoryview
- The IV to use for CBC.
-
- IV : bytes/bytearray/memoryview
- Alias for ``iv``.
-
- Any other keyword will be passed to the underlying block cipher.
- See the relevant documentation for details (at least ``key`` will need
- to be present).
- """
-
- cipher_state = factory._create_base_cipher(kwargs)
- iv = kwargs.pop("IV", None)
- IV = kwargs.pop("iv", None)
-
- if (None, None) == (iv, IV):
- iv = get_random_bytes(factory.block_size)
- if iv is not None:
- if IV is not None:
- raise TypeError("You must either use 'iv' or 'IV', not both")
- else:
- iv = IV
-
- if len(iv) != factory.block_size:
- raise ValueError("Incorrect IV length (it must be %d bytes long)" %
- factory.block_size)
-
- if kwargs:
- raise TypeError("Unknown parameters for CBC: %s" % str(kwargs))
-
- return CbcMode(cipher_state, iv)
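
The docstrings above spell out the CBC contract: an unpredictable IV per key, stateful encrypt/decrypt, and no padding handling. End-to-end, the public pycryptodome API looks roughly like the hedged example below (random key, PKCS#7 padding via Crypto.Util.Padding):

```python
# Hedged end-to-end example through the public pycryptodome API: AES in CBC
# mode with PKCS#7 padding. The key is random and for illustration only.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                      # AES-128 key
plaintext = b"CBC requires padding to a 16-byte boundary"

cipher = AES.new(key, AES.MODE_CBC)             # a random IV is generated
ciphertext = cipher.encrypt(pad(plaintext, AES.block_size))
iv = cipher.iv                                  # store the IV with the ciphertext

# Decryption needs a fresh cipher object with the same key and IV.
decipher = AES.new(key, AES.MODE_CBC, iv=iv)
recovered = unpad(decipher.decrypt(ciphertext), AES.block_size)
assert recovered == plaintext
```
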
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py
deleted file mode 100644
index 848792e00bf21d57e7cb680ab5199123093ca96c..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py
+++ /dev/null
@@ -1,15 +0,0 @@
-def _get_feature(name):
- import __future__
- # fall back to a unique fake object for earlier Python versions or Python 3
- return getattr(__future__, name, object())
-
-unicode_literals = _get_feature("unicode_literals")
-with_statement = _get_feature("with_statement") # dummy
-division = _get_feature("division")
-print_function = _get_feature("print_function")
-absolute_import = _get_feature("absolute_import")
-nested_scopes = _get_feature("nested_scopes") # dummy
-generators = _get_feature("generators") # dummy
-generator_stop = _get_feature("generator_stop")
-
-del _get_feature
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py
deleted file mode 100644
index 7faf0c2481ba1832303757d578d62b8594332713..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py
+++ /dev/null
@@ -1,3760 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# the Image class wrapper
-#
-# partial release history:
-# 1995-09-09 fl Created
-# 1996-03-11 fl PIL release 0.0 (proof of concept)
-# 1996-04-30 fl PIL release 0.1b1
-# 1999-07-28 fl PIL release 1.0 final
-# 2000-06-07 fl PIL release 1.1
-# 2000-10-20 fl PIL release 1.1.1
-# 2001-05-07 fl PIL release 1.1.2
-# 2002-03-15 fl PIL release 1.1.3
-# 2003-05-10 fl PIL release 1.1.4
-# 2005-03-28 fl PIL release 1.1.5
-# 2006-12-02 fl PIL release 1.1.6
-# 2009-11-15 fl PIL release 1.1.7
-#
-# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved.
-# Copyright (c) 1995-2009 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import atexit
-import builtins
-import io
-import logging
-import math
-import os
-import re
-import struct
-import sys
-import tempfile
-import warnings
-from collections.abc import Callable, MutableMapping
-from enum import IntEnum
-from pathlib import Path
-
-try:
- import defusedxml.ElementTree as ElementTree
-except ImportError:
- ElementTree = None
-
-# VERSION was removed in Pillow 6.0.0.
-# PILLOW_VERSION was removed in Pillow 9.0.0.
-# Use __version__ instead.
-from . import ImageMode, TiffTags, UnidentifiedImageError, __version__, _plugins
-from ._binary import i32le, o32be, o32le
-from ._deprecate import deprecate
-from ._util import DeferredError, is_path
-
-
-def __getattr__(name):
- categories = {"NORMAL": 0, "SEQUENCE": 1, "CONTAINER": 2}
- if name in categories:
- deprecate("Image categories", 10, "is_animated", plural=True)
- return categories[name]
- elif name in ("NEAREST", "NONE"):
- deprecate(name, 10, "Resampling.NEAREST or Dither.NONE")
- return 0
- old_resampling = {
- "LINEAR": "BILINEAR",
- "CUBIC": "BICUBIC",
- "ANTIALIAS": "LANCZOS",
- }
- if name in old_resampling:
- deprecate(name, 10, f"Resampling.{old_resampling[name]}")
- return Resampling[old_resampling[name]]
- for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize):
- if name in enum.__members__:
- deprecate(name, 10, f"{enum.__name__}.{name}")
- return enum[name]
- raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
-
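The module-level __getattr__ above keeps legacy constants such as ANTIALIAS importable while steering callers toward the new enums. A minimal sketch of the observable behaviour, assuming a Pillow 9.x install that still ships this shim:

    import warnings
    from PIL import Image

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")       # silence the deprecation notice
        value = Image.ANTIALIAS               # resolved by the module __getattr__ above
    print(value is Image.Resampling.LANCZOS)  # True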
-
-logger = logging.getLogger(__name__)
-
-
-class DecompressionBombWarning(RuntimeWarning):
- pass
-
-
-class DecompressionBombError(Exception):
- pass
-
-
-# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image
-MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3)
-
-
-try:
- # If the _imaging C module is not present, Pillow will not load.
- # Note that other modules should not refer to _imaging directly;
- # import Image and use the Image.core variable instead.
- # Also note that Image.core is not a publicly documented interface,
- # and should be considered private and subject to change.
- from . import _imaging as core
-
- if __version__ != getattr(core, "PILLOW_VERSION", None):
- raise ImportError(
- "The _imaging extension was built for another version of Pillow or PIL:\n"
- f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n"
- f"Pillow version: {__version__}"
- )
-
-except ImportError as v:
- core = DeferredError(ImportError("The _imaging C module is not installed."))
- # Explanations for ways that we know we might have an import error
- if str(v).startswith("Module use of python"):
- # The _imaging C module is present, but not compiled for
- # the right version (windows only). Print a warning, if
- # possible.
- warnings.warn(
- "The _imaging extension was built for another version of Python.",
- RuntimeWarning,
- )
- elif str(v).startswith("The _imaging extension"):
- warnings.warn(str(v), RuntimeWarning)
- # Fail here anyway. Don't let people run with a mostly broken Pillow.
- # see docs/porting.rst
- raise
-
-
-# works everywhere, win for pypy, not cpython
-USE_CFFI_ACCESS = hasattr(sys, "pypy_version_info")
-try:
- import cffi
-except ImportError:
- cffi = None
-
-
-def isImageType(t):
- """
- Checks if an object is an image object.
-
- .. warning::
-
- This function is for internal use only.
-
- :param t: object to check if it's an image
- :returns: True if the object is an image
- """
- return hasattr(t, "im")
-
-
-#
-# Constants
-
-# transpose
-class Transpose(IntEnum):
- FLIP_LEFT_RIGHT = 0
- FLIP_TOP_BOTTOM = 1
- ROTATE_90 = 2
- ROTATE_180 = 3
- ROTATE_270 = 4
- TRANSPOSE = 5
- TRANSVERSE = 6
-
-
-# transforms (also defined in Imaging.h)
-class Transform(IntEnum):
- AFFINE = 0
- EXTENT = 1
- PERSPECTIVE = 2
- QUAD = 3
- MESH = 4
-
-
-# resampling filters (also defined in Imaging.h)
-class Resampling(IntEnum):
- NEAREST = 0
- BOX = 4
- BILINEAR = 2
- HAMMING = 5
- BICUBIC = 3
- LANCZOS = 1
-
-
-_filters_support = {
- Resampling.BOX: 0.5,
- Resampling.BILINEAR: 1.0,
- Resampling.HAMMING: 1.0,
- Resampling.BICUBIC: 2.0,
- Resampling.LANCZOS: 3.0,
-}
-
-
-# dithers
-class Dither(IntEnum):
- NONE = 0
- ORDERED = 1 # Not yet implemented
- RASTERIZE = 2 # Not yet implemented
- FLOYDSTEINBERG = 3 # default
-
-
-# palettes/quantizers
-class Palette(IntEnum):
- WEB = 0
- ADAPTIVE = 1
-
-
-class Quantize(IntEnum):
- MEDIANCUT = 0
- MAXCOVERAGE = 1
- FASTOCTREE = 2
- LIBIMAGEQUANT = 3
-
-
-if hasattr(core, "DEFAULT_STRATEGY"):
- DEFAULT_STRATEGY = core.DEFAULT_STRATEGY
- FILTERED = core.FILTERED
- HUFFMAN_ONLY = core.HUFFMAN_ONLY
- RLE = core.RLE
- FIXED = core.FIXED
-
-
-# --------------------------------------------------------------------
-# Registries
-
-ID = []
-OPEN = {}
-MIME = {}
-SAVE = {}
-SAVE_ALL = {}
-EXTENSION = {}
-DECODERS = {}
-ENCODERS = {}
-
-# --------------------------------------------------------------------
-# Modes
-
-_ENDIAN = "<" if sys.byteorder == "little" else ">"
-
-
-def _conv_type_shape(im):
- m = ImageMode.getmode(im.mode)
- shape = (im.height, im.width)
- extra = len(m.bands)
- if extra != 1:
- shape += (extra,)
- return shape, m.typestr
-
-
-MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"]
-
-# raw modes that may be memory mapped. NOTE: if you change this, you
-# may have to modify the stride calculation in map.c too!
-_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B")
-
-
-def getmodebase(mode):
- """
- Gets the "base" mode for given mode. This function returns "L" for
- images that contain grayscale data, and "RGB" for images that
- contain color data.
-
- :param mode: Input mode.
- :returns: "L" or "RGB".
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).basemode
-
-
-def getmodetype(mode):
- """
- Gets the storage type mode. Given a mode, this function returns a
- single-layer mode suitable for storing individual bands.
-
- :param mode: Input mode.
- :returns: "L", "I", or "F".
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).basetype
-
-
-def getmodebandnames(mode):
- """
- Gets a list of individual band names. Given a mode, this function returns
-    a tuple containing the names of individual bands (use
-    :py:meth:`~PIL.Image.getmodetype` to get the mode used to store each
-    individual band).
-
- :param mode: Input mode.
- :returns: A tuple containing band names. The length of the tuple
- gives the number of bands in an image of the given mode.
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).bands
-
-
-def getmodebands(mode):
- """
- Gets the number of individual bands for this mode.
-
- :param mode: Input mode.
- :returns: The number of bands in this mode.
- :exception KeyError: If the input mode was not a standard mode.
- """
- return len(ImageMode.getmode(mode).bands)
-
-
-# --------------------------------------------------------------------
-# Helpers
-
-_initialized = 0
-
-
-def preinit():
- """Explicitly load standard file format drivers."""
-
- global _initialized
- if _initialized >= 1:
- return
-
- try:
- from . import BmpImagePlugin
-
- assert BmpImagePlugin
- except ImportError:
- pass
- try:
- from . import GifImagePlugin
-
- assert GifImagePlugin
- except ImportError:
- pass
- try:
- from . import JpegImagePlugin
-
- assert JpegImagePlugin
- except ImportError:
- pass
- try:
- from . import PpmImagePlugin
-
- assert PpmImagePlugin
- except ImportError:
- pass
- try:
- from . import PngImagePlugin
-
- assert PngImagePlugin
- except ImportError:
- pass
- # try:
- # import TiffImagePlugin
- # assert TiffImagePlugin
- # except ImportError:
- # pass
-
- _initialized = 1
-
-
-def init():
- """
- Explicitly initializes the Python Imaging Library. This function
- loads all available file format drivers.
- """
-
- global _initialized
- if _initialized >= 2:
- return 0
-
- for plugin in _plugins:
- try:
- logger.debug("Importing %s", plugin)
- __import__(f"PIL.{plugin}", globals(), locals(), [])
- except ImportError as e:
- logger.debug("Image: failed to import %s: %s", plugin, e)
-
- if OPEN or SAVE:
- _initialized = 2
- return 1
-
-
-# --------------------------------------------------------------------
-# Codec factories (used by tobytes/frombytes and ImageFile.load)
-
-
-def _getdecoder(mode, decoder_name, args, extra=()):
-
- # tweak arguments
- if args is None:
- args = ()
- elif not isinstance(args, tuple):
- args = (args,)
-
- try:
- decoder = DECODERS[decoder_name]
- except KeyError:
- pass
- else:
- return decoder(mode, *args + extra)
-
- try:
- # get decoder
- decoder = getattr(core, decoder_name + "_decoder")
- except AttributeError as e:
- raise OSError(f"decoder {decoder_name} not available") from e
- return decoder(mode, *args + extra)
-
-
-def _getencoder(mode, encoder_name, args, extra=()):
-
- # tweak arguments
- if args is None:
- args = ()
- elif not isinstance(args, tuple):
- args = (args,)
-
- try:
- encoder = ENCODERS[encoder_name]
- except KeyError:
- pass
- else:
- return encoder(mode, *args + extra)
-
- try:
- # get encoder
- encoder = getattr(core, encoder_name + "_encoder")
- except AttributeError as e:
- raise OSError(f"encoder {encoder_name} not available") from e
- return encoder(mode, *args + extra)
-
-
-# --------------------------------------------------------------------
-# Simple expression analyzer
-
-
-def coerce_e(value):
- deprecate("coerce_e", 10)
- return value if isinstance(value, _E) else _E(1, value)
-
-
-# _E(scale, offset) represents the affine transformation scale * x + offset.
-# The "data" field is named for compatibility with the old implementation,
-# and should be renamed once coerce_e is removed.
-class _E:
- def __init__(self, scale, data):
- self.scale = scale
- self.data = data
-
- def __neg__(self):
- return _E(-self.scale, -self.data)
-
- def __add__(self, other):
- if isinstance(other, _E):
- return _E(self.scale + other.scale, self.data + other.data)
- return _E(self.scale, self.data + other)
-
- __radd__ = __add__
-
- def __sub__(self, other):
- return self + -other
-
- def __rsub__(self, other):
- return other + -self
-
- def __mul__(self, other):
- if isinstance(other, _E):
- return NotImplemented
- return _E(self.scale * other, self.data * other)
-
- __rmul__ = __mul__
-
- def __truediv__(self, other):
- if isinstance(other, _E):
- return NotImplemented
- return _E(self.scale / other, self.data / other)
-
-
-def _getscaleoffset(expr):
- a = expr(_E(1, 0))
- return (a.scale, a.data) if isinstance(a, _E) else (0, a)
-
-
-# --------------------------------------------------------------------
-# Implementation wrapper
-
-
-class Image:
- """
- This class represents an image object. To create
- :py:class:`~PIL.Image.Image` objects, use the appropriate factory
- functions. There's hardly ever any reason to call the Image constructor
- directly.
-
- * :py:func:`~PIL.Image.open`
- * :py:func:`~PIL.Image.new`
- * :py:func:`~PIL.Image.frombytes`
- """
-
- format = None
- format_description = None
- _close_exclusive_fp_after_loading = True
-
- def __init__(self):
- # FIXME: take "new" parameters / other image?
- # FIXME: turn mode and size into delegating properties?
- self.im = None
- self.mode = ""
- self._size = (0, 0)
- self.palette = None
- self.info = {}
- self._category = 0
- self.readonly = 0
- self.pyaccess = None
- self._exif = None
-
- def __getattr__(self, name):
- if name == "category":
- deprecate("Image categories", 10, "is_animated", plural=True)
- return self._category
- raise AttributeError(name)
-
- @property
- def width(self):
- return self.size[0]
-
- @property
- def height(self):
- return self.size[1]
-
- @property
- def size(self):
- return self._size
-
- def _new(self, im):
- new = Image()
- new.im = im
- new.mode = im.mode
- new._size = im.size
- if im.mode in ("P", "PA"):
- if self.palette:
- new.palette = self.palette.copy()
- else:
- from . import ImagePalette
-
- new.palette = ImagePalette.ImagePalette()
- new.info = self.info.copy()
- return new
-
- # Context manager support
- def __enter__(self):
- return self
-
- def __exit__(self, *args):
- if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False):
- if getattr(self, "_fp", False):
- if self._fp != self.fp:
- self._fp.close()
- self._fp = DeferredError(ValueError("Operation on closed image"))
- if self.fp:
- self.fp.close()
- self.fp = None
-
- def close(self):
- """
- Closes the file pointer, if possible.
-
- This operation will destroy the image core and release its memory.
- The image data will be unusable afterward.
-
- This function is required to close images that have multiple frames or
- have not had their file read and closed by the
- :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for
- more information.
- """
- try:
- if getattr(self, "_fp", False):
- if self._fp != self.fp:
- self._fp.close()
- self._fp = DeferredError(ValueError("Operation on closed image"))
- if self.fp:
- self.fp.close()
- self.fp = None
- except Exception as msg:
- logger.debug("Error closing: %s", msg)
-
- if getattr(self, "map", None):
- self.map = None
-
- # Instead of simply setting to None, we're setting up a
- # deferred error that will better explain that the core image
- # object is gone.
- self.im = DeferredError(ValueError("Operation on closed image"))
-
- def _copy(self):
- self.load()
- self.im = self.im.copy()
- self.pyaccess = None
- self.readonly = 0
-
- def _ensure_mutable(self):
- if self.readonly:
- self._copy()
- else:
- self.load()
-
- def _dump(self, file=None, format=None, **options):
- suffix = ""
- if format:
- suffix = "." + format
-
- if not file:
- f, filename = tempfile.mkstemp(suffix)
- os.close(f)
- else:
- filename = file
- if not filename.endswith(suffix):
- filename = filename + suffix
-
- self.load()
-
- if not format or format == "PPM":
- self.im.save_ppm(filename)
- else:
- self.save(filename, format, **options)
-
- return filename
-
- def __eq__(self, other):
- return (
- self.__class__ is other.__class__
- and self.mode == other.mode
- and self.size == other.size
- and self.info == other.info
- and self._category == other._category
- and self.getpalette() == other.getpalette()
- and self.tobytes() == other.tobytes()
- )
-
- def __repr__(self):
- return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % (
- self.__class__.__module__,
- self.__class__.__name__,
- self.mode,
- self.size[0],
- self.size[1],
- id(self),
- )
-
- def _repr_pretty_(self, p, cycle):
- """IPython plain text display support"""
-
- # Same as __repr__ but without unpredictable id(self),
- # to keep Jupyter notebook `text/plain` output stable.
- p.text(
- "<%s.%s image mode=%s size=%dx%d>"
- % (
- self.__class__.__module__,
- self.__class__.__name__,
- self.mode,
- self.size[0],
- self.size[1],
- )
- )
-
- def _repr_png_(self):
- """iPython display hook support
-
- :returns: png version of the image as bytes
- """
- b = io.BytesIO()
- try:
- self.save(b, "PNG")
- except Exception as e:
- raise ValueError("Could not save to PNG for display") from e
- return b.getvalue()
-
- @property
- def __array_interface__(self):
- # numpy array interface support
- new = {}
- shape, typestr = _conv_type_shape(self)
- new["shape"] = shape
- new["typestr"] = typestr
- new["version"] = 3
- try:
- if self.mode == "1":
- # Binary images need to be extended from bits to bytes
- # See: https://github.com/python-pillow/Pillow/issues/350
- new["data"] = self.tobytes("raw", "L")
- else:
- new["data"] = self.tobytes()
- except Exception as e:
- if not isinstance(e, (MemoryError, RecursionError)):
- try:
- import numpy
- from packaging.version import parse as parse_version
- except ImportError:
- pass
- else:
- if parse_version(numpy.__version__) < parse_version("1.23"):
- warnings.warn(e)
- raise
- return new
-
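The __array_interface__ property above is what lets NumPy consume an image directly; a minimal sketch, assuming Pillow and NumPy are installed:

    import numpy as np
    from PIL import Image

    im = Image.new("RGB", (4, 3), (255, 0, 0))
    arr = np.asarray(im)            # reads Image.__array_interface__
    print(arr.shape, arr.dtype)     # (3, 4, 3) uint8, i.e. (height, width, bands) as in _conv_type_shape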
- def __getstate__(self):
- return [self.info, self.mode, self.size, self.getpalette(), self.tobytes()]
-
- def __setstate__(self, state):
- Image.__init__(self)
- self.tile = []
- info, mode, size, palette, data = state
- self.info = info
- self.mode = mode
- self._size = size
- self.im = core.new(mode, size)
- if mode in ("L", "LA", "P", "PA") and palette:
- self.putpalette(palette)
- self.frombytes(data)
-
- def tobytes(self, encoder_name="raw", *args):
- """
- Return image as a bytes object.
-
- .. warning::
-
- This method returns the raw image data from the internal
- storage. For compressed image data (e.g. PNG, JPEG) use
- :meth:`~.save`, with a BytesIO parameter for in-memory
- data.
-
- :param encoder_name: What encoder to use. The default is to
- use the standard "raw" encoder.
-
- A list of C encoders can be seen under
- codecs section of the function array in
- :file:`_imaging.c`. Python encoders are
- registered within the relevant plugins.
- :param args: Extra arguments to the encoder.
- :returns: A :py:class:`bytes` object.
- """
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if encoder_name == "raw" and args == ():
- args = self.mode
-
- self.load()
-
- if self.width == 0 or self.height == 0:
- return b""
-
- # unpack data
- e = _getencoder(self.mode, encoder_name, args)
- e.setimage(self.im)
-
- bufsize = max(65536, self.size[0] * 4) # see RawEncode.c
-
- data = []
- while True:
- l, s, d = e.encode(bufsize)
- data.append(d)
- if s:
- break
-        if s < 0:
-            raise RuntimeError(f"encoder error {s} in tobytes")
-
- return b"".join(data)
-
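The warning in the tobytes docstring is easy to demonstrate: tobytes returns raw pixel data, while compressed formats go through save with an in-memory buffer. A minimal sketch, assuming Pillow is installed:

    import io
    from PIL import Image

    im = Image.new("RGB", (2, 2), (0, 128, 255))

    raw = im.tobytes()              # raw pixels: 2 * 2 * 3 = 12 bytes for RGB
    print(len(raw))                 # 12

    buf = io.BytesIO()
    im.save(buf, "PNG")             # compressed PNG stream, not raw pixels
    print(buf.getvalue()[:8])       # b'\x89PNG\r\n\x1a\n' signature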
- def tobitmap(self, name="image"):
- """
- Returns the image converted to an X11 bitmap.
-
- .. note:: This method only works for mode "1" images.
-
- :param name: The name prefix to use for the bitmap variables.
- :returns: A string containing an X11 bitmap.
- :raises ValueError: If the mode is not "1"
- """
-
- self.load()
- if self.mode != "1":
- raise ValueError("not a bitmap")
- data = self.tobytes("xbm")
- return b"".join(
- [
- f"#define {name}_width {self.size[0]}\n".encode("ascii"),
- f"#define {name}_height {self.size[1]}\n".encode("ascii"),
- f"static char {name}_bits[] = {{\n".encode("ascii"),
- data,
- b"};",
- ]
- )
-
- def frombytes(self, data, decoder_name="raw", *args):
- """
- Loads this image with pixel data from a bytes object.
-
- This method is similar to the :py:func:`~PIL.Image.frombytes` function,
- but loads data into this image instead of creating a new image object.
- """
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- # default format
- if decoder_name == "raw" and args == ():
- args = self.mode
-
- # unpack data
- d = _getdecoder(self.mode, decoder_name, args)
- d.setimage(self.im)
- s = d.decode(data)
-
- if s[0] >= 0:
- raise ValueError("not enough image data")
- if s[1] != 0:
- raise ValueError("cannot decode image data")
-
- def load(self):
- """
- Allocates storage for the image and loads the pixel data. In
- normal cases, you don't need to call this method, since the
- Image class automatically loads an opened image when it is
- accessed for the first time.
-
- If the file associated with the image was opened by Pillow, then this
- method will close it. The exception to this is if the image has
- multiple frames, in which case the file will be left open for seek
- operations. See :ref:`file-handling` for more information.
-
- :returns: An image access object.
- :rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess`
- """
- if self.im is not None and self.palette and self.palette.dirty:
- # realize palette
- mode, arr = self.palette.getdata()
- self.im.putpalette(mode, arr)
- self.palette.dirty = 0
- self.palette.rawmode = None
- if "transparency" in self.info and mode in ("LA", "PA"):
- if isinstance(self.info["transparency"], int):
- self.im.putpalettealpha(self.info["transparency"], 0)
- else:
- self.im.putpalettealphas(self.info["transparency"])
- self.palette.mode = "RGBA"
- else:
- palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB"
- self.palette.mode = palette_mode
- self.palette.palette = self.im.getpalette(palette_mode, palette_mode)
-
- if self.im is not None:
- if cffi and USE_CFFI_ACCESS:
- if self.pyaccess:
- return self.pyaccess
- from . import PyAccess
-
- self.pyaccess = PyAccess.new(self, self.readonly)
- if self.pyaccess:
- return self.pyaccess
- return self.im.pixel_access(self.readonly)
-
- def verify(self):
- """
- Verifies the contents of a file. For data read from a file, this
- method attempts to determine if the file is broken, without
- actually decoding the image data. If this method finds any
- problems, it raises suitable exceptions. If you need to load
- the image after using this method, you must reopen the image
- file.
- """
- pass
-
- def convert(
- self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256
- ):
- """
- Returns a converted copy of this image. For the "P" mode, this
- method translates pixels through the palette. If mode is
- omitted, a mode is chosen so that all information in the image
- and the palette can be represented without a palette.
-
- The current version supports all possible conversions between
- "L", "RGB" and "CMYK". The ``matrix`` argument only supports "L"
- and "RGB".
-
- When translating a color image to greyscale (mode "L"),
- the library uses the ITU-R 601-2 luma transform::
-
- L = R * 299/1000 + G * 587/1000 + B * 114/1000
-
- The default method of converting a greyscale ("L") or "RGB"
- image into a bilevel (mode "1") image uses Floyd-Steinberg
- dither to approximate the original image luminosity levels. If
- dither is ``None``, all values larger than 127 are set to 255 (white),
- all other values to 0 (black). To use other thresholds, use the
- :py:meth:`~PIL.Image.Image.point` method.
-
- When converting from "RGBA" to "P" without a ``matrix`` argument,
- this passes the operation to :py:meth:`~PIL.Image.Image.quantize`,
- and ``dither`` and ``palette`` are ignored.
-
- When converting from "PA", if an "RGBA" palette is present, the alpha
- channel from the image will be used instead of the values from the palette.
-
- :param mode: The requested mode. See: :ref:`concept-modes`.
- :param matrix: An optional conversion matrix. If given, this
- should be 4- or 12-tuple containing floating point values.
- :param dither: Dithering method, used when converting from
- mode "RGB" to "P" or from "RGB" or "L" to "1".
- Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
- (default). Note that this is not used when ``matrix`` is supplied.
- :param palette: Palette to use when converting from mode "RGB"
- to "P". Available palettes are :data:`Palette.WEB` or
- :data:`Palette.ADAPTIVE`.
- :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE`
- palette. Defaults to 256.
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- self.load()
-
- has_transparency = self.info.get("transparency") is not None
- if not mode and self.mode == "P":
- # determine default mode
- if self.palette:
- mode = self.palette.mode
- else:
- mode = "RGB"
- if mode == "RGB" and has_transparency:
- mode = "RGBA"
- if not mode or (mode == self.mode and not matrix):
- return self.copy()
-
- if matrix:
- # matrix conversion
- if mode not in ("L", "RGB"):
- raise ValueError("illegal conversion")
- im = self.im.convert_matrix(mode, matrix)
- new = self._new(im)
- if has_transparency and self.im.bands == 3:
- transparency = new.info["transparency"]
-
- def convert_transparency(m, v):
- v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5
- return max(0, min(255, int(v)))
-
- if mode == "L":
- transparency = convert_transparency(matrix, transparency)
- elif len(mode) == 3:
- transparency = tuple(
- convert_transparency(matrix[i * 4 : i * 4 + 4], transparency)
- for i in range(0, len(transparency))
- )
- new.info["transparency"] = transparency
- return new
-
- if mode == "P" and self.mode == "RGBA":
- return self.quantize(colors)
-
- trns = None
- delete_trns = False
- # transparency handling
- if has_transparency:
- if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or (
- self.mode == "RGB" and mode == "RGBA"
- ):
- # Use transparent conversion to promote from transparent
- # color to an alpha channel.
- new_im = self._new(
- self.im.convert_transparent(mode, self.info["transparency"])
- )
- del new_im.info["transparency"]
- return new_im
- elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"):
- t = self.info["transparency"]
- if isinstance(t, bytes):
- # Dragons. This can't be represented by a single color
- warnings.warn(
- "Palette images with Transparency expressed in bytes should be "
- "converted to RGBA images"
- )
- delete_trns = True
- else:
- # get the new transparency color.
- # use existing conversions
- trns_im = Image()._new(core.new(self.mode, (1, 1)))
- if self.mode == "P":
- trns_im.putpalette(self.palette)
- if isinstance(t, tuple):
- err = "Couldn't allocate a palette color for transparency"
- try:
- t = trns_im.palette.getcolor(t, self)
- except ValueError as e:
- if str(e) == "cannot allocate more than 256 colors":
- # If all 256 colors are in use,
- # then there is no need for transparency
- t = None
- else:
- raise ValueError(err) from e
- if t is None:
- trns = None
- else:
- trns_im.putpixel((0, 0), t)
-
- if mode in ("L", "RGB"):
- trns_im = trns_im.convert(mode)
- else:
- # can't just retrieve the palette number, got to do it
- # after quantization.
- trns_im = trns_im.convert("RGB")
- trns = trns_im.getpixel((0, 0))
-
- elif self.mode == "P" and mode in ("LA", "PA", "RGBA"):
- t = self.info["transparency"]
- delete_trns = True
-
- if isinstance(t, bytes):
- self.im.putpalettealphas(t)
- elif isinstance(t, int):
- self.im.putpalettealpha(t, 0)
- else:
- raise ValueError("Transparency for P mode should be bytes or int")
-
- if mode == "P" and palette == Palette.ADAPTIVE:
- im = self.im.quantize(colors)
- new = self._new(im)
- from . import ImagePalette
-
- new.palette = ImagePalette.ImagePalette("RGB", new.im.getpalette("RGB"))
- if delete_trns:
- # This could possibly happen if we requantize to fewer colors.
- # The transparency would be totally off in that case.
- del new.info["transparency"]
- if trns is not None:
- try:
- new.info["transparency"] = new.palette.getcolor(trns, new)
- except Exception:
- # if we can't make a transparent color, don't leave the old
- # transparency hanging around to mess us up.
- del new.info["transparency"]
- warnings.warn("Couldn't allocate palette entry for transparency")
- return new
-
- if "LAB" in (self.mode, mode):
- other_mode = mode if self.mode == "LAB" else self.mode
- if other_mode in ("RGB", "RGBA", "RGBX"):
- from . import ImageCms
-
- srgb = ImageCms.createProfile("sRGB")
- lab = ImageCms.createProfile("LAB")
- profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab]
- transform = ImageCms.buildTransform(
- profiles[0], profiles[1], self.mode, mode
- )
- return transform.apply(self)
-
- # colorspace conversion
- if dither is None:
- dither = Dither.FLOYDSTEINBERG
-
- try:
- im = self.im.convert(mode, dither)
- except ValueError:
- try:
- # normalize source image and try again
- modebase = getmodebase(self.mode)
- if modebase == self.mode:
- raise
- im = self.im.convert(modebase)
- im = im.convert(mode, dither)
- except KeyError as e:
- raise ValueError("illegal conversion") from e
-
- new_im = self._new(im)
- if mode == "P" and palette != Palette.ADAPTIVE:
- from . import ImagePalette
-
- new_im.palette = ImagePalette.ImagePalette("RGB", list(range(256)) * 3)
- if delete_trns:
- # crash fail if we leave a bytes transparency in an rgb/l mode.
- del new_im.info["transparency"]
- if trns is not None:
- if new_im.mode == "P":
- try:
- new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im)
- except ValueError as e:
- del new_im.info["transparency"]
- if str(e) != "cannot allocate more than 256 colors":
- # If all 256 colors are in use,
- # then there is no need for transparency
- warnings.warn(
- "Couldn't allocate palette entry for transparency"
- )
- else:
- new_im.info["transparency"] = trns
- return new_im
-
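To make the conversion rules in the convert docstring concrete, a short sketch covering the greyscale, bilevel and adaptive-palette paths, assuming Pillow is installed:

    from PIL import Image

    rgb = Image.new("RGB", (8, 8), (200, 60, 30))

    grey = rgb.convert("L")    # ITU-R 601-2 luma transform
    bw = rgb.convert("1")      # Floyd-Steinberg dither by default
    pal = rgb.convert("P", palette=Image.Palette.ADAPTIVE, colors=16)

    print(grey.mode, bw.mode, pal.mode)   # L 1 P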
- def quantize(
- self,
- colors=256,
- method=None,
- kmeans=0,
- palette=None,
- dither=Dither.FLOYDSTEINBERG,
- ):
- """
- Convert the image to 'P' mode with the specified number
- of colors.
-
- :param colors: The desired number of colors, <= 256
- :param method: :data:`Quantize.MEDIANCUT` (median cut),
- :data:`Quantize.MAXCOVERAGE` (maximum coverage),
- :data:`Quantize.FASTOCTREE` (fast octree),
- :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support
- using :py:func:`PIL.features.check_feature` with
- ``feature="libimagequant"``).
-
- By default, :data:`Quantize.MEDIANCUT` will be used.
-
- The exception to this is RGBA images. :data:`Quantize.MEDIANCUT`
- and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so
- :data:`Quantize.FASTOCTREE` is used by default instead.
- :param kmeans: Integer
- :param palette: Quantize to the palette of given
- :py:class:`PIL.Image.Image`.
- :param dither: Dithering method, used when converting from
- mode "RGB" to "P" or from "RGB" or "L" to "1".
- Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
- (default).
- :returns: A new image
-
- """
-
- self.load()
-
- if method is None:
- # defaults:
- method = Quantize.MEDIANCUT
- if self.mode == "RGBA":
- method = Quantize.FASTOCTREE
-
- if self.mode == "RGBA" and method not in (
- Quantize.FASTOCTREE,
- Quantize.LIBIMAGEQUANT,
- ):
- # Caller specified an invalid mode.
- raise ValueError(
- "Fast Octree (method == 2) and libimagequant (method == 3) "
- "are the only valid methods for quantizing RGBA images"
- )
-
- if palette:
- # use palette from reference image
- palette.load()
- if palette.mode != "P":
- raise ValueError("bad mode for palette image")
- if self.mode != "RGB" and self.mode != "L":
- raise ValueError(
- "only RGB or L mode images can be quantized to a palette"
- )
- im = self.im.convert("P", dither, palette.im)
- new_im = self._new(im)
- new_im.palette = palette.palette.copy()
- return new_im
-
- im = self._new(self.im.quantize(colors, method, kmeans))
-
- from . import ImagePalette
-
- mode = im.im.getpalettemode()
- palette = im.im.getpalette(mode, mode)[: colors * len(mode)]
- im.palette = ImagePalette.ImagePalette(mode, palette)
-
- return im
-
- def copy(self):
- """
- Copies this image. Use this method if you wish to paste things
- into an image, but still retain the original.
-
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
- self.load()
- return self._new(self.im.copy())
-
- __copy__ = copy
-
- def crop(self, box=None):
- """
- Returns a rectangular region from this image. The box is a
- 4-tuple defining the left, upper, right, and lower pixel
- coordinate. See :ref:`coordinate-system`.
-
- Note: Prior to Pillow 3.4.0, this was a lazy operation.
-
- :param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if box is None:
- return self.copy()
-
- if box[2] < box[0]:
- raise ValueError("Coordinate 'right' is less than 'left'")
- elif box[3] < box[1]:
- raise ValueError("Coordinate 'lower' is less than 'upper'")
-
- self.load()
- return self._new(self._crop(self.im, box))
-
- def _crop(self, im, box):
- """
- Returns a rectangular region from the core image object im.
-
- This is equivalent to calling im.crop((x0, y0, x1, y1)), but
- includes additional sanity checks.
-
- :param im: a core image object
- :param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
- :returns: A core image object.
- """
-
- x0, y0, x1, y1 = map(int, map(round, box))
-
- absolute_values = (abs(x1 - x0), abs(y1 - y0))
-
- _decompression_bomb_check(absolute_values)
-
- return im.crop((x0, y0, x1, y1))
-
- def draft(self, mode, size):
- """
- Configures the image file loader so it returns a version of the
- image that as closely as possible matches the given mode and
- size. For example, you can use this method to convert a color
- JPEG to greyscale while loading it.
-
- If any changes are made, returns a tuple with the chosen ``mode`` and
- ``box`` with coordinates of the original image within the altered one.
-
- Note that this method modifies the :py:class:`~PIL.Image.Image` object
- in place. If the image has already been loaded, this method has no
- effect.
-
- Note: This method is not implemented for most images. It is
- currently implemented only for JPEG and MPO images.
-
- :param mode: The requested mode.
- :param size: The requested size.
- """
- pass
-
- def _expand(self, xmargin, ymargin=None):
- if ymargin is None:
- ymargin = xmargin
- self.load()
- return self._new(self.im.expand(xmargin, ymargin, 0))
-
- def filter(self, filter):
- """
- Filters this image using the given filter. For a list of
- available filters, see the :py:mod:`~PIL.ImageFilter` module.
-
- :param filter: Filter kernel.
- :returns: An :py:class:`~PIL.Image.Image` object."""
-
- from . import ImageFilter
-
- self.load()
-
- if isinstance(filter, Callable):
- filter = filter()
- if not hasattr(filter, "filter"):
- raise TypeError(
- "filter argument should be ImageFilter.Filter instance or class"
- )
-
- multiband = isinstance(filter, ImageFilter.MultibandFilter)
- if self.im.bands == 1 or multiband:
- return self._new(filter.filter(self.im))
-
- ims = []
- for c in range(self.im.bands):
- ims.append(self._new(filter.filter(self.im.getband(c))))
- return merge(self.mode, ims)
-
- def getbands(self):
- """
- Returns a tuple containing the name of each band in this image.
- For example, ``getbands`` on an RGB image returns ("R", "G", "B").
-
- :returns: A tuple containing band names.
- :rtype: tuple
- """
- return ImageMode.getmode(self.mode).bands
-
- def getbbox(self):
- """
- Calculates the bounding box of the non-zero regions in the
- image.
-
- :returns: The bounding box is returned as a 4-tuple defining the
- left, upper, right, and lower pixel coordinate. See
- :ref:`coordinate-system`. If the image is completely empty, this
- method returns None.
-
- """
-
- self.load()
- return self.im.getbbox()
-
- def getcolors(self, maxcolors=256):
- """
- Returns a list of colors used in this image.
-
- The colors will be in the image's mode. For example, an RGB image will
- return a tuple of (red, green, blue) color values, and a P image will
- return the index of the color in the palette.
-
- :param maxcolors: Maximum number of colors. If this number is
- exceeded, this method returns None. The default limit is
- 256 colors.
- :returns: An unsorted list of (count, pixel) values.
- """
-
- self.load()
- if self.mode in ("1", "L", "P"):
- h = self.im.histogram()
- out = []
- for i in range(256):
- if h[i]:
- out.append((h[i], i))
- if len(out) > maxcolors:
- return None
- return out
- return self.im.getcolors(maxcolors)
-
- def getdata(self, band=None):
- """
- Returns the contents of this image as a sequence object
- containing pixel values. The sequence object is flattened, so
- that values for line one follow directly after the values of
- line zero, and so on.
-
- Note that the sequence object returned by this method is an
- internal PIL data type, which only supports certain sequence
- operations. To convert it to an ordinary sequence (e.g. for
- printing), use ``list(im.getdata())``.
-
- :param band: What band to return. The default is to return
- all bands. To return a single band, pass in the index
- value (e.g. 0 to get the "R" band from an "RGB" image).
- :returns: A sequence-like object.
- """
-
- self.load()
- if band is not None:
- return self.im.getband(band)
- return self.im # could be abused
-
- def getextrema(self):
- """
- Gets the minimum and maximum pixel values for each band in
- the image.
-
- :returns: For a single-band image, a 2-tuple containing the
- minimum and maximum pixel value. For a multi-band image,
- a tuple containing one 2-tuple for each band.
- """
-
- self.load()
- if self.im.bands > 1:
- extrema = []
- for i in range(self.im.bands):
- extrema.append(self.im.getband(i).getextrema())
- return tuple(extrema)
- return self.im.getextrema()
-
- def _getxmp(self, xmp_tags):
- def get_name(tag):
- return tag.split("}")[1]
-
- def get_value(element):
- value = {get_name(k): v for k, v in element.attrib.items()}
- children = list(element)
- if children:
- for child in children:
- name = get_name(child.tag)
- child_value = get_value(child)
- if name in value:
- if not isinstance(value[name], list):
- value[name] = [value[name]]
- value[name].append(child_value)
- else:
- value[name] = child_value
- elif value:
- if element.text:
- value["text"] = element.text
- else:
- return element.text
- return value
-
- if ElementTree is None:
- warnings.warn("XMP data cannot be read without defusedxml dependency")
- return {}
- else:
- root = ElementTree.fromstring(xmp_tags)
- return {get_name(root.tag): get_value(root)}
-
- def getexif(self):
- if self._exif is None:
- self._exif = Exif()
- self._exif._loaded = False
- elif self._exif._loaded:
- return self._exif
- self._exif._loaded = True
-
- exif_info = self.info.get("exif")
- if exif_info is None:
- if "Raw profile type exif" in self.info:
- exif_info = bytes.fromhex(
- "".join(self.info["Raw profile type exif"].split("\n")[3:])
- )
- elif hasattr(self, "tag_v2"):
- self._exif.bigtiff = self.tag_v2._bigtiff
- self._exif.endian = self.tag_v2._endian
- self._exif.load_from_fp(self.fp, self.tag_v2._offset)
- if exif_info is not None:
- self._exif.load(exif_info)
-
- # XMP tags
- if 0x0112 not in self._exif:
- xmp_tags = self.info.get("XML:com.adobe.xmp")
- if xmp_tags:
- match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
- if match:
- self._exif[0x0112] = int(match[2])
-
- return self._exif
-
- def _reload_exif(self):
- if self._exif is None or not self._exif._loaded:
- return
- self._exif._loaded = False
- self.getexif()
-
- def getim(self):
- """
- Returns a capsule that points to the internal image memory.
-
- :returns: A capsule object.
- """
-
- self.load()
- return self.im.ptr
-
- def getpalette(self, rawmode="RGB"):
- """
- Returns the image palette as a list.
-
- :param rawmode: The mode in which to return the palette. ``None`` will
- return the palette in its current mode.
-
- .. versionadded:: 9.1.0
-
- :returns: A list of color values [r, g, b, ...], or None if the
- image has no palette.
- """
-
- self.load()
- try:
- mode = self.im.getpalettemode()
- except ValueError:
- return None # no palette
- if rawmode is None:
- rawmode = mode
- return list(self.im.getpalette(mode, rawmode))
-
- def apply_transparency(self):
- """
- If a P mode image has a "transparency" key in the info dictionary,
- remove the key and apply the transparency to the palette instead.
- """
- if self.mode != "P" or "transparency" not in self.info:
- return
-
- from . import ImagePalette
-
- palette = self.getpalette("RGBA")
- transparency = self.info["transparency"]
- if isinstance(transparency, bytes):
- for i, alpha in enumerate(transparency):
- palette[i * 4 + 3] = alpha
- else:
- palette[transparency * 4 + 3] = 0
- self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette))
- self.palette.dirty = 1
-
- del self.info["transparency"]
-
- def getpixel(self, xy):
- """
- Returns the pixel value at a given position.
-
- :param xy: The coordinate, given as (x, y). See
- :ref:`coordinate-system`.
- :returns: The pixel value. If the image is a multi-layer image,
- this method returns a tuple.
- """
-
- self.load()
- if self.pyaccess:
- return self.pyaccess.getpixel(xy)
- return self.im.getpixel(xy)
-
- def getprojection(self):
- """
- Get projection to x and y axes
-
- :returns: Two sequences, indicating where there are non-zero
- pixels along the X-axis and the Y-axis, respectively.
- """
-
- self.load()
- x, y = self.im.getprojection()
- return list(x), list(y)
-
- def histogram(self, mask=None, extrema=None):
- """
- Returns a histogram for the image. The histogram is returned as a
- list of pixel counts, one for each pixel value in the source
- image. Counts are grouped into 256 bins for each band, even if
- the image has more than 8 bits per band. If the image has more
- than one band, the histograms for all bands are concatenated (for
- example, the histogram for an "RGB" image contains 768 values).
-
- A bilevel image (mode "1") is treated as a greyscale ("L") image
- by this method.
-
- If a mask is provided, the method returns a histogram for those
- parts of the image where the mask image is non-zero. The mask
- image must have the same size as the image, and be either a
- bi-level image (mode "1") or a greyscale image ("L").
-
- :param mask: An optional mask.
- :param extrema: An optional tuple of manually-specified extrema.
- :returns: A list containing pixel counts.
- """
- self.load()
- if mask:
- mask.load()
- return self.im.histogram((0, 0), mask.im)
- if self.mode in ("I", "F"):
- if extrema is None:
- extrema = self.getextrema()
- return self.im.histogram(extrema)
- return self.im.histogram()
-
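A quick check of the 768-value layout described in the histogram docstring, assuming Pillow is installed:

    from PIL import Image

    rgb = Image.new("RGB", (10, 10), (255, 0, 0))
    h = rgb.histogram()
    print(len(h))           # 768: three concatenated 256-bin bands (R, G, B)
    print(h[255], h[256])   # 100 and 100: every pixel has R=255 and G=0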
- def entropy(self, mask=None, extrema=None):
- """
- Calculates and returns the entropy for the image.
-
- A bilevel image (mode "1") is treated as a greyscale ("L")
- image by this method.
-
- If a mask is provided, the method employs the histogram for
- those parts of the image where the mask image is non-zero.
- The mask image must have the same size as the image, and be
- either a bi-level image (mode "1") or a greyscale image ("L").
-
- :param mask: An optional mask.
- :param extrema: An optional tuple of manually-specified extrema.
- :returns: A float value representing the image entropy
- """
- self.load()
- if mask:
- mask.load()
- return self.im.entropy((0, 0), mask.im)
- if self.mode in ("I", "F"):
- if extrema is None:
- extrema = self.getextrema()
- return self.im.entropy(extrema)
- return self.im.entropy()
-
- def paste(self, im, box=None, mask=None):
- """
- Pastes another image into this image. The box argument is either
- a 2-tuple giving the upper left corner, a 4-tuple defining the
- left, upper, right, and lower pixel coordinate, or None (same as
- (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size
- of the pasted image must match the size of the region.
-
- If the modes don't match, the pasted image is converted to the mode of
- this image (see the :py:meth:`~PIL.Image.Image.convert` method for
- details).
-
-        Instead of an image, the source can be an integer or tuple
- containing pixel values. The method then fills the region
- with the given color. When creating RGB images, you can
- also use color strings as supported by the ImageColor module.
-
- If a mask is given, this method updates only the regions
- indicated by the mask. You can use either "1", "L", "LA", "RGBA"
- or "RGBa" images (if present, the alpha band is used as mask).
- Where the mask is 255, the given image is copied as is. Where
- the mask is 0, the current value is preserved. Intermediate
- values will mix the two images together, including their alpha
- channels if they have them.
-
- See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to
- combine images with respect to their alpha channels.
-
- :param im: Source image or pixel value (integer or tuple).
- :param box: An optional 4-tuple giving the region to paste into.
- If a 2-tuple is used instead, it's treated as the upper left
- corner. If omitted or None, the source is pasted into the
- upper left corner.
-
- If an image is given as the second argument and there is no
- third, the box defaults to (0, 0), and the second argument
- is interpreted as a mask image.
- :param mask: An optional mask image.
- """
-
- if isImageType(box) and mask is None:
- # abbreviated paste(im, mask) syntax
- mask = box
- box = None
-
- if box is None:
- box = (0, 0)
-
- if len(box) == 2:
- # upper left corner given; get size from image or mask
- if isImageType(im):
- size = im.size
- elif isImageType(mask):
- size = mask.size
- else:
- # FIXME: use self.size here?
- raise ValueError("cannot determine region size; use 4-item box")
- box += (box[0] + size[0], box[1] + size[1])
-
- if isinstance(im, str):
- from . import ImageColor
-
- im = ImageColor.getcolor(im, self.mode)
-
- elif isImageType(im):
- im.load()
- if self.mode != im.mode:
- if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"):
- # should use an adapter for this!
- im = im.convert(self.mode)
- im = im.im
-
- self._ensure_mutable()
-
- if mask:
- mask.load()
- self.im.paste(im, box, mask.im)
- else:
- self.im.paste(im, box)
-
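A brief illustration of the box and mask behaviour spelled out in the paste docstring, assuming Pillow is installed:

    from PIL import Image

    canvas = Image.new("RGB", (100, 100), "black")
    patch = Image.new("RGB", (40, 40), "red")

    canvas.paste(patch, (10, 10))           # 2-tuple box: upper-left corner only

    mask = Image.new("L", (40, 40), 128)    # intermediate mask values blend the two images
    canvas.paste(Image.new("RGB", (40, 40), "white"), (50, 50), mask)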
- def alpha_composite(self, im, dest=(0, 0), source=(0, 0)):
- """'In-place' analog of Image.alpha_composite. Composites an image
- onto this image.
-
- :param im: image to composite over this one
- :param dest: Optional 2 tuple (left, top) specifying the upper
- left corner in this (destination) image.
- :param source: Optional 2 (left, top) tuple for the upper left
- corner in the overlay source image, or 4 tuple (left, top, right,
- bottom) for the bounds of the source rectangle
-
- Performance Note: Not currently implemented in-place in the core layer.
- """
-
- if not isinstance(source, (list, tuple)):
- raise ValueError("Source must be a tuple")
- if not isinstance(dest, (list, tuple)):
- raise ValueError("Destination must be a tuple")
- if not len(source) in (2, 4):
- raise ValueError("Source must be a 2 or 4-tuple")
- if not len(dest) == 2:
- raise ValueError("Destination must be a 2-tuple")
- if min(source) < 0:
- raise ValueError("Source must be non-negative")
-
- if len(source) == 2:
- source = source + im.size
-
- # over image, crop if it's not the whole thing.
- if source == (0, 0) + im.size:
- overlay = im
- else:
- overlay = im.crop(source)
-
- # target for the paste
- box = dest + (dest[0] + overlay.width, dest[1] + overlay.height)
-
- # destination image. don't copy if we're using the whole image.
- if box == (0, 0) + self.size:
- background = self
- else:
- background = self.crop(box)
-
- result = alpha_composite(background, overlay)
- self.paste(result, box)
-
- def point(self, lut, mode=None):
- """
- Maps this image through a lookup table or function.
-
- :param lut: A lookup table, containing 256 (or 65536 if
- self.mode=="I" and mode == "L") values per band in the
- image. A function can be used instead, it should take a
- single argument. The function is called once for each
- possible pixel value, and the resulting table is applied to
- all bands of the image.
-
- It may also be an :py:class:`~PIL.Image.ImagePointHandler`
- object::
-
- class Example(Image.ImagePointHandler):
- def point(self, data):
- # Return result
- :param mode: Output mode (default is same as input). In the
- current version, this can only be used if the source image
- has mode "L" or "P", and the output has mode "1" or the
- source image mode is "I" and the output mode is "L".
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- self.load()
-
- if isinstance(lut, ImagePointHandler):
- return lut.point(self)
-
- if callable(lut):
- # if it isn't a list, it should be a function
- if self.mode in ("I", "I;16", "F"):
- # check if the function can be used with point_transform
- # UNDONE wiredfool -- I think this prevents us from ever doing
- # a gamma function point transform on > 8bit images.
- scale, offset = _getscaleoffset(lut)
- return self._new(self.im.point_transform(scale, offset))
- # for other modes, convert the function to a table
- lut = [lut(i) for i in range(256)] * self.im.bands
-
- if self.mode == "F":
- # FIXME: _imaging returns a confusing error message for this case
- raise ValueError("point operation not supported for this mode")
-
- if mode != "F":
- lut = [round(i) for i in lut]
- return self._new(self.im.point(lut, mode))
-
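Both forms accepted by point, a plain function and an explicit lookup table, in a minimal sketch assuming Pillow is installed:

    from PIL import Image

    grey = Image.new("L", (4, 4), 100)

    brighter = grey.point(lambda v: min(255, v + 50))    # function, evaluated once per value 0..255
    binary = grey.point([0] * 128 + [255] * 128, "1")    # 256-entry table, output mode "1"

    print(brighter.getpixel((0, 0)), binary.mode)        # 150 1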
- def putalpha(self, alpha):
- """
- Adds or replaces the alpha layer in this image. If the image
- does not have an alpha layer, it's converted to "LA" or "RGBA".
- The new layer must be either "L" or "1".
-
- :param alpha: The new alpha layer. This can either be an "L" or "1"
- image having the same size as this image, or an integer or
- other color value.
- """
-
- self._ensure_mutable()
-
- if self.mode not in ("LA", "PA", "RGBA"):
- # attempt to promote self to a matching alpha mode
- try:
- mode = getmodebase(self.mode) + "A"
- try:
- self.im.setmode(mode)
- except (AttributeError, ValueError) as e:
- # do things the hard way
- im = self.im.convert(mode)
- if im.mode not in ("LA", "PA", "RGBA"):
- raise ValueError from e # sanity check
- self.im = im
- self.pyaccess = None
- self.mode = self.im.mode
- except KeyError as e:
- raise ValueError("illegal image mode") from e
-
- if self.mode in ("LA", "PA"):
- band = 1
- else:
- band = 3
-
- if isImageType(alpha):
- # alpha layer
- if alpha.mode not in ("1", "L"):
- raise ValueError("illegal image mode")
- alpha.load()
- if alpha.mode == "1":
- alpha = alpha.convert("L")
- else:
- # constant alpha
- try:
- self.im.fillband(band, alpha)
- except (AttributeError, ValueError):
- # do things the hard way
- alpha = new("L", self.size, alpha)
- else:
- return
-
- self.im.putband(alpha.im, band)
-
- def putdata(self, data, scale=1.0, offset=0.0):
- """
- Copies pixel data from a flattened sequence object into the image. The
- values should start at the upper left corner (0, 0), continue to the
- end of the line, followed directly by the first value of the second
- line, and so on. Data will be read until either the image or the
- sequence ends. The scale and offset values are used to adjust the
- sequence values: **pixel = value*scale + offset**.
-
- :param data: A flattened sequence object.
- :param scale: An optional scale value. The default is 1.0.
- :param offset: An optional offset value. The default is 0.0.
- """
-
- self._ensure_mutable()
-
- self.im.putdata(data, scale, offset)
-
- def putpalette(self, data, rawmode="RGB"):
- """
- Attaches a palette to this image. The image must be a "P", "PA", "L"
- or "LA" image.
-
- The palette sequence must contain at most 256 colors, made up of one
- integer value for each channel in the raw mode.
- For example, if the raw mode is "RGB", then it can contain at most 768
- values, made up of red, green and blue values for the corresponding pixel
- index in the 256 colors.
- If the raw mode is "RGBA", then it can contain at most 1024 values,
- containing red, green, blue and alpha values.
-
- Alternatively, an 8-bit string may be used instead of an integer sequence.
-
- :param data: A palette sequence (either a list or a string).
- :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
- that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
- """
- from . import ImagePalette
-
- if self.mode not in ("L", "LA", "P", "PA"):
- raise ValueError("illegal image mode")
- if isinstance(data, ImagePalette.ImagePalette):
- palette = ImagePalette.raw(data.rawmode, data.palette)
- else:
- if not isinstance(data, bytes):
- data = bytes(data)
- palette = ImagePalette.raw(rawmode, data)
- self.mode = "PA" if "A" in self.mode else "P"
- self.palette = palette
- self.palette.mode = "RGB"
- self.load() # install new palette
-
- def putpixel(self, xy, value):
- """
- Modifies the pixel at the given position. The color is given as
- a single numerical value for single-band images, and a tuple for
- multi-band images. In addition to this, RGB and RGBA tuples are
- accepted for P and PA images.
-
- Note that this method is relatively slow. For more extensive changes,
- use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw`
- module instead.
-
- See:
-
- * :py:meth:`~PIL.Image.Image.paste`
- * :py:meth:`~PIL.Image.Image.putdata`
- * :py:mod:`~PIL.ImageDraw`
-
- :param xy: The pixel coordinate, given as (x, y). See
- :ref:`coordinate-system`.
- :param value: The pixel value.
- """
-
- if self.readonly:
- self._copy()
- self.load()
-
- if self.pyaccess:
- return self.pyaccess.putpixel(xy, value)
-
- if (
- self.mode in ("P", "PA")
- and isinstance(value, (list, tuple))
- and len(value) in [3, 4]
- ):
- # RGB or RGBA value for a P or PA image
- if self.mode == "PA":
- alpha = value[3] if len(value) == 4 else 255
- value = value[:3]
- value = self.palette.getcolor(value, self)
- if self.mode == "PA":
- value = (value, alpha)
- return self.im.putpixel(xy, value)
-
- def remap_palette(self, dest_map, source_palette=None):
- """
- Rewrites the image to reorder the palette.
-
- :param dest_map: A list of indexes into the original palette.
- e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))``
- is the identity transform.
- :param source_palette: Bytes or None.
- :returns: An :py:class:`~PIL.Image.Image` object.
-
- """
- from . import ImagePalette
-
- if self.mode not in ("L", "P"):
- raise ValueError("illegal image mode")
-
- bands = 3
- palette_mode = "RGB"
- if source_palette is None:
- if self.mode == "P":
- self.load()
- palette_mode = self.im.getpalettemode()
- if palette_mode == "RGBA":
- bands = 4
- source_palette = self.im.getpalette(palette_mode, palette_mode)
- else: # L-mode
- source_palette = bytearray(i // 3 for i in range(768))
-
- palette_bytes = b""
- new_positions = [0] * 256
-
- # pick only the used colors from the palette
- for i, oldPosition in enumerate(dest_map):
- palette_bytes += source_palette[
- oldPosition * bands : oldPosition * bands + bands
- ]
- new_positions[oldPosition] = i
-
- # replace the palette color id of all pixel with the new id
-
- # Palette images are [0..255], mapped through a 1 or 3
- # byte/color map. We need to remap the whole image
- # from palette 1 to palette 2. New_positions is
- # an array of indexes into palette 1. Palette 2 is
- # palette 1 with any holes removed.
-
- # We're going to leverage the convert mechanism to use the
- # C code to remap the image from palette 1 to palette 2,
- # by forcing the source image into 'L' mode and adding a
- # mapping 'L' mode palette, then converting back to 'L'
- # sans palette thus converting the image bytes, then
- # assigning the optimized RGB palette.
-
- # perf reference, 9500x4000 gif, w/~135 colors
- # 14 sec prepatch, 1 sec postpatch with optimization forced.
-
- mapping_palette = bytearray(new_positions)
-
- m_im = self.copy()
- m_im.mode = "P"
-
- m_im.palette = ImagePalette.ImagePalette(
- palette_mode, palette=mapping_palette * bands
- )
- # possibly set palette dirty, then
- # m_im.putpalette(mapping_palette, 'L') # converts to 'P'
- # or just force it.
- # UNDONE -- this is part of the general issue with palettes
- m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes())
-
- m_im = m_im.convert("L")
-
- m_im.putpalette(palette_bytes, palette_mode)
- m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes)
-
- if "transparency" in self.info:
- try:
- m_im.info["transparency"] = dest_map.index(self.info["transparency"])
- except ValueError:
- if "transparency" in m_im.info:
- del m_im.info["transparency"]
-
- return m_im
-
- def _get_safe_box(self, size, resample, box):
- """Expands the box so it includes adjacent pixels
- that may be used by resampling with the given resampling filter.
- """
- filter_support = _filters_support[resample] - 0.5
- scale_x = (box[2] - box[0]) / size[0]
- scale_y = (box[3] - box[1]) / size[1]
- support_x = filter_support * scale_x
- support_y = filter_support * scale_y
-
- return (
- max(0, int(box[0] - support_x)),
- max(0, int(box[1] - support_y)),
- min(self.size[0], math.ceil(box[2] + support_x)),
- min(self.size[1], math.ceil(box[3] + support_y)),
- )
-
- def resize(self, size, resample=None, box=None, reducing_gap=None):
- """
- Returns a resized copy of this image.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param resample: An optional resampling filter. This can be
- one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
- :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
- :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
- If the image has mode "1" or "P", it is always set to
- :py:data:`Resampling.NEAREST`. If the image mode specifies a number
- of bits, such as "I;16", then the default filter is
- :py:data:`Resampling.NEAREST`. Otherwise, the default filter is
- :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`.
- :param box: An optional 4-tuple of floats providing
- the source image region to be scaled.
- The values must be within (0, 0, width, height) rectangle.
- If omitted or None, the entire source is used.
- :param reducing_gap: Apply optimization by resizing the image
- in two steps. First, reducing the image by integer times
- using :py:meth:`~PIL.Image.Image.reduce`.
- Second, resizing using regular resampling. The last step
- changes size no less than by ``reducing_gap`` times.
- ``reducing_gap`` may be None (no first step is performed)
- or should be greater than 1.0. The bigger ``reducing_gap``,
- the closer the result to the fair resampling.
- The smaller ``reducing_gap``, the faster resizing.
- With ``reducing_gap`` greater or equal to 3.0, the result is
- indistinguishable from fair resampling in most cases.
- The default value is None (no optimization).
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if resample is None:
- type_special = ";" in self.mode
- resample = Resampling.NEAREST if type_special else Resampling.BICUBIC
- elif resample not in (
- Resampling.NEAREST,
- Resampling.BILINEAR,
- Resampling.BICUBIC,
- Resampling.LANCZOS,
- Resampling.BOX,
- Resampling.HAMMING,
- ):
- message = f"Unknown resampling filter ({resample})."
-
- filters = [
- f"{filter[1]} ({filter[0]})"
- for filter in (
- (Resampling.NEAREST, "Image.Resampling.NEAREST"),
- (Resampling.LANCZOS, "Image.Resampling.LANCZOS"),
- (Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
- (Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
- (Resampling.BOX, "Image.Resampling.BOX"),
- (Resampling.HAMMING, "Image.Resampling.HAMMING"),
- )
- ]
- raise ValueError(
- message + " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
- )
-
- if reducing_gap is not None and reducing_gap < 1.0:
- raise ValueError("reducing_gap must be 1.0 or greater")
-
- size = tuple(size)
-
- self.load()
- if box is None:
- box = (0, 0) + self.size
- else:
- box = tuple(box)
-
- if self.size == size and box == (0, 0) + self.size:
- return self.copy()
-
- if self.mode in ("1", "P"):
- resample = Resampling.NEAREST
-
- if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST:
- im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- im = im.resize(size, resample, box)
- return im.convert(self.mode)
-
- self.load()
-
- if reducing_gap is not None and resample != Resampling.NEAREST:
- factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1
- factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1
- if factor_x > 1 or factor_y > 1:
- reduce_box = self._get_safe_box(size, resample, box)
- factor = (factor_x, factor_y)
- if callable(self.reduce):
- self = self.reduce(factor, box=reduce_box)
- else:
- self = Image.reduce(self, factor, box=reduce_box)
- box = (
- (box[0] - reduce_box[0]) / factor_x,
- (box[1] - reduce_box[1]) / factor_y,
- (box[2] - reduce_box[0]) / factor_x,
- (box[3] - reduce_box[1]) / factor_y,
- )
-
- return self._new(self.im.resize(size, resample, box))
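-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # "hopper.jpg"): downscale with LANCZOS using the two-step ``reducing_gap``
- # optimization described above.
- #
- #     from PIL import Image
- #     with Image.open("hopper.jpg") as im:
- #         small = im.resize((128, 128), Image.Resampling.LANCZOS, reducing_gap=3.0)
- #         small.save("hopper_128.png")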
-
- def reduce(self, factor, box=None):
- """
- Returns a copy of the image reduced ``factor`` times.
- If the size of the image is not divisible by ``factor``,
- the resulting size will be rounded up.
-
- :param factor: A greater than 0 integer or tuple of two integers
- for width and height separately.
- :param box: An optional 4-tuple of ints providing
- the source image region to be reduced.
- The values must be within ``(0, 0, width, height)`` rectangle.
- If omitted or ``None``, the entire source is used.
- """
- if not isinstance(factor, (list, tuple)):
- factor = (factor, factor)
-
- if box is None:
- box = (0, 0) + self.size
- else:
- box = tuple(box)
-
- if factor == (1, 1) and box == (0, 0) + self.size:
- return self.copy()
-
- if self.mode in ["LA", "RGBA"]:
- im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- im = im.reduce(factor, box)
- return im.convert(self.mode)
-
- self.load()
-
- return self._new(self.im.reduce(factor, box))
-
- def rotate(
- self,
- angle,
- resample=Resampling.NEAREST,
- expand=0,
- center=None,
- translate=None,
- fillcolor=None,
- ):
- """
- Returns a rotated copy of this image. This method returns a
- copy of this image, rotated the given number of degrees counter
- clockwise around its centre.
-
- :param angle: In degrees counter clockwise.
- :param resample: An optional resampling filter. This can be
- one of :py:data:`Resampling.NEAREST` (use nearest neighbour),
- :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
- environment), or :py:data:`Resampling.BICUBIC` (cubic spline
- interpolation in a 4x4 environment). If omitted, or if the image has
- mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
- See :ref:`concept-filters`.
- :param expand: Optional expansion flag. If true, expands the output
- image to make it large enough to hold the entire rotated image.
- If false or omitted, make the output image the same size as the
- input image. Note that the expand flag assumes rotation around
- the center and no translation.
- :param center: Optional center of rotation (a 2-tuple). Origin is
- the upper left corner. Default is the center of the image.
- :param translate: An optional post-rotate translation (a 2-tuple).
- :param fillcolor: An optional color for area outside the rotated image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- angle = angle % 360.0
-
- # Fast paths regardless of filter, as long as we're not
- # translating or changing the center.
- if not (center or translate):
- if angle == 0:
- return self.copy()
- if angle == 180:
- return self.transpose(Transpose.ROTATE_180)
- if angle in (90, 270) and (expand or self.width == self.height):
- return self.transpose(
- Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270
- )
-
- # Calculate the affine matrix. Note that this is the reverse
- # transformation (from destination image to source) because we
- # want to interpolate the (discrete) destination pixel from
- # the local area around the (floating) source pixel.
-
- # The matrix we actually want (note that it operates from the right):
- # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx)
- # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy)
- # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1)
-
- # The reverse matrix is thus:
- # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx)
- # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty)
- # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1)
-
- # In any case, the final translation may be updated at the end to
- # compensate for the expand flag.
-
- w, h = self.size
-
- if translate is None:
- post_trans = (0, 0)
- else:
- post_trans = translate
- if center is None:
- # FIXME These should be rounded to ints?
- rotn_center = (w / 2.0, h / 2.0)
- else:
- rotn_center = center
-
- angle = -math.radians(angle)
- matrix = [
- round(math.cos(angle), 15),
- round(math.sin(angle), 15),
- 0.0,
- round(-math.sin(angle), 15),
- round(math.cos(angle), 15),
- 0.0,
- ]
-
- def transform(x, y, matrix):
- (a, b, c, d, e, f) = matrix
- return a * x + b * y + c, d * x + e * y + f
-
- matrix[2], matrix[5] = transform(
- -rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix
- )
- matrix[2] += rotn_center[0]
- matrix[5] += rotn_center[1]
-
- if expand:
- # calculate output size
- xx = []
- yy = []
- for x, y in ((0, 0), (w, 0), (w, h), (0, h)):
- x, y = transform(x, y, matrix)
- xx.append(x)
- yy.append(y)
- nw = math.ceil(max(xx)) - math.floor(min(xx))
- nh = math.ceil(max(yy)) - math.floor(min(yy))
-
- # We multiply a translation matrix from the right. Because of its
- # special form, this is the same as taking the image of the
- # translation vector as new translation vector.
- matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix)
- w, h = nw, nh
-
- return self.transform(
- (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor
- )
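-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # "hopper.jpg"): rotate 45 degrees counter clockwise, growing the canvas and
- # filling the exposed corners.
- #
- #     from PIL import Image
- #     with Image.open("hopper.jpg") as im:
- #         rotated = im.rotate(
- #             45, resample=Image.Resampling.BICUBIC, expand=True, fillcolor="white"
- #         )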
-
- def save(self, fp, format=None, **params):
- """
- Saves this image under the given filename. If no format is
- specified, the format to use is determined from the filename
- extension, if possible.
-
- Keyword options can be used to provide additional instructions
- to the writer. If a writer doesn't recognise an option, it is
- silently ignored. The available options are described in the
- :doc:`image format documentation
- <../handbook/image-file-formats>` for each writer.
-
- You can use a file object instead of a filename. In this case,
- you must always specify the format. The file object must
- implement the ``seek``, ``tell``, and ``write``
- methods, and be opened in binary mode.
-
- :param fp: A filename (string), pathlib.Path object or file object.
- :param format: Optional format override. If omitted, the
- format to use is determined from the filename extension.
- If a file object was used instead of a filename, this
- parameter should always be used.
- :param params: Extra parameters to the image writer.
- :returns: None
- :exception ValueError: If the output format could not be determined
- from the file name. Use the format option to solve this.
- :exception OSError: If the file could not be written. The file
- may have been created, and may contain partial data.
- """
-
- filename = ""
- open_fp = False
- if isinstance(fp, Path):
- filename = str(fp)
- open_fp = True
- elif is_path(fp):
- filename = fp
- open_fp = True
- elif fp == sys.stdout:
- try:
- fp = sys.stdout.buffer
- except AttributeError:
- pass
- if not filename and hasattr(fp, "name") and is_path(fp.name):
- # only set the name for metadata purposes
- filename = fp.name
-
- # may mutate self!
- self._ensure_mutable()
-
- save_all = params.pop("save_all", False)
- self.encoderinfo = params
- self.encoderconfig = ()
-
- preinit()
-
- ext = os.path.splitext(filename)[1].lower()
-
- if not format:
- if ext not in EXTENSION:
- init()
- try:
- format = EXTENSION[ext]
- except KeyError as e:
- raise ValueError(f"unknown file extension: {ext}") from e
-
- if format.upper() not in SAVE:
- init()
- if save_all:
- save_handler = SAVE_ALL[format.upper()]
- else:
- save_handler = SAVE[format.upper()]
-
- created = False
- if open_fp:
- created = not os.path.exists(filename)
- if params.get("append", False):
- # Open also for reading ("+"), because TIFF save_all
- # writer needs to go back and edit the written data.
- fp = builtins.open(filename, "r+b")
- else:
- fp = builtins.open(filename, "w+b")
-
- try:
- save_handler(self, fp, filename)
- except Exception:
- if open_fp:
- fp.close()
- if created:
- try:
- os.remove(filename)
- except PermissionError:
- pass
- raise
- if open_fp:
- fp.close()
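-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # "hopper.jpg"): saving to a file object requires an explicit format, since
- # there is no filename extension to infer it from.
- #
- #     import io
- #     from PIL import Image
- #     buffer = io.BytesIO()
- #     with Image.open("hopper.jpg") as im:
- #         im.save(buffer, format="PNG")
- #     png_bytes = buffer.getvalue()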
-
- def seek(self, frame):
- """
- Seeks to the given frame in this sequence file. If you seek
- beyond the end of the sequence, the method raises an
- ``EOFError`` exception. When a sequence file is opened, the
- library automatically seeks to frame 0.
-
- See :py:meth:`~PIL.Image.Image.tell`.
-
- If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
- number of available frames.
-
- :param frame: Frame number, starting at 0.
- :exception EOFError: If the call attempts to seek beyond the end
- of the sequence.
- """
-
- # overridden by file handlers
- if frame != 0:
- raise EOFError
-
- def show(self, title=None):
- """
- Displays this image. This method is mainly intended for debugging purposes.
-
- This method calls :py:func:`PIL.ImageShow.show` internally. You can use
- :py:func:`PIL.ImageShow.register` to override its default behaviour.
-
- The image is first saved to a temporary file. By default, it will be in
- PNG format.
-
- On Unix, the image is then opened using the **display**, **eog** or
- **xv** utility, depending on which one can be found.
-
- On macOS, the image is opened with the native Preview application.
-
- On Windows, the image is opened with the standard PNG display utility.
-
- :param title: Optional title to use for the image window, where possible.
- """
-
- _show(self, title=title)
-
- def split(self):
- """
- Split this image into individual bands. This method returns a
- tuple of individual image bands from an image. For example,
- splitting an "RGB" image creates three new images each
- containing a copy of one of the original bands (red, green,
- blue).
-
- If you need only one band, :py:meth:`~PIL.Image.Image.getchannel`
- method can be more convenient and faster.
-
- :returns: A tuple containing bands.
- """
-
- self.load()
- if self.im.bands == 1:
- ims = [self.copy()]
- else:
- ims = map(self._new, self.im.split())
- return tuple(ims)
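-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # RGB "hopper.jpg"): split all bands, or grab a single one with getchannel().
- #
- #     from PIL import Image
- #     with Image.open("hopper.jpg") as im:
- #         r, g, b = im.split()
- #         red = im.getchannel("R")  # same data as r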
-
- def getchannel(self, channel):
- """
- Returns an image containing a single channel of the source image.
-
- :param channel: What channel to return. Could be index
- (0 for "R" channel of "RGB") or channel name
- ("A" for alpha channel of "RGBA").
- :returns: An image in "L" mode.
-
- .. versionadded:: 4.3.0
- """
- self.load()
-
- if isinstance(channel, str):
- try:
- channel = self.getbands().index(channel)
- except ValueError as e:
- raise ValueError(f'The image has no channel "{channel}"') from e
-
- return self._new(self.im.getband(channel))
-
- def tell(self):
- """
- Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`.
-
- If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
- number of available frames.
-
- :returns: Frame number, starting with 0.
- """
- return 0
-
- def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0):
- """
- Make this image into a thumbnail. This method modifies the
- image to contain a thumbnail version of itself, no larger than
- the given size. This method calculates an appropriate thumbnail
- size to preserve the aspect of the image, calls the
- :py:meth:`~PIL.Image.Image.draft` method to configure the file reader
- (where applicable), and finally resizes the image.
-
- Note that this function modifies the :py:class:`~PIL.Image.Image`
- object in place. If you need to use the full resolution image as well,
- apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original
- image.
-
- :param size: Requested size.
- :param resample: Optional resampling filter. This can be one
- of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
- :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
- :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
- If omitted, it defaults to :py:data:`Resampling.BICUBIC`.
- (was :py:data:`Resampling.NEAREST` prior to version 2.5.0).
- See: :ref:`concept-filters`.
- :param reducing_gap: Apply optimization by resizing the image
- in two steps. First, reducing the image by integer times
- using :py:meth:`~PIL.Image.Image.reduce` or
- :py:meth:`~PIL.Image.Image.draft` for JPEG images.
- Second, resizing using regular resampling. The last step
- changes size no less than by ``reducing_gap`` times.
- ``reducing_gap`` may be None (no first step is performed)
- or should be greater than 1.0. The bigger ``reducing_gap``,
- the closer the result is to fair resampling.
- The smaller ``reducing_gap``, the faster the resizing.
- With ``reducing_gap`` greater or equal to 3.0, the result is
- indistinguishable from fair resampling in most cases.
- The default value is 2.0 (very close to fair resampling
- while still being faster in many cases).
- :returns: None
- """
-
- provided_size = tuple(map(math.floor, size))
-
- def preserve_aspect_ratio():
- def round_aspect(number, key):
- return max(min(math.floor(number), math.ceil(number), key=key), 1)
-
- x, y = provided_size
- if x >= self.width and y >= self.height:
- return
-
- aspect = self.width / self.height
- if x / y >= aspect:
- x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y))
- else:
- y = round_aspect(
- x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n)
- )
- return x, y
-
- box = None
- if reducing_gap is not None:
- size = preserve_aspect_ratio()
- if size is None:
- return
-
- res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap))
- if res is not None:
- box = res[1]
- if box is None:
- self.load()
-
- # load() may have changed the size of the image
- size = preserve_aspect_ratio()
- if size is None:
- return
-
- if self.size != size:
- im = self.resize(size, resample, box=box, reducing_gap=reducing_gap)
-
- self.im = im.im
- self._size = size
- self.mode = self.im.mode
-
- self.readonly = 0
- self.pyaccess = None
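-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # "hopper.jpg"): thumbnail() works in place, so copy first if the full-size
- # image is still needed.
- #
- #     from PIL import Image
- #     with Image.open("hopper.jpg") as im:
- #         preview = im.copy()
- #         preview.thumbnail((128, 128))  # keeps the aspect ratio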
-
- # FIXME: the different transform methods need further explanation
- # instead of bloating the method docs, add a separate chapter.
- def transform(
- self,
- size,
- method,
- data=None,
- resample=Resampling.NEAREST,
- fill=1,
- fillcolor=None,
- ):
- """
- Transforms this image. This method creates a new image with the
- given size, and the same mode as the original, and copies data
- to the new image using the given transform.
-
- :param size: The output size.
- :param method: The transformation method. This is one of
- :py:data:`Transform.EXTENT` (cut out a rectangular subregion),
- :py:data:`Transform.AFFINE` (affine transform),
- :py:data:`Transform.PERSPECTIVE` (perspective transform),
- :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or
- :py:data:`Transform.MESH` (map a number of source quadrilaterals
- in one operation).
-
- It may also be an :py:class:`~PIL.Image.ImageTransformHandler`
- object::
-
- class Example(Image.ImageTransformHandler):
- def transform(self, size, data, resample, fill=1):
- # Return result
-
- It may also be an object with a ``method.getdata`` method
- that returns a tuple supplying new ``method`` and ``data`` values::
-
- class Example:
- def getdata(self):
- method = Image.Transform.EXTENT
- data = (0, 0, 100, 100)
- return method, data
- :param data: Extra data to the transformation method.
- :param resample: Optional resampling filter. It can be one of
- :py:data:`Resampling.NEAREST` (use nearest neighbour),
- :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
- environment), or :py:data:`Resampling.BICUBIC` (cubic spline
- interpolation in a 4x4 environment). If omitted, or if the image
- has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
- See: :ref:`concept-filters`.
- :param fill: If ``method`` is an
- :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of
- the arguments passed to it. Otherwise, it is unused.
- :param fillcolor: Optional fill color for the area outside the
- transform in the output image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST:
- return (
- self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- .transform(size, method, data, resample, fill, fillcolor)
- .convert(self.mode)
- )
-
- if isinstance(method, ImageTransformHandler):
- return method.transform(size, self, resample=resample, fill=fill)
-
- if hasattr(method, "getdata"):
- # compatibility w. old-style transform objects
- method, data = method.getdata()
-
- if data is None:
- raise ValueError("missing method data")
-
- im = new(self.mode, size, fillcolor)
- if self.mode == "P" and self.palette:
- im.palette = self.palette.copy()
- im.info = self.info.copy()
- if method == Transform.MESH:
- # list of quads
- for box, quad in data:
- im.__transformer(
- box, self, Transform.QUAD, quad, resample, fillcolor is None
- )
- else:
- im.__transformer(
- (0, 0) + size, self, method, data, resample, fillcolor is None
- )
-
- return im
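-
- # Illustrative usage sketch (not part of the original source; assumes a local
- # "hopper.jpg"): EXTENT cuts a source rectangle and scales it to the output
- # size in one step.
- #
- #     from PIL import Image
- #     with Image.open("hopper.jpg") as im:
- #         top_left = im.transform(
- #             (256, 256), Image.Transform.EXTENT, (0, 0, im.width // 2, im.height // 2)
- #         )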
-
- def __transformer(
- self, box, image, method, data, resample=Resampling.NEAREST, fill=1
- ):
- w = box[2] - box[0]
- h = box[3] - box[1]
-
- if method == Transform.AFFINE:
- data = data[:6]
-
- elif method == Transform.EXTENT:
- # convert extent to an affine transform
- x0, y0, x1, y1 = data
- xs = (x1 - x0) / w
- ys = (y1 - y0) / h
- method = Transform.AFFINE
- data = (xs, 0, x0, 0, ys, y0)
-
- elif method == Transform.PERSPECTIVE:
- data = data[:8]
-
- elif method == Transform.QUAD:
- # quadrilateral warp. data specifies the four corners
- # given as NW, SW, SE, and NE.
- nw = data[:2]
- sw = data[2:4]
- se = data[4:6]
- ne = data[6:8]
- x0, y0 = nw
- As = 1.0 / w
- At = 1.0 / h
- data = (
- x0,
- (ne[0] - x0) * As,
- (sw[0] - x0) * At,
- (se[0] - sw[0] - ne[0] + x0) * As * At,
- y0,
- (ne[1] - y0) * As,
- (sw[1] - y0) * At,
- (se[1] - sw[1] - ne[1] + y0) * As * At,
- )
-
- else:
- raise ValueError("unknown transformation method")
-
- if resample not in (
- Resampling.NEAREST,
- Resampling.BILINEAR,
- Resampling.BICUBIC,
- ):
- if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS):
- message = {
- Resampling.BOX: "Image.Resampling.BOX",
- Resampling.HAMMING: "Image.Resampling.HAMMING",
- Resampling.LANCZOS: "Image.Resampling.LANCZOS",
- }[resample] + f" ({resample}) cannot be used."
- else:
- message = f"Unknown resampling filter ({resample})."
-
- filters = [
- f"{filter[1]} ({filter[0]})"
- for filter in (
- (Resampling.NEAREST, "Image.Resampling.NEAREST"),
- (Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
- (Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
- )
- ]
- raise ValueError(
- message + " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
- )
-
- image.load()
-
- self.load()
-
- if image.mode in ("1", "P"):
- resample = Resampling.NEAREST
-
- self.im.transform2(box, image.im, method, data, resample, fill)
-
- def transpose(self, method):
- """
- Transpose image (flip or rotate in 90 degree steps)
-
- :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`,
- :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`,
- :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`,
- :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`.
- :returns: Returns a flipped or rotated copy of this image.
- """
-
- self.load()
- return self._new(self.im.transpose(method))
-
- def effect_spread(self, distance):
- """
- Randomly spread pixels in an image.
-
- :param distance: Distance to spread pixels.
- """
- self.load()
- return self._new(self.im.effect_spread(distance))
-
- def toqimage(self):
- """Returns a QImage copy of this image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- raise ImportError("Qt bindings are not installed")
- return ImageQt.toqimage(self)
-
- def toqpixmap(self):
- """Returns a QPixmap copy of this image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- raise ImportError("Qt bindings are not installed")
- return ImageQt.toqpixmap(self)
-
-
-# --------------------------------------------------------------------
-# Abstract handlers.
-
-
-class ImagePointHandler:
- """
- Used as a mixin by point transforms
- (for use with :py:meth:`~PIL.Image.Image.point`)
- """
-
- pass
-
-
-class ImageTransformHandler:
- """
- Used as a mixin by geometry transforms
- (for use with :py:meth:`~PIL.Image.Image.transform`)
- """
-
- pass
-
-
-# --------------------------------------------------------------------
-# Factories
-
-#
-# Debugging
-
-
-def _wedge():
- """Create greyscale wedge (for debugging only)"""
-
- return Image()._new(core.wedge("L"))
-
-
-def _check_size(size):
- """
- Common check to enforce type and sanity check on size tuples
-
- :param size: Should be a 2 tuple of (width, height)
- :returns: True, or raises a ValueError
- """
-
- if not isinstance(size, (list, tuple)):
- raise ValueError("Size must be a tuple")
- if len(size) != 2:
- raise ValueError("Size must be a tuple of length 2")
- if size[0] < 0 or size[1] < 0:
- raise ValueError("Width and height must be >= 0")
-
- return True
-
-
-def new(mode, size, color=0):
- """
- Creates a new image with the given mode and size.
-
- :param mode: The mode to use for the new image. See:
- :ref:`concept-modes`.
- :param size: A 2-tuple, containing (width, height) in pixels.
- :param color: What color to use for the image. Default is black.
- If given, this should be a single integer or floating point value
- for single-band modes, and a tuple for multi-band modes (one value
- per band). When creating RGB images, you can also use color
- strings as supported by the ImageColor module. If the color is
- None, the image is not initialised.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- _check_size(size)
-
- if color is None:
- # don't initialize
- return Image()._new(core.new(mode, size))
-
- if isinstance(color, str):
- # css3-style specifier
-
- from . import ImageColor
-
- color = ImageColor.getcolor(color, mode)
-
- im = Image()
- if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]:
- # RGB or RGBA value for a P image
- from . import ImagePalette
-
- im.palette = ImagePalette.ImagePalette()
- color = im.palette.getcolor(color)
- return im._new(core.fill(mode, size, color))
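-
-# Illustrative usage sketch (not part of the original source): create blank
-# images; CSS3 color names are accepted for RGB modes.
-#
-#     from PIL import Image
-#     canvas = Image.new("RGB", (320, 240), color="navy")
-#     blank = Image.new("L", (64, 64))  # single band, defaults to black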
-
-
-def frombytes(mode, size, data, decoder_name="raw", *args):
- """
- Creates a copy of an image memory from pixel data in a buffer.
-
- In its simplest form, this function takes three arguments
- (mode, size, and unpacked pixel data).
-
- You can also use any pixel decoder supported by PIL. For more
- information on available decoders, see the section
- :ref:`Writing Your Own File Codec <file-codecs>`.
-
- Note that this function decodes pixel data only, not entire images.
- If you have an entire image in a string, wrap it in a
- :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load
- it.
-
- :param mode: The image mode. See: :ref:`concept-modes`.
- :param size: The image size.
- :param data: A byte buffer containing raw data for the given mode.
- :param decoder_name: What decoder to use.
- :param args: Additional parameters for the given decoder.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- _check_size(size)
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if decoder_name == "raw" and args == ():
- args = mode
-
- im = new(mode, size)
- im.frombytes(data, decoder_name, args)
- return im
-
-
-def frombuffer(mode, size, data, decoder_name="raw", *args):
- """
- Creates an image memory referencing pixel data in a byte buffer.
-
- This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data
- in the byte buffer, where possible. This means that changes to the
- original buffer object are reflected in this image. Not all modes can
- share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK".
-
- Note that this function decodes pixel data only, not entire images.
- If you have an entire image file in a string, wrap it in a
- :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it.
-
- In the current version, the default parameters used for the "raw" decoder
- differ from those used for :py:func:`~PIL.Image.frombytes`. This is a
- bug, and will probably be fixed in a future release. The current release
- issues a warning if you do this; to disable the warning, you should provide
- the full set of parameters. See below for details.
-
- :param mode: The image mode. See: :ref:`concept-modes`.
- :param size: The image size.
- :param data: A bytes or other buffer object containing raw
- data for the given mode.
- :param decoder_name: What decoder to use.
- :param args: Additional parameters for the given decoder. For the
- default encoder ("raw"), it's recommended that you provide the
- full set of parameters::
-
- frombuffer(mode, size, data, "raw", mode, 0, 1)
-
- :returns: An :py:class:`~PIL.Image.Image` object.
-
- .. versionadded:: 1.1.4
- """
-
- _check_size(size)
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if decoder_name == "raw":
- if args == ():
- args = mode, 0, 1
- if args[0] in _MAPMODES:
- im = new(mode, (1, 1))
- im = im._new(core.map_buffer(data, size, decoder_name, 0, args))
- if mode == "P":
- from . import ImagePalette
-
- im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB"))
- im.readonly = 1
- return im
-
- return frombytes(mode, size, data, decoder_name, args)
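-
-# Illustrative usage sketch (not part of the original source): wrap an existing
-# buffer, sharing memory where the mode allows it; the full set of "raw"
-# decoder arguments is spelled out as advised above.
-#
-#     import numpy as np
-#     from PIL import Image
-#     arr = np.zeros((100, 100, 4), dtype=np.uint8)
-#     im = Image.frombuffer("RGBA", (100, 100), arr, "raw", "RGBA", 0, 1)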
-
-
-def fromarray(obj, mode=None):
- """
- Creates an image memory from an object exporting the array interface
- (using the buffer protocol).
-
- If ``obj`` is not contiguous, then the ``tobytes`` method is called
- and :py:func:`~PIL.Image.frombuffer` is used.
-
- If you have an image in NumPy::
-
- from PIL import Image
- import numpy as np
- im = Image.open("hopper.jpg")
- a = np.asarray(im)
-
- Then this can be used to convert it to a Pillow image::
-
- im = Image.fromarray(a)
-
- :param obj: Object with array interface
- :param mode: Optional mode to use when reading ``obj``. Will be determined from
- type if ``None``.
-
- This will not be used to convert the data after reading, but will be used to
- change how the data is read::
-
- from PIL import Image
- import numpy as np
- a = np.full((1, 1), 300)
- im = Image.fromarray(a, mode="L")
- im.getpixel((0, 0)) # 44
- im = Image.fromarray(a, mode="RGB")
- im.getpixel((0, 0)) # (44, 1, 0)
-
- See: :ref:`concept-modes` for general information about modes.
- :returns: An image object.
-
- .. versionadded:: 1.1.6
- """
- arr = obj.__array_interface__
- shape = arr["shape"]
- ndim = len(shape)
- strides = arr.get("strides", None)
- if mode is None:
- try:
- typekey = (1, 1) + shape[2:], arr["typestr"]
- except KeyError as e:
- raise TypeError("Cannot handle this data type") from e
- try:
- mode, rawmode = _fromarray_typemap[typekey]
- except KeyError as e:
- raise TypeError("Cannot handle this data type: %s, %s" % typekey) from e
- else:
- rawmode = mode
- if mode in ["1", "L", "I", "P", "F"]:
- ndmax = 2
- elif mode == "RGB":
- ndmax = 3
- else:
- ndmax = 4
- if ndim > ndmax:
- raise ValueError(f"Too many dimensions: {ndim} > {ndmax}.")
-
- size = 1 if ndim == 1 else shape[1], shape[0]
- if strides is not None:
- if hasattr(obj, "tobytes"):
- obj = obj.tobytes()
- else:
- obj = obj.tostring()
-
- return frombuffer(mode, size, obj, "raw", rawmode, 0, 1)
-
-
-def fromqimage(im):
- """Creates an image instance from a QImage image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- raise ImportError("Qt bindings are not installed")
- return ImageQt.fromqimage(im)
-
-
-def fromqpixmap(im):
- """Creates an image instance from a QPixmap image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- raise ImportError("Qt bindings are not installed")
- return ImageQt.fromqpixmap(im)
-
-
-_fromarray_typemap = {
- # (shape, typestr) => mode, rawmode
- # first two members of shape are set to one
- ((1, 1), "|b1"): ("1", "1;8"),
- ((1, 1), "|u1"): ("L", "L"),
- ((1, 1), "|i1"): ("I", "I;8"),
- ((1, 1), "u2"): ("I", "I;16B"),
- ((1, 1), "i2"): ("I", "I;16BS"),
- ((1, 1), "u4"): ("I", "I;32B"),
- ((1, 1), "i4"): ("I", "I;32BS"),
- ((1, 1), "f4"): ("F", "F;32BF"),
- ((1, 1), "f8"): ("F", "F;64BF"),
- ((1, 1, 2), "|u1"): ("LA", "LA"),
- ((1, 1, 3), "|u1"): ("RGB", "RGB"),
- ((1, 1, 4), "|u1"): ("RGBA", "RGBA"),
- # shortcuts:
- ((1, 1), _ENDIAN + "i4"): ("I", "I"),
- ((1, 1), _ENDIAN + "f4"): ("F", "F"),
-}
-
-
-def _decompression_bomb_check(size):
- if MAX_IMAGE_PIXELS is None:
- return
-
- pixels = size[0] * size[1]
-
- if pixels > 2 * MAX_IMAGE_PIXELS:
- raise DecompressionBombError(
- f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} "
- "pixels, could be decompression bomb DOS attack."
- )
-
- if pixels > MAX_IMAGE_PIXELS:
- warnings.warn(
- f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, "
- "could be decompression bomb DOS attack.",
- DecompressionBombWarning,
- )
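-
-# Illustrative usage sketch (not part of the original source): the limit above
-# can be raised or disabled for trusted inputs before calling open().
-#
-#     from PIL import Image
-#     Image.MAX_IMAGE_PIXELS = 300_000_000  # raise the warning threshold
-#     # Image.MAX_IMAGE_PIXELS = None       # disable the check entirely (use with care)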
-
-
-def open(fp, mode="r", formats=None):
- """
- Opens and identifies the given image file.
-
- This is a lazy operation; this function identifies the file, but
- the file remains open and the actual image data is not read from
- the file until you try to process the data (or call the
- :py:meth:`~PIL.Image.Image.load` method). See
- :py:func:`~PIL.Image.new`. See :ref:`file-handling`.
-
- :param fp: A filename (string), pathlib.Path object or a file object.
- The file object must implement ``file.read``,
- ``file.seek``, and ``file.tell`` methods,
- and be opened in binary mode.
- :param mode: The mode. If given, this argument must be "r".
- :param formats: A list or tuple of formats to attempt to load the file in.
- This can be used to restrict the set of formats checked.
- Pass ``None`` to try all supported formats. You can print the set of
- available formats by running ``python3 -m PIL`` or using
- the :py:func:`PIL.features.pilinfo` function.
- :returns: An :py:class:`~PIL.Image.Image` object.
- :exception FileNotFoundError: If the file cannot be found.
- :exception PIL.UnidentifiedImageError: If the image cannot be opened and
- identified.
- :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO``
- instance is used for ``fp``.
- :exception TypeError: If ``formats`` is not ``None``, a list or a tuple.
- """
-
- if mode != "r":
- raise ValueError(f"bad mode {repr(mode)}")
- elif isinstance(fp, io.StringIO):
- raise ValueError(
- "StringIO cannot be used to open an image. "
- "Binary data must be used instead."
- )
-
- if formats is None:
- formats = ID
- elif not isinstance(formats, (list, tuple)):
- raise TypeError("formats must be a list or tuple")
-
- exclusive_fp = False
- filename = ""
- if isinstance(fp, Path):
- filename = str(fp.resolve())
- elif is_path(fp):
- filename = fp
-
- if filename:
- fp = builtins.open(filename, "rb")
- exclusive_fp = True
-
- try:
- fp.seek(0)
- except (AttributeError, io.UnsupportedOperation):
- fp = io.BytesIO(fp.read())
- exclusive_fp = True
-
- prefix = fp.read(16)
-
- preinit()
-
- accept_warnings = []
-
- def _open_core(fp, filename, prefix, formats):
- for i in formats:
- i = i.upper()
- if i not in OPEN:
- init()
- try:
- factory, accept = OPEN[i]
- result = not accept or accept(prefix)
- if type(result) in [str, bytes]:
- accept_warnings.append(result)
- elif result:
- fp.seek(0)
- im = factory(fp, filename)
- _decompression_bomb_check(im.size)
- return im
- except (SyntaxError, IndexError, TypeError, struct.error):
- # Leave disabled by default, spams the logs with image
- # opening failures that are entirely expected.
- # logger.debug("", exc_info=True)
- continue
- except BaseException:
- if exclusive_fp:
- fp.close()
- raise
- return None
-
- im = _open_core(fp, filename, prefix, formats)
-
- if im is None:
- if init():
- im = _open_core(fp, filename, prefix, formats)
-
- if im:
- im._exclusive_fp = exclusive_fp
- return im
-
- if exclusive_fp:
- fp.close()
- for message in accept_warnings:
- warnings.warn(message)
- raise UnidentifiedImageError(
- "cannot identify image file %r" % (filename if filename else fp)
- )
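-
-# Illustrative usage sketch (not part of the original source; assumes a local
-# "hopper.jpg"): open() is lazy, so load() inside the with block forces the
-# pixel data to be read; ``formats`` restricts which plugins are tried.
-#
-#     from PIL import Image
-#     with Image.open("hopper.jpg", formats=["JPEG", "PNG"]) as im:
-#         im.load()
-#         print(im.format, im.size, im.mode)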
-
-
-#
-# Image processing.
-
-
-def alpha_composite(im1, im2):
- """
- Alpha composite im2 over im1.
-
- :param im1: The first image. Must have mode RGBA.
- :param im2: The second image. Must have mode RGBA, and the same size as
- the first image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- im1.load()
- im2.load()
- return im1._new(core.alpha_composite(im1.im, im2.im))
-
-
-def blend(im1, im2, alpha):
- """
- Creates a new image by interpolating between two input images, using
- a constant alpha::
-
- out = image1 * (1.0 - alpha) + image2 * alpha
-
- :param im1: The first image.
- :param im2: The second image. Must have the same mode and size as
- the first image.
- :param alpha: The interpolation alpha factor. If alpha is 0.0, a
- copy of the first image is returned. If alpha is 1.0, a copy of
- the second image is returned. There are no restrictions on the
- alpha value. If necessary, the result is clipped to fit into
- the allowed output range.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- im1.load()
- im2.load()
- return im1._new(core.blend(im1.im, im2.im, alpha))
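-
-# Illustrative usage sketch (not part of the original source): a 50/50 mix of
-# two same-size, same-mode images using the formula above.
-#
-#     from PIL import Image
-#     im1 = Image.new("RGB", (64, 64), "red")
-#     im2 = Image.new("RGB", (64, 64), "blue")
-#     halfway = Image.blend(im1, im2, alpha=0.5)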
-
-
-def composite(image1, image2, mask):
- """
- Create composite image by blending images using a transparency mask.
-
- :param image1: The first image.
- :param image2: The second image. Must have the same mode and
- size as the first image.
- :param mask: A mask image. This image can have mode
- "1", "L", or "RGBA", and must have the same size as the
- other two images.
- """
-
- image = image2.copy()
- image.paste(image1, None, mask)
- return image
-
-
-def eval(image, *args):
- """
- Applies the function (which should take one argument) to each pixel
- in the given image. If the image has more than one band, the same
- function is applied to each band. Note that the function is
- evaluated once for each possible pixel value, so you cannot use
- random components or other generators.
-
- :param image: The input image.
- :param function: A function object, taking one integer argument.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- return image.point(args[0])
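-
-# Illustrative usage sketch (not part of the original source; assumes a local
-# "hopper.jpg"): the callable is evaluated once per possible pixel value, so
-# simple lookups such as inversion are cheap.
-#
-#     from PIL import Image
-#     with Image.open("hopper.jpg") as im:
-#         inverted = Image.eval(im, lambda px: 255 - px)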
-
-
-def merge(mode, bands):
- """
- Merge a set of single band images into a new multiband image.
-
- :param mode: The mode to use for the output image. See:
- :ref:`concept-modes`.
- :param bands: A sequence containing one single-band image for
- each band in the output image. All bands must have the
- same size.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if getmodebands(mode) != len(bands) or "*" in mode:
- raise ValueError("wrong number of bands")
- for band in bands[1:]:
- if band.mode != getmodetype(mode):
- raise ValueError("mode mismatch")
- if band.size != bands[0].size:
- raise ValueError("size mismatch")
- for band in bands:
- band.load()
- return bands[0]._new(core.merge(mode, *[b.im for b in bands]))
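-
-# Illustrative usage sketch (not part of the original source; assumes a local
-# RGB "hopper.jpg"): reorder the bands of an image by splitting and merging.
-#
-#     from PIL import Image
-#     with Image.open("hopper.jpg") as im:
-#         r, g, b = im.split()
-#         bgr = Image.merge("RGB", (b, g, r))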
-
-
-# --------------------------------------------------------------------
-# Plugin registry
-
-
-def register_open(id, factory, accept=None):
- """
- Register an image file plugin. This function should not be used
- in application code.
-
- :param id: An image format identifier.
- :param factory: An image file factory method.
- :param accept: An optional function that can be used to quickly
- reject images having another format.
- """
- id = id.upper()
- ID.append(id)
- OPEN[id] = factory, accept
-
-
-def register_mime(id, mimetype):
- """
- Registers an image MIME type. This function should not be used
- in application code.
-
- :param id: An image format identifier.
- :param mimetype: The image MIME type for this format.
- """
- MIME[id.upper()] = mimetype
-
-
-def register_save(id, driver):
- """
- Registers an image save function. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param driver: A function to save images in this format.
- """
- SAVE[id.upper()] = driver
-
-
-def register_save_all(id, driver):
- """
- Registers an image function to save all the frames
- of a multiframe format. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param driver: A function to save images in this format.
- """
- SAVE_ALL[id.upper()] = driver
-
-
-def register_extension(id, extension):
- """
- Registers an image extension. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param extension: An extension used for this format.
- """
- EXTENSION[extension.lower()] = id.upper()
-
-
-def register_extensions(id, extensions):
- """
- Registers image extensions. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param extensions: A list of extensions used for this format.
- """
- for extension in extensions:
- register_extension(id, extension)
-
-
-def registered_extensions():
- """
- Returns a dictionary containing all file extensions belonging
- to registered plugins
- """
- if not EXTENSION:
- init()
- return EXTENSION
-
-
-def register_decoder(name, decoder):
- """
- Registers an image decoder. This function should not be
- used in application code.
-
- :param name: The name of the decoder
- :param decoder: A callable(mode, args) that returns an
- ImageFile.PyDecoder object
-
- .. versionadded:: 4.1.0
- """
- DECODERS[name] = decoder
-
-
-def register_encoder(name, encoder):
- """
- Registers an image encoder. This function should not be
- used in application code.
-
- :param name: The name of the encoder
- :param encoder: A callable(mode, args) that returns an
- ImageFile.PyEncoder object
-
- .. versionadded:: 4.1.0
- """
- ENCODERS[name] = encoder
-
-
-# --------------------------------------------------------------------
-# Simple display support.
-
-
-def _show(image, **options):
- from . import ImageShow
-
- ImageShow.show(image, **options)
-
-
-# --------------------------------------------------------------------
-# Effects
-
-
-def effect_mandelbrot(size, extent, quality):
- """
- Generate a Mandelbrot set covering the given extent.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param extent: The extent to cover, as a 4-tuple:
- (x0, y0, x1, y1).
- :param quality: Quality.
- """
- return Image()._new(core.effect_mandelbrot(size, extent, quality))
-
-
-def effect_noise(size, sigma):
- """
- Generate Gaussian noise centered around 128.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param sigma: Standard deviation of noise.
- """
- return Image()._new(core.effect_noise(size, sigma))
-
-
-def linear_gradient(mode):
- """
- Generate 256x256 linear gradient from black to white, top to bottom.
-
- :param mode: Input mode.
- """
- return Image()._new(core.linear_gradient(mode))
-
-
-def radial_gradient(mode):
- """
- Generate 256x256 radial gradient from black to white, centre to edge.
-
- :param mode: Input mode.
- """
- return Image()._new(core.radial_gradient(mode))
-
-
-# --------------------------------------------------------------------
-# Resources
-
-
-def _apply_env_variables(env=None):
- if env is None:
- env = os.environ
-
- for var_name, setter in [
- ("PILLOW_ALIGNMENT", core.set_alignment),
- ("PILLOW_BLOCK_SIZE", core.set_block_size),
- ("PILLOW_BLOCKS_MAX", core.set_blocks_max),
- ]:
- if var_name not in env:
- continue
-
- var = env[var_name].lower()
-
- units = 1
- for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]:
- if var.endswith(postfix):
- units = mul
- var = var[: -len(postfix)]
-
- try:
- var = int(var) * units
- except ValueError:
- warnings.warn(f"{var_name} is not int")
- continue
-
- try:
- setter(var)
- except ValueError as e:
- warnings.warn(f"{var_name}: {e}")
-
-
-_apply_env_variables()
-atexit.register(core.clear_cache)
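-
-# Illustrative usage sketch (not part of the original source): the variables
-# above are read once at import time, so set them before the first Pillow
-# import; "k" and "m" suffixes are parsed as shown.
-#
-#     import os
-#     os.environ["PILLOW_BLOCK_SIZE"] = "1m"   # must precede "from PIL import Image"
-#     os.environ["PILLOW_BLOCKS_MAX"] = "64"
-#     from PIL import Image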
-
-
-class Exif(MutableMapping):
- endian = None
- bigtiff = False
-
- def __init__(self):
- self._data = {}
- self._ifds = {}
- self._info = None
- self._loaded_exif = None
-
- def _fixup(self, value):
- try:
- if len(value) == 1 and isinstance(value, tuple):
- return value[0]
- except Exception:
- pass
- return value
-
- def _fixup_dict(self, src_dict):
- # Helper function
- # returns a dict with any single item tuples/lists as individual values
- return {k: self._fixup(v) for k, v in src_dict.items()}
-
- def _get_ifd_dict(self, offset):
- try:
- # an offset pointer to the location of the nested embedded IFD.
- # It should be a long, but may be corrupted.
- self.fp.seek(offset)
- except (KeyError, TypeError):
- pass
- else:
- from . import TiffImagePlugin
-
- info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- info.load(self.fp)
- return self._fixup_dict(info)
-
- def _get_head(self):
- version = b"\x2B" if self.bigtiff else b"\x2A"
- if self.endian == "<":
- head = b"II" + version + b"\x00" + o32le(8)
- else:
- head = b"MM\x00" + version + o32be(8)
- if self.bigtiff:
- head += o32le(8) if self.endian == "<" else o32be(8)
- head += b"\x00\x00\x00\x00"
- return head
-
- def load(self, data):
- # Extract EXIF information. This is highly experimental,
- # and is likely to be replaced with something better in a future
- # version.
-
- # The EXIF record consists of a TIFF file embedded in a JPEG
- # application marker (!).
- if data == self._loaded_exif:
- return
- self._loaded_exif = data
- self._data.clear()
- self._ifds.clear()
- if data and data.startswith(b"Exif\x00\x00"):
- data = data[6:]
- if not data:
- self._info = None
- return
-
- self.fp = io.BytesIO(data)
- self.head = self.fp.read(8)
- # process dictionary
- from . import TiffImagePlugin
-
- self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- self.endian = self._info._endian
- self.fp.seek(self._info.next)
- self._info.load(self.fp)
-
- def load_from_fp(self, fp, offset=None):
- self._loaded_exif = None
- self._data.clear()
- self._ifds.clear()
-
- # process dictionary
- from . import TiffImagePlugin
-
- self.fp = fp
- if offset is not None:
- self.head = self._get_head()
- else:
- self.head = self.fp.read(8)
- self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- if self.endian is None:
- self.endian = self._info._endian
- if offset is None:
- offset = self._info.next
- self.fp.seek(offset)
- self._info.load(self.fp)
-
- def _get_merged_dict(self):
- merged_dict = dict(self)
-
- # get EXIF extension
- if 0x8769 in self:
- ifd = self._get_ifd_dict(self[0x8769])
- if ifd:
- merged_dict.update(ifd)
-
- # GPS
- if 0x8825 in self:
- merged_dict[0x8825] = self._get_ifd_dict(self[0x8825])
-
- return merged_dict
-
- def tobytes(self, offset=8):
- from . import TiffImagePlugin
-
- head = self._get_head()
- ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head)
- for tag, value in self.items():
- if tag in [0x8769, 0x8225, 0x8825] and not isinstance(value, dict):
- value = self.get_ifd(tag)
- if (
- tag == 0x8769
- and 0xA005 in value
- and not isinstance(value[0xA005], dict)
- ):
- value = value.copy()
- value[0xA005] = self.get_ifd(0xA005)
- ifd[tag] = value
- return b"Exif\x00\x00" + head + ifd.tobytes(offset)
-
- def get_ifd(self, tag):
- if tag not in self._ifds:
- if tag in [0x8769, 0x8825]:
- # exif, gpsinfo
- if tag in self:
- self._ifds[tag] = self._get_ifd_dict(self[tag])
- elif tag in [0xA005, 0x927C]:
- # interop, makernote
- if 0x8769 not in self._ifds:
- self.get_ifd(0x8769)
- tag_data = self._ifds[0x8769][tag]
- if tag == 0x927C:
- # makernote
- from .TiffImagePlugin import ImageFileDirectory_v2
-
- if tag_data[:8] == b"FUJIFILM":
- ifd_offset = i32le(tag_data, 8)
- ifd_data = tag_data[ifd_offset:]
-
- makernote = {}
- for i in range(0, struct.unpack(" 4:
- (offset,) = struct.unpack("H", tag_data[:2])[0]):
- ifd_tag, typ, count, data = struct.unpack(
- ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2]
- )
- if ifd_tag == 0x1101:
- # CameraInfo
- (offset,) = struct.unpack(">L", data)
- self.fp.seek(offset)
-
- camerainfo = {"ModelID": self.fp.read(4)}
-
- self.fp.read(4)
- # Seconds since 2000
- camerainfo["TimeStamp"] = i32le(self.fp.read(12))
-
- self.fp.read(4)
- camerainfo["InternalSerialNumber"] = self.fp.read(4)
-
- self.fp.read(12)
- parallax = self.fp.read(4)
- handler = ImageFileDirectory_v2._load_dispatch[
- TiffTags.FLOAT
- ][1]
- camerainfo["Parallax"] = handler(
- ImageFileDirectory_v2(), parallax, False
- )
-
- self.fp.read(4)
- camerainfo["Category"] = self.fp.read(2)
-
- makernote = {0x1101: dict(self._fixup_dict(camerainfo))}
- self._ifds[tag] = makernote
- else:
- # interop
- self._ifds[tag] = self._get_ifd_dict(tag_data)
- return self._ifds.get(tag, {})
-
- def __str__(self):
- if self._info is not None:
- # Load all keys into self._data
- for tag in self._info.keys():
- self[tag]
-
- return str(self._data)
-
- def __len__(self):
- keys = set(self._data)
- if self._info is not None:
- keys.update(self._info)
- return len(keys)
-
- def __getitem__(self, tag):
- if self._info is not None and tag not in self._data and tag in self._info:
- self._data[tag] = self._fixup(self._info[tag])
- del self._info[tag]
- return self._data[tag]
-
- def __contains__(self, tag):
- return tag in self._data or (self._info is not None and tag in self._info)
-
- def __setitem__(self, tag, value):
- if self._info is not None and tag in self._info:
- del self._info[tag]
- self._data[tag] = value
-
- def __delitem__(self, tag):
- if self._info is not None and tag in self._info:
- del self._info[tag]
- else:
- del self._data[tag]
-
- def __iter__(self):
- keys = set(self._data)
- if self._info is not None:
- keys.update(self._info)
- return iter(keys)
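-
- # Illustrative usage sketch (not part of the original source; assumes a
- # "photo.jpg" that carries EXIF data): read top-level tags with getexif()
- # and nested IFDs with get_ifd().
- #
- #     from PIL import Image
- #     with Image.open("photo.jpg") as im:
- #         exif = im.getexif()
- #         print(exif.get(0x0110))          # camera model, if present
- #         exif_ifd = exif.get_ifd(0x8769)  # nested EXIF IFD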
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py
deleted file mode 100644
index 624db71d5421a00400113467a76c7a23b4f25c9e..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py
+++ /dev/null
@@ -1,55 +0,0 @@
-'''
-US Population Pyramid Over Time
-===============================
-A population pyramid shows the distribution of age groups within a population.
-It uses a slider widget that is bound to the year to visualize the age
-distribution over time.
-'''
-# category: case studies
-import altair as alt
-from vega_datasets import data
-
-source = data.population.url
-
-slider = alt.binding_range(min=1850, max=2000, step=10)
-select_year = alt.selection_single(name='year', fields=['year'],
- bind=slider, init={'year': 2000})
-
-base = alt.Chart(source).add_selection(
- select_year
-).transform_filter(
- select_year
-).transform_calculate(
- gender=alt.expr.if_(alt.datum.sex == 1, 'Male', 'Female')
-).properties(
- width=250
-)
-
-
-color_scale = alt.Scale(domain=['Male', 'Female'],
- range=['#1f77b4', '#e377c2'])
-
-left = base.transform_filter(
- alt.datum.gender == 'Female'
-).encode(
- y=alt.Y('age:O', axis=None),
- x=alt.X('sum(people):Q',
- title='population',
- sort=alt.SortOrder('descending')),
- color=alt.Color('gender:N', scale=color_scale, legend=None)
-).mark_bar().properties(title='Female')
-
-middle = base.encode(
- y=alt.Y('age:O', axis=None),
- text=alt.Text('age:Q'),
-).mark_text().properties(width=20)
-
-right = base.transform_filter(
- alt.datum.gender == 'Male'
-).encode(
- y=alt.Y('age:O', axis=None),
- x=alt.X('sum(people):Q', title='population'),
- color=alt.Color('gender:N', scale=color_scale, legend=None)
-).mark_bar().properties(title='Male')
-
-alt.concat(left, middle, right, spacing=5)
\ No newline at end of file
diff --git a/spaces/asteph/harrywang-pokemon-lora/app.py b/spaces/asteph/harrywang-pokemon-lora/app.py
deleted file mode 100644
index c9837672ddc0c4e17dd4cf655c786fd4ef92317d..0000000000000000000000000000000000000000
--- a/spaces/asteph/harrywang-pokemon-lora/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/harrywang/pokemon-lora").launch()
\ No newline at end of file
diff --git a/spaces/avivdm1/AutoGPT/autogpt/json_utils/__init__.py b/spaces/avivdm1/AutoGPT/autogpt/json_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/avivdm1/AutoGPT/run_continuous.bat b/spaces/avivdm1/AutoGPT/run_continuous.bat
deleted file mode 100644
index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/run_continuous.bat
+++ /dev/null
@@ -1,3 +0,0 @@
-@echo off
-set argument=--continuous
-call run.bat %argument%
diff --git a/spaces/awacke1/2-LiveASR/app.py b/spaces/awacke1/2-LiveASR/app.py
deleted file mode 100644
index b19b04136d7b2ab879c98b3d38b872a735352641..0000000000000000000000000000000000000000
--- a/spaces/awacke1/2-LiveASR/app.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import gradio as gr
-import torch
-import time
-import librosa
-import soundfile
-import nemo.collections.asr as nemo_asr
-import tempfile
-import os
-import uuid
-
-from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
-import torch
-
-# PersistDataset -----
-import os
-import csv
-import gradio as gr
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-
-# ---------------------------------------------
-# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions
-# This should allow you to save your results to your own Dataset hosted on HF.
-
-DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ASRLive.csv"
-DATASET_REPO_ID = "awacke1/ASRLive.csv"
-DATA_FILENAME = "ASRLive.csv"
-DATA_FILE = os.path.join("data", DATA_FILENAME)
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-PersistToDataset = False
-#PersistToDataset = True # uncomment to save inference output to ASRLive.csv dataset
-
-if PersistToDataset:
- try:
- hf_hub_download(
- repo_id=DATASET_REPO_ID,
- filename=DATA_FILENAME,
- cache_dir="data",
- force_filename=DATA_FILENAME
- )
- except:
- print("file not found")
- repo = Repository(
- local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
- )
-
-def store_message(name: str, message: str):
- if name and message:
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"])
- writer.writerow(
- {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())}
- )
- # uncomment line below to begin saving -
- commit_url = repo.push_to_hub()
- ret = ""
- with open(DATA_FILE, "r") as csvfile:
- reader = csv.DictReader(csvfile)
-
- for row in reader:
- ret += row
- ret += "\r\n"
- return ret
-
-# main -------------------------
-mname = "facebook/blenderbot-400M-distill"
-model = BlenderbotForConditionalGeneration.from_pretrained(mname)
-tokenizer = BlenderbotTokenizer.from_pretrained(mname)
-
-def take_last_tokens(inputs, note_history, history):
- filterTokenCount = 128 # filter last 128 tokens
- if inputs['input_ids'].shape[1] > filterTokenCount:
- inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-filterTokenCount:].tolist()])
- inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-filterTokenCount:].tolist()])
- note_history = ['</s> <s>'.join(note_history[0].split('</s> <s>')[2:])]
- history = history[1:]
- return inputs, note_history, history
-
-def add_note_to_history(note, note_history):
- note_history.append(note)
- note_history = '</s> <s>'.join(note_history)
- return [note_history]
-
-
-
-SAMPLE_RATE = 16000
-model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge")
-model.change_decoding_strategy(None)
-model.eval()
-
-def process_audio_file(file):
- data, sr = librosa.load(file)
- if sr != SAMPLE_RATE:
- data = librosa.resample(data, orig_sr=sr, target_sr=SAMPLE_RATE)
- data = librosa.to_mono(data)
- return data
-
-
-def transcribe(audio, state = ""):
- if state is None:
- state = ""
- audio_data = process_audio_file(audio)
- with tempfile.TemporaryDirectory() as tmpdir:
- audio_path = os.path.join(tmpdir, f'audio_{uuid.uuid4()}.wav')
- soundfile.write(audio_path, audio_data, SAMPLE_RATE)
- transcriptions = model.transcribe([audio_path])
- if type(transcriptions) == tuple and len(transcriptions) == 2:
- transcriptions = transcriptions[0]
- transcriptions = transcriptions[0]
-
- if PersistToDataset:
- ret = store_message(transcriptions, state) # Save to dataset - uncomment to store into a dataset - hint you will need your HF_TOKEN
- state = state + transcriptions + " " + ret
- else:
- state = state + transcriptions
- return state, state
-
-gr.Interface(
- fn=transcribe,
- inputs=[
- gr.Audio(source="microphone", type='filepath', streaming=True),
- "state",
- ],
- outputs=[
- "textbox",
- "state"
- ],
- layout="horizontal",
- theme="huggingface",
- title="🗣️ASR-Gradio-Live🧠💾",
- description=f"Live Automatic Speech Recognition (ASR).",
- allow_flagging='never',
- live=True,
- article=f"Result💾 Dataset: [{DATASET_REPO_URL}]({DATASET_REPO_URL})"
-).launch(debug=True)
diff --git a/spaces/awacke1/ASR-openai-whisper-base/app.py b/spaces/awacke1/ASR-openai-whisper-base/app.py
deleted file mode 100644
index ddedf9ed0e7c4809c5dea4b633a52d5975f8f4c4..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ASR-openai-whisper-base/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/openai/whisper-base").launch()
\ No newline at end of file
diff --git a/spaces/awacke1/FirestorePersistence/app.py b/spaces/awacke1/FirestorePersistence/app.py
deleted file mode 100644
index 25d864111289f672836321b3e8228edb498673cf..0000000000000000000000000000000000000000
--- a/spaces/awacke1/FirestorePersistence/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import streamlit as st
-import firebase_admin
-from firebase_admin import credentials
-from firebase_admin import firestore
-from datetime import datetime
-
-now = datetime.now() # current date and time
-year = now.strftime("%Y")
-st.write("year:", year)
-month = now.strftime("%m")
-st.write("month:", month)
-day = now.strftime("%d")
-st.write("day:", day)
-time = now.strftime("%H:%M:%S")
-st.write("time:", time)
-date_time = now.strftime("%m/%d/%Y, %H:%M:%S")
-st.write("date and time:",date_time)
-
-@st.experimental_singleton
-def get_db_firestore():
- cred = credentials.Certificate('test.json')
- firebase_admin.initialize_app(cred, {'projectId': u'clinical-nlp-b9117',})
- db = firestore.client()
- return db
-
-#add data to the beastie with a generic reusable upsert function
-def upsert(collection, document, firefield, first, last, born):
- doc_ref = db.collection(collection).document(document)
- doc_ref.set({u'firefield': firefield, u'first': first, u'last': last, u'born': born
-})
-
-#read data back in firecollection
-def selectCollection(collection):
- users_ref = db.collection(collection)
- docs = users_ref.stream()
- for doc in docs:
- st.write(f'{doc.id} => {doc.to_dict()}')
-
-def selectCollectionDocument(collection, document):
- doc_ref = db.collection(collection).document(document)
- doc = doc_ref.get()
- st.write("The id is: ", doc.id)
- st.write("The contents are: ", doc.to_dict())
-
-#add data to the beastie with a generic reusable upsert function
-def upsertoftheminute(collection, document, firefield, first, last, born):
- date_time = now.strftime("%m/%d/%Y, %H:%M")
- doc_ref = db.collection(collection).document(document)
- doc_ref.set({u'firefield': firefield, u'first': first, u'last': last, u'born': date_time,})
-
-
-st.write("singleton stateful connection to cloud firestore")
-st.write(u"spin up some awesome 🤯 - episodic and semantic memory 🧠 for AI - here we come")
-db = get_db_firestore()
-
-# Seed the collections with example records the perceptual/processing agent can store and later read back as memory
-upsert(u'firecollection', u'firedocument', u'users1', u'Ada', u'Lovelace', 1815)
-upsert(u'firecollection', u'firedocument', u'users2', u'Aaron', u'Wacker', 1971)
-upsert(u'firecollection1', u'firedocument3', u'users1', u'2022 - AI, Cognitive and Neuroscience to Assist and Augment Behavioral and Medical Health', u'https://www.youtube.com/watch?v=lvh3g7eszVQ&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L', 2022)
-upsert(u'firecollection2', u'firedocument2', u'users2', u'2022 - AI art sci-fi movies and stories 🎭🎞️🍿 by Aaron Wacker 🎬 🧠 🎨', u'https://www.youtube.com/playlist?list=PLHgX2IExbFotUCOCZgpj-5HZBzXOpFMYc', 2022)
-upsert(u'firecollection3', u'firedocument3', u'users3', u'😶🌫️ 🤯Engineering Consciousness🧠 😶🌫️', u'https://youtu.be/rIpUf-Vy2JA?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=3622', 2022)
-upsert(u'firecollection4', u'firedocument4', u'users4', u'🧠🌳Yggdrasil🌳🧠', u'https://github.com/AaronCWacker/Yggdrasil', 2022)
-
-# its all stored here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces
-
-
-selectCollection(u'firecollection')
-selectCollection(u'firecollection1')
-selectCollection(u'firecollection2')
-selectCollection(u'firecollection3')
-selectCollection(u'firecollection4')
-selectCollectionDocument(u"firecollection", u"firedocument")
-selectCollectionDocument(u"firecollection1", u"firedocument3")
-selectCollectionDocument(u"firecollection3", u"firedocument3")
-
-
-# from https://huggingface.co/spaces/awacke1/RealTimeVoiceASR
-selectCollectionDocument(u"ASRCollection", u"ASRDocument")
-
-
-upsert(u'firecollection4', u'firedocument4', u'users4', u'🧠🌳Yggdrasil🌳🧠', u'https://github.com/AaronCWacker/Yggdrasil', 2022)
-
-# Intent: once per minute, upsert an aggregate document of the fields used in recent activity so shared state-memory events can be replayed
-upsertoftheminute(u'TimeSeries', u'DocumentofMinute', u'TestUser1', u'🧠🌳Yggdrasil🌳🧠', u'https://huggingface.co/spaces/awacke1/FirestorePersistence', 2022)
-selectCollectionDocument(u"TimeSeries", u"DocumentofMinute")
\ No newline at end of file
diff --git a/spaces/awacke1/NLPStoryWriterWithMemory/app.py b/spaces/awacke1/NLPStoryWriterWithMemory/app.py
deleted file mode 100644
index 463e122620440fcafd00fc0582d1b908ab8de7fd..0000000000000000000000000000000000000000
--- a/spaces/awacke1/NLPStoryWriterWithMemory/app.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import gradio as gr
-import os
-
-# PersistDataset -----
-import os
-import csv
-import gradio as gr
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-
-# created new dataset as awacke1/MindfulStory.csv
-DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/MindfulStory.csv"
-DATASET_REPO_ID = "awacke1/MindfulStory.csv"
-DATA_FILENAME = "MindfulStory.csv"
-DATA_FILE = os.path.join("data", DATA_FILENAME)
-HF_TOKEN = os.environ.get("HF_TOKEN")
-# Download dataset repo using hub download
-try:
-    hf_hub_download(
-        repo_id=DATASET_REPO_ID,
-        filename=DATA_FILENAME,
-        cache_dir="data",  # pre-fetch into the same local "data" dir that the Repository clone below uses
-        force_filename=DATA_FILENAME
-    )
-except Exception:
-    print("file not found")
-
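-# Append one (title, story, timestamp) row to the local CSV, then push the cloned dataset repo to the Hub.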
-def AIMemory(title: str, story: str):
- if title and story:
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["title", "story", "time"])
- writer.writerow({"title": title, "story": story, "time": str(datetime.now())})
- commit_url = repo.push_to_hub()
- return ""
-
-
-# Set up cloned dataset from repo for operations
-repo = Repository(
- local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
-)
-
-generator1 = gr.Interface.load("huggingface/gpt2-large", api_key=HF_TOKEN)
-generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B", api_key=HF_TOKEN)
-generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", api_key=HF_TOKEN)
-
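-# The "calculator" reuses the four-operator Gradio demo shape: each operator chains the three text
-# generators differently, saves the result via AIMemory, and (for subtract/divide) strips the prompts
-# from the generated output before returning it.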
-def calculator(intro, operator, outro):
- if operator == "add":
- output = generator2(intro) + generator3(outro)
- title = intro + " " + outro
- saved = AIMemory(title, output)
- return output
- elif operator == "subtract":
- output = generator2(outro) + generator3(intro)
- title = outro + " " + intro
- saved = AIMemory(title, output)
- output = output.replace(intro, "").replace(outro, "")
- return output
- elif operator == "multiply":
- output = generator1(intro) + generator2(outro) + generator3(intro)
- title = intro + " " + outro + " " + intro
- saved = AIMemory(title, output)
- return output
- elif operator == "divide":
- output = generator1(outro) + generator2(intro) + generator3(outro)
- title = outro + " " + intro + " " + outro
- saved = AIMemory(title, output)
- output = output.replace(intro, "").replace(outro, "")
- return output
-
-#with open('Mindfulness.txt', 'r') as file:
-# context = file.read()
-#contextBox = gr.Textbox(lines=3, default=context, label="Story starter")
-
-examples = [
- ["Music and art make me feel", "add", "Path to Health and Happiness"],
- ["Feel better each day when you awake by", "add", "Mental Body Scan"],
- ["Feel better physically by", "add", "Stretch, Calm, Breath"],
- ["Practicing mindfulness each day", "add", "Walk Feel"],
- ["Be happier by", "add", "Brain gamification"],
- ["Meditation can improve health", "add", "Deep Breaths"],
- ["Spending time outdoors", "add", "Find Joy"],
- ["Stress is relieved by quieting your mind, getting exercise and time with nature", "add", "Relieve Pain"],
- ["Break the cycle of stress and anxiety", "add", "Yoga and Meditation"],
- ["Feel calm in stressful situations", "add", "Neocortex Tools and Techniques"],
- ["Deal with work pressure", "add", "Strengthen Attention"],
-    ["Learn to reduce feelings of being overwhelmed", "add", "Easy Daily Activities"]
-]
-
-demo = gr.Interface(
- calculator,
- [
- "text",
- gr.Radio(["add", "subtract", "multiply", "divide"]),
- "text"
- ],
- "text",
- examples=examples,
-    article="Saved story memory dataset: https://huggingface.co/datasets/awacke1/MindfulStory.csv. Text-generation models to try: https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads",
- live=True,
-)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js
deleted file mode 100644
index 5b69be5aae03edb7be84df6398fb28e66c331086..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js
+++ /dev/null
@@ -1,14 +0,0 @@
-/**
- * dat-gui JavaScript Controller Library
- * https://github.com/dataarts/dat.gui
- *
- * Copyright 2016 Data Arts Team, Google Creative Lab
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- */
-!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.dat=t():e.dat=t()}(this,function(){return function(e){function t(o){if(n[o])return n[o].exports;var i=n[o]={exports:{},id:o,loaded:!1};return e[o].call(i.exports,i,i.exports,t),i.loaded=!0,i.exports}var n={};return t.m=e,t.c=n,t.p="",t(0)}([function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}var i=n(1),r=o(i);e.exports=r["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(2),r=o(i),a=n(6),l=o(a),s=n(3),u=o(s),d=n(7),c=o(d),f=n(8),_=o(f),p=n(10),h=o(p),m=n(11),b=o(m),g=n(12),v=o(g),y=n(13),w=o(y),x=n(14),E=o(x),C=n(15),A=o(C),S=n(16),k=o(S),O=n(9),T=o(O),R=n(17),L=o(R);t["default"]={color:{Color:r["default"],math:l["default"],interpret:u["default"]},controllers:{Controller:c["default"],BooleanController:_["default"],OptionController:h["default"],StringController:b["default"],NumberController:v["default"],NumberControllerBox:w["default"],NumberControllerSlider:E["default"],FunctionController:A["default"],ColorController:k["default"]},dom:{dom:T["default"]},gui:{GUI:L["default"]},GUI:L["default"]}},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t,n){Object.defineProperty(e,t,{get:function(){return"RGB"===this.__state.space?this.__state[t]:(h.recalculateRGB(this,t,n),this.__state[t])},set:function(e){"RGB"!==this.__state.space&&(h.recalculateRGB(this,t,n),this.__state.space="RGB"),this.__state[t]=e}})}function a(e,t){Object.defineProperty(e,t,{get:function(){return"HSV"===this.__state.space?this.__state[t]:(h.recalculateHSV(this),this.__state[t])},set:function(e){"HSV"!==this.__state.space&&(h.recalculateHSV(this),this.__state.space="HSV"),this.__state[t]=e}})}t.__esModule=!0;var l=n(3),s=o(l),u=n(6),d=o(u),c=n(4),f=o(c),_=n(5),p=o(_),h=function(){function e(){if(i(this,e),this.__state=s["default"].apply(this,arguments),this.__state===!1)throw new Error("Failed to interpret color arguments");this.__state.a=this.__state.a||1}return e.prototype.toString=function(){return(0,f["default"])(this)},e.prototype.toHexString=function(){return(0,f["default"])(this,!0)},e.prototype.toOriginal=function(){return this.__state.conversion.write(this)},e}();h.recalculateRGB=function(e,t,n){if("HEX"===e.__state.space)e.__state[t]=d["default"].component_from_hex(e.__state.hex,n);else{if("HSV"!==e.__state.space)throw new Error("Corrupted color state");p["default"].extend(e.__state,d["default"].hsv_to_rgb(e.__state.h,e.__state.s,e.__state.v))}},h.recalculateHSV=function(e){var t=d["default"].rgb_to_hsv(e.r,e.g,e.b);p["default"].extend(e.__state,{s:t.s,v:t.v}),p["default"].isNaN(t.h)?p["default"].isUndefined(e.__state.h)&&(e.__state.h=0):e.__state.h=t.h},h.COMPONENTS=["r","g","b","h","s","v","hex","a"],r(h.prototype,"r",2),r(h.prototype,"g",1),r(h.prototype,"b",0),a(h.prototype,"h"),a(h.prototype,"s"),a(h.prototype,"v"),Object.defineProperty(h.prototype,"a",{get:function(){return this.__state.a},set:function(e){this.__state.a=e}}),Object.defineProperty(h.prototype,"hex",{get:function(){return"HEX"!==!this.__state.space&&(this.__state.hex=d["default"].rgb_to_hex(this.r,this.g,this.b)),this.__state.hex},set:function(e){this.__state.space="HEX",this.__state.hex=e}}),t["default"]=h},function(e,t,n){"use strict";function 
o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(4),r=o(i),a=n(5),l=o(a),s=[{litmus:l["default"].isString,conversions:{THREE_CHAR_HEX:{read:function(e){var t=e.match(/^#([A-F0-9])([A-F0-9])([A-F0-9])$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString()+t[1].toString()+t[2].toString()+t[2].toString()+t[3].toString()+t[3].toString(),0)}},write:r["default"]},SIX_CHAR_HEX:{read:function(e){var t=e.match(/^#([A-F0-9]{6})$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString(),0)}},write:r["default"]},CSS_RGB:{read:function(e){var t=e.match(/^rgb\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3])}},write:r["default"]},CSS_RGBA:{read:function(e){var t=e.match(/^rgba\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3]),a:parseFloat(t[4])}},write:r["default"]}}},{litmus:l["default"].isNumber,conversions:{HEX:{read:function(e){return{space:"HEX",hex:e,conversionName:"HEX"}},write:function(e){return e.hex}}}},{litmus:l["default"].isArray,conversions:{RGB_ARRAY:{read:function(e){return 3===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2]}},write:function(e){return[e.r,e.g,e.b]}},RGBA_ARRAY:{read:function(e){return 4===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2],a:e[3]}},write:function(e){return[e.r,e.g,e.b,e.a]}}}},{litmus:l["default"].isObject,conversions:{RGBA_OBJ:{read:function(e){return!!(l["default"].isNumber(e.r)&&l["default"].isNumber(e.g)&&l["default"].isNumber(e.b)&&l["default"].isNumber(e.a))&&{space:"RGB",r:e.r,g:e.g,b:e.b,a:e.a}},write:function(e){return{r:e.r,g:e.g,b:e.b,a:e.a}}},RGB_OBJ:{read:function(e){return!!(l["default"].isNumber(e.r)&&l["default"].isNumber(e.g)&&l["default"].isNumber(e.b))&&{space:"RGB",r:e.r,g:e.g,b:e.b}},write:function(e){return{r:e.r,g:e.g,b:e.b}}},HSVA_OBJ:{read:function(e){return!!(l["default"].isNumber(e.h)&&l["default"].isNumber(e.s)&&l["default"].isNumber(e.v)&&l["default"].isNumber(e.a))&&{space:"HSV",h:e.h,s:e.s,v:e.v,a:e.a}},write:function(e){return{h:e.h,s:e.s,v:e.v,a:e.a}}},HSV_OBJ:{read:function(e){return!!(l["default"].isNumber(e.h)&&l["default"].isNumber(e.s)&&l["default"].isNumber(e.v))&&{space:"HSV",h:e.h,s:e.s,v:e.v}},write:function(e){return{h:e.h,s:e.s,v:e.v}}}}}],u=void 0,d=void 0,c=function(){d=!1;var e=arguments.length>1?l["default"].toArray(arguments):arguments[0];return l["default"].each(s,function(t){if(t.litmus(e))return l["default"].each(t.conversions,function(t,n){if(u=t.read(e),d===!1&&u!==!1)return d=u,u.conversionName=n,u.conversion=t,l["default"].BREAK}),l["default"].BREAK}),d};t["default"]=c},function(e,t){"use strict";t.__esModule=!0,t["default"]=function(e,t){var n=e.__state.conversionName.toString(),o=Math.round(e.r),i=Math.round(e.g),r=Math.round(e.b),a=e.a,l=Math.round(e.h),s=e.s.toFixed(1),u=e.v.toFixed(1);if(t||"THREE_CHAR_HEX"===n||"SIX_CHAR_HEX"===n){for(var d=e.hex.toString(16);d.length<6;)d="0"+d;return"#"+d}return"CSS_RGB"===n?"rgb("+o+","+i+","+r+")":"CSS_RGBA"===n?"rgba("+o+","+i+","+r+","+a+")":"HEX"===n?"0x"+e.hex.toString(16):"RGB_ARRAY"===n?"["+o+","+i+","+r+"]":"RGBA_ARRAY"===n?"["+o+","+i+","+r+","+a+"]":"RGB_OBJ"===n?"{r:"+o+",g:"+i+",b:"+r+"}":"RGBA_OBJ"===n?"{r:"+o+",g:"+i+",b:"+r+",a:"+a+"}":"HSV_OBJ"===n?"{h:"+l+",s:"+s+",v:"+u+"}":"HSVA_OBJ"===n?"{h:"+l+",s:"+s+",v:"+u+",a:"+a+"}":"unknown format"}},function(e,t){"use strict";t.__esModule=!0;var 
n=Array.prototype.forEach,o=Array.prototype.slice,i={BREAK:{},extend:function(e){return this.each(o.call(arguments,1),function(t){var n=this.isObject(t)?Object.keys(t):[];n.forEach(function(n){this.isUndefined(t[n])||(e[n]=t[n])}.bind(this))},this),e},defaults:function(e){return this.each(o.call(arguments,1),function(t){var n=this.isObject(t)?Object.keys(t):[];n.forEach(function(n){this.isUndefined(e[n])&&(e[n]=t[n])}.bind(this))},this),e},compose:function(){var e=o.call(arguments);return function(){for(var t=o.call(arguments),n=e.length-1;n>=0;n--)t=[e[n].apply(this,t)];return t[0]}},each:function(e,t,o){if(e)if(n&&e.forEach&&e.forEach===n)e.forEach(t,o);else if(e.length===e.length+0){var i=void 0,r=void 0;for(i=0,r=e.length;i>8*t&255},hex_with_component:function(e,t,o){return o<<(n=8*t)|e&~(255<-1?t.length-t.indexOf(".")-1:0}t.__esModule=!0;var s=n(7),u=o(s),d=n(5),c=o(d),f=function(e){function t(n,o,a){i(this,t);var s=r(this,e.call(this,n,o)),u=a||{};return s.__min=u.min,s.__max=u.max,s.__step=u.step,c["default"].isUndefined(s.__step)?0===s.initialValue?s.__impliedStep=1:s.__impliedStep=Math.pow(10,Math.floor(Math.log(Math.abs(s.initialValue))/Math.LN10))/10:s.__impliedStep=s.__step,s.__precision=l(s.__impliedStep),s}return a(t,e),t.prototype.setValue=function(t){var n=t;return void 0!==this.__min&&nthis.__max&&(n=this.__max),void 0!==this.__step&&n%this.__step!==0&&(n=Math.round(n/this.__step)*this.__step),e.prototype.setValue.call(this,n)},t.prototype.min=function(e){return this.__min=e,this},t.prototype.max=function(e){return this.__max=e,this},t.prototype.step=function(e){return this.__step=e,this.__impliedStep=e,this.__precision=l(e),this},t}(u["default"]);t["default"]=f},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t){var n=Math.pow(10,t);return Math.round(e*n)/n}t.__esModule=!0;var s=n(12),u=o(s),d=n(9),c=o(d),f=n(5),_=o(f),p=function(e){function t(n,o,a){function l(){var e=parseFloat(m.__input.value);_["default"].isNaN(e)||m.setValue(e)}function s(){m.__onFinishChange&&m.__onFinishChange.call(m,m.getValue())}function u(){s()}function d(e){var t=b-e.clientY;m.setValue(m.getValue()+t*m.__impliedStep),b=e.clientY}function f(){c["default"].unbind(window,"mousemove",d),c["default"].unbind(window,"mouseup",f),s()}function p(e){c["default"].bind(window,"mousemove",d),c["default"].bind(window,"mouseup",f),b=e.clientY}i(this,t);var h=r(this,e.call(this,n,o,a));h.__truncationSuspended=!1;var m=h,b=void 0;return h.__input=document.createElement("input"),h.__input.setAttribute("type","text"),c["default"].bind(h.__input,"change",l),c["default"].bind(h.__input,"blur",u),c["default"].bind(h.__input,"mousedown",p),c["default"].bind(h.__input,"keydown",function(e){13===e.keyCode&&(m.__truncationSuspended=!0,this.blur(),m.__truncationSuspended=!1,s())}),h.updateDisplay(),h.domElement.appendChild(h.__input),h}return a(t,e),t.prototype.updateDisplay=function(){return 
this.__input.value=this.__truncationSuspended?this.getValue():l(this.getValue(),this.__precision),e.prototype.updateDisplay.call(this)},t}(u["default"]);t["default"]=p},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t,n,o,i){return o+(i-o)*((e-t)/(n-t))}t.__esModule=!0;var s=n(12),u=o(s),d=n(9),c=o(d),f=function(e){function t(n,o,a,s,u){function d(e){document.activeElement.blur(),c["default"].bind(window,"mousemove",f),c["default"].bind(window,"mouseup",_),f(e)}function f(e){e.preventDefault();var t=h.__background.getBoundingClientRect();return h.setValue(l(e.clientX,t.left,t.right,h.__min,h.__max)),!1}function _(){c["default"].unbind(window,"mousemove",f),c["default"].unbind(window,"mouseup",_),h.__onFinishChange&&h.__onFinishChange.call(h,h.getValue())}i(this,t);var p=r(this,e.call(this,n,o,{min:a,max:s,step:u})),h=p;return p.__background=document.createElement("div"),p.__foreground=document.createElement("div"),c["default"].bind(p.__background,"mousedown",d),c["default"].addClass(p.__background,"slider"),c["default"].addClass(p.__foreground,"slider-fg"),p.updateDisplay(),p.__background.appendChild(p.__foreground),p.domElement.appendChild(p.__background),p}return a(t,e),t.prototype.updateDisplay=function(){var t=(this.getValue()-this.__min)/(this.__max-this.__min);return this.__foreground.style.width=100*t+"%",e.prototype.updateDisplay.call(this)},t}(u["default"]);t["default"]=f},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}t.__esModule=!0;var l=n(7),s=o(l),u=n(9),d=o(u),c=function(e){function t(n,o,a){i(this,t);var l=r(this,e.call(this,n,o)),s=l;return l.__button=document.createElement("div"),l.__button.innerHTML=void 0===a?"Fire":a,d["default"].bind(l.__button,"click",function(e){return e.preventDefault(),s.fire(),!1}),d["default"].addClass(l.__button,"button"),l.domElement.appendChild(l.__button),l}return a(t,e),t.prototype.fire=function(){this.__onChange&&this.__onChange.call(this),this.getValue().call(this.object),this.__onFinishChange&&this.__onFinishChange.call(this,this.getValue())},t}(s["default"]);t["default"]=c},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() 
hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t,n,o){e.style.background="",g["default"].each(y,function(i){e.style.cssText+="background: "+i+"linear-gradient("+t+", "+n+" 0%, "+o+" 100%); "})}function s(e){e.style.background="",e.style.cssText+="background: -moz-linear-gradient(top, #ff0000 0%, #ff00ff 17%, #0000ff 34%, #00ffff 50%, #00ff00 67%, #ffff00 84%, #ff0000 100%);",e.style.cssText+="background: -webkit-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -o-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -ms-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);"}t.__esModule=!0;var u=n(7),d=o(u),c=n(9),f=o(c),_=n(2),p=o(_),h=n(3),m=o(h),b=n(5),g=o(b),v=function(e){function t(n,o){function a(e){h(e),f["default"].bind(window,"mousemove",h),f["default"].bind(window,"mouseup",u)}function u(){f["default"].unbind(window,"mousemove",h),f["default"].unbind(window,"mouseup",u),_()}function d(){var e=(0,m["default"])(this.value);e!==!1?(y.__color.__state=e,y.setValue(y.__color.toOriginal())):this.value=y.__color.toString()}function c(){f["default"].unbind(window,"mousemove",b),f["default"].unbind(window,"mouseup",c),_()}function _(){y.__onFinishChange&&y.__onFinishChange.call(y,y.__color.toOriginal())}function h(e){e.preventDefault();var t=y.__saturation_field.getBoundingClientRect(),n=(e.clientX-t.left)/(t.right-t.left),o=1-(e.clientY-t.top)/(t.bottom-t.top);return o>1?o=1:o<0&&(o=0),n>1?n=1:n<0&&(n=0),y.__color.v=o,y.__color.s=n,y.setValue(y.__color.toOriginal()),!1}function b(e){e.preventDefault();var t=y.__hue_field.getBoundingClientRect(),n=1-(e.clientY-t.top)/(t.bottom-t.top);return n>1?n=1:n<0&&(n=0),y.__color.h=360*n,y.setValue(y.__color.toOriginal()),!1}i(this,t);var v=r(this,e.call(this,n,o));v.__color=new p["default"](v.getValue()),v.__temp=new p["default"](0);var y=v;v.domElement=document.createElement("div"),f["default"].makeSelectable(v.domElement,!1),v.__selector=document.createElement("div"),v.__selector.className="selector",v.__saturation_field=document.createElement("div"),v.__saturation_field.className="saturation-field",v.__field_knob=document.createElement("div"),v.__field_knob.className="field-knob",v.__field_knob_border="2px solid ",v.__hue_knob=document.createElement("div"),v.__hue_knob.className="hue-knob",v.__hue_field=document.createElement("div"),v.__hue_field.className="hue-field",v.__input=document.createElement("input"),v.__input.type="text",v.__input_textShadow="0 1px 1px ",f["default"].bind(v.__input,"keydown",function(e){13===e.keyCode&&d.call(this)}),f["default"].bind(v.__input,"blur",d),f["default"].bind(v.__selector,"mousedown",function(){f["default"].addClass(this,"drag").bind(window,"mouseup",function(){f["default"].removeClass(y.__selector,"drag")})});var w=document.createElement("div");return 
g["default"].extend(v.__selector.style,{width:"122px",height:"102px",padding:"3px",backgroundColor:"#222",boxShadow:"0px 1px 3px rgba(0,0,0,0.3)"}),g["default"].extend(v.__field_knob.style,{position:"absolute",width:"12px",height:"12px",border:v.__field_knob_border+(v.__color.v<.5?"#fff":"#000"),boxShadow:"0px 1px 3px rgba(0,0,0,0.5)",borderRadius:"12px",zIndex:1}),g["default"].extend(v.__hue_knob.style,{position:"absolute",width:"15px",height:"2px",borderRight:"4px solid #fff",zIndex:1}),g["default"].extend(v.__saturation_field.style,{width:"100px",height:"100px",border:"1px solid #555",marginRight:"3px",display:"inline-block",cursor:"pointer"}),g["default"].extend(w.style,{width:"100%",height:"100%",background:"none"}),l(w,"top","rgba(0,0,0,0)","#000"),g["default"].extend(v.__hue_field.style,{width:"15px",height:"100px",border:"1px solid #555",cursor:"ns-resize",position:"absolute",top:"3px",right:"3px"}),s(v.__hue_field),g["default"].extend(v.__input.style,{outline:"none",textAlign:"center",color:"#fff",border:0,fontWeight:"bold",textShadow:v.__input_textShadow+"rgba(0,0,0,0.7)"}),f["default"].bind(v.__saturation_field,"mousedown",a),f["default"].bind(v.__field_knob,"mousedown",a),f["default"].bind(v.__hue_field,"mousedown",function(e){b(e),f["default"].bind(window,"mousemove",b),f["default"].bind(window,"mouseup",c)}),v.__saturation_field.appendChild(w),v.__selector.appendChild(v.__field_knob),v.__selector.appendChild(v.__saturation_field),v.__selector.appendChild(v.__hue_field),v.__hue_field.appendChild(v.__hue_knob),v.domElement.appendChild(v.__input),v.domElement.appendChild(v.__selector),v.updateDisplay(),v}return a(t,e),t.prototype.updateDisplay=function(){var e=(0,m["default"])(this.getValue());if(e!==!1){var t=!1;g["default"].each(p["default"].COMPONENTS,function(n){if(!g["default"].isUndefined(e[n])&&!g["default"].isUndefined(this.__color.__state[n])&&e[n]!==this.__color.__state[n])return t=!0,{}},this),t&&g["default"].extend(this.__color.__state,e)}g["default"].extend(this.__temp.__state,this.__color.__state),this.__temp.a=1;var n=this.__color.v<.5||this.__color.s>.5?255:0,o=255-n;g["default"].extend(this.__field_knob.style,{marginLeft:100*this.__color.s-7+"px",marginTop:100*(1-this.__color.v)-7+"px",backgroundColor:this.__temp.toHexString(),border:this.__field_knob_border+"rgb("+n+","+n+","+n+")"}),this.__hue_knob.style.marginTop=100*(1-this.__color.h/360)+"px",this.__temp.s=1,this.__temp.v=1,l(this.__saturation_field,"left","#fff",this.__temp.toHexString()),this.__input.value=this.__color.toString(),g["default"].extend(this.__input.style,{backgroundColor:this.__color.toHexString(),color:"rgb("+n+","+n+","+n+")",textShadow:this.__input_textShadow+"rgba("+o+","+o+","+o+",.7)"})},t}(d["default"]),y=["-moz-","-o-","-webkit-","-ms-",""];t["default"]=v},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t,n){var o=document.createElement("li");return t&&o.appendChild(t),n?e.__ul.insertBefore(o,n):e.__ul.appendChild(o),e.onResize(),o}function r(e,t){var n=e.__preset_select[e.__preset_select.selectedIndex];t?n.innerHTML=n.value+"*":n.innerHTML=n.value}function a(e,t,n){if(n.__li=t,n.__gui=e,U["default"].extend(n,{options:function(t){if(arguments.length>1){var o=n.__li.nextElementSibling;return n.remove(),s(e,n.object,n.property,{before:o,factoryArgs:[U["default"].toArray(arguments)]})}if(U["default"].isArray(t)||U["default"].isObject(t)){var i=n.__li.nextElementSibling;return 
n.remove(),s(e,n.object,n.property,{before:i,factoryArgs:[t]})}},name:function(e){return n.__li.firstElementChild.firstElementChild.innerHTML=e,n},listen:function(){return n.__gui.listen(n),n},remove:function(){
-return n.__gui.remove(n),n}}),n instanceof B["default"])!function(){var e=new N["default"](n.object,n.property,{min:n.__min,max:n.__max,step:n.__step});U["default"].each(["updateDisplay","onChange","onFinishChange","step"],function(t){var o=n[t],i=e[t];n[t]=e[t]=function(){var t=Array.prototype.slice.call(arguments);return i.apply(e,t),o.apply(n,t)}}),z["default"].addClass(t,"has-slider"),n.domElement.insertBefore(e.domElement,n.domElement.firstElementChild)}();else if(n instanceof N["default"]){var o=function(t){if(U["default"].isNumber(n.__min)&&U["default"].isNumber(n.__max)){var o=n.__li.firstElementChild.firstElementChild.innerHTML,i=n.__gui.__listening.indexOf(n)>-1;n.remove();var r=s(e,n.object,n.property,{before:n.__li.nextElementSibling,factoryArgs:[n.__min,n.__max,n.__step]});return r.name(o),i&&r.listen(),r}return t};n.min=U["default"].compose(o,n.min),n.max=U["default"].compose(o,n.max)}else n instanceof O["default"]?(z["default"].bind(t,"click",function(){z["default"].fakeEvent(n.__checkbox,"click")}),z["default"].bind(n.__checkbox,"click",function(e){e.stopPropagation()})):n instanceof R["default"]?(z["default"].bind(t,"click",function(){z["default"].fakeEvent(n.__button,"click")}),z["default"].bind(t,"mouseover",function(){z["default"].addClass(n.__button,"hover")}),z["default"].bind(t,"mouseout",function(){z["default"].removeClass(n.__button,"hover")})):n instanceof j["default"]&&(z["default"].addClass(t,"color"),n.updateDisplay=U["default"].compose(function(e){return t.style.borderLeftColor=n.__color.toString(),e},n.updateDisplay),n.updateDisplay());n.setValue=U["default"].compose(function(t){return e.getRoot().__preset_select&&n.isModified()&&r(e.getRoot(),!0),t},n.setValue)}function l(e,t){var n=e.getRoot(),o=n.__rememberedObjects.indexOf(t.object);if(o!==-1){var i=n.__rememberedObjectIndecesToControllers[o];if(void 0===i&&(i={},n.__rememberedObjectIndecesToControllers[o]=i),i[t.property]=t,n.load&&n.load.remembered){var r=n.load.remembered,a=void 0;if(r[e.preset])a=r[e.preset];else{if(!r[Q])return;a=r[Q]}if(a[o]&&void 0!==a[o][t.property]){var l=a[o][t.property];t.initialValue=l,t.setValue(l)}}}}function s(e,t,n,o){if(void 0===t[n])throw new Error('Object "'+t+'" has no property "'+n+'"');var r=void 0;if(o.color)r=new j["default"](t,n);else{var s=[t,n].concat(o.factoryArgs);r=C["default"].apply(e,s)}o.before instanceof S["default"]&&(o.before=o.before.__li),l(e,r),z["default"].addClass(r.domElement,"c");var u=document.createElement("span");z["default"].addClass(u,"property-name"),u.innerHTML=r.property;var d=document.createElement("div");d.appendChild(u),d.appendChild(r.domElement);var c=i(e,d,o.before);return z["default"].addClass(c,oe.CLASS_CONTROLLER_ROW),r instanceof j["default"]?z["default"].addClass(c,"color"):z["default"].addClass(c,g(r.getValue())),a(e,c,r),e.__controllers.push(r),r}function u(e,t){return document.location.href+"."+t}function d(e,t,n){var o=document.createElement("option");o.innerHTML=t,o.value=t,e.__preset_select.appendChild(o),n&&(e.__preset_select.selectedIndex=e.__preset_select.length-1)}function c(e,t){t.style.display=e.useLocalStorage?"block":"none"}function f(e){var t=e.__save_row=document.createElement("li");z["default"].addClass(e.domElement,"has-save"),e.__ul.insertBefore(t,e.__ul.firstChild),z["default"].addClass(t,"save-row");var n=document.createElement("span");n.innerHTML=" ",z["default"].addClass(n,"button gears");var 
o=document.createElement("span");o.innerHTML="Save",z["default"].addClass(o,"button"),z["default"].addClass(o,"save");var i=document.createElement("span");i.innerHTML="New",z["default"].addClass(i,"button"),z["default"].addClass(i,"save-as");var r=document.createElement("span");r.innerHTML="Revert",z["default"].addClass(r,"button"),z["default"].addClass(r,"revert");var a=e.__preset_select=document.createElement("select");e.load&&e.load.remembered?U["default"].each(e.load.remembered,function(t,n){d(e,n,n===e.preset)}):d(e,Q,!1),z["default"].bind(a,"change",function(){for(var t=0;t0&&(e.preset=this.preset,e.remembered||(e.remembered={}),e.remembered[this.preset]=h(this)),e.folders={},U["default"].each(this.__folders,function(t,n){e.folders[n]=t.getSaveObject()}),e},save:function(){this.load.remembered||(this.load.remembered={}),this.load.remembered[this.preset]=h(this),r(this,!1),this.saveToLocalStorageIfPossible()},saveAs:function(e){this.load.remembered||(this.load.remembered={},this.load.remembered[Q]=h(this,!0)),this.load.remembered[e]=h(this),this.preset=e,d(this,e,!0),this.saveToLocalStorageIfPossible()},revert:function(e){U["default"].each(this.__controllers,function(t){this.getRoot().load.remembered?l(e||this.getRoot(),t):t.setValue(t.initialValue),t.__onFinishChange&&t.__onFinishChange.call(t,t.getValue())},this),U["default"].each(this.__folders,function(e){e.revert(e)}),e||r(this.getRoot(),!1)},listen:function(e){var t=0===this.__listening.length;this.__listening.push(e),t&&b(this.__listening)},updateDisplay:function(){U["default"].each(this.__controllers,function(e){e.updateDisplay()}),U["default"].each(this.__folders,function(e){e.updateDisplay()})}}),e.exports=oe},function(e,t){"use strict";e.exports={load:function(e,t){var n=t||document,o=n.createElement("link");o.type="text/css",o.rel="stylesheet",o.href=e,n.getElementsByTagName("head")[0].appendChild(o)},inject:function(e,t){var n=t||document,o=document.createElement("style");o.type="text/css",o.innerHTML=e;var i=n.getElementsByTagName("head")[0];try{i.appendChild(o)}catch(r){}}}},function(e,t){e.exports="