diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Accessdata Password Recovery Toolkit Crack Pros and Cons of Using It for Password Recovery.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Accessdata Password Recovery Toolkit Crack Pros and Cons of Using It for Password Recovery.md
deleted file mode 100644
index f350091ca36b5488a2ecfb6ce8b287610965825c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Accessdata Password Recovery Toolkit Crack Pros and Cons of Using It for Password Recovery.md
+++ /dev/null
@@ -1,128 +0,0 @@
-

Accessdata Password Recovery Toolkit Crack: What You Need to Know

-

If you need to gain access to password-protected files, you might have heard of Accessdata Password Recovery Toolkit (PRTK), software that can recover passwords from encrypted files and containers. But what if you don't have a license for PRTK? Can you use a crack to bypass the activation process and use the software for free?

-

In this article, we will explain what Accessdata Password Recovery Toolkit is, how it works, how to use it, and why you should avoid using a crack for it. We will also provide some alternatives to using a crack that are safer and more reliable.

-

-

How Accessdata Password Recovery Toolkit Works

-

Accessdata Password Recovery Toolkit is a part of Accessdata's Forensic Toolkit (FTK), a suite of tools for digital forensics and incident response. PRTK can recover passwords from various types of encrypted files and containers, such as MS Word, PDF, TrueCrypt, BitLocker, ZIP, RAR, and many more.

-

PRTK uses different methods to recover passwords, such as brute-force, dictionary, rainbow tables, known-plaintext, and hybrid attacks. It can also create custom dictionaries and profiles based on the characteristics of the target file or container. PRTK can run multiple password recovery attacks simultaneously on different files or containers, using multiple processors and GPUs.

-

PRTK can also integrate with other tools in FTK, such as FTK Imager, Registry Viewer, FTK Lab, and AD Enterprise. This allows you to perform comprehensive analysis and investigation on encrypted data.

-

How to Use Accessdata Password Recovery Toolkit

-

How to Install and Initialize the Software

-

To use Accessdata Password Recovery Toolkit, you need to have a valid license for FTK. You can purchase a license from Exterro, the company that acquired Accessdata in 2020. You can also request a free trial or a demo from their website.

-

Once you have a license, you can download the software from Exterro's website. You will need to register an account and provide your license information. You will also need to download FTK Imager, which is required for PRTK.

-

After downloading the software, you need to install it on your computer. You will need administrator privileges to do so. You will also need to activate the software with your license key. You can do this online or offline.

-

Once the software is installed and activated, you need to initialize it for first use. You will need to configure some settings, such as the location of your dictionaries, rainbow tables, profiles, logs, etc. You will also need to update your software regularly to get the latest features and fixes.

-


-

How to Identify Encrypted Files with FTK

-

Before you can recover passwords from encrypted files or containers, you need to identify them first. You can use FTK Imager to scan your hard drive or an image file for encrypted files or containers. FTK Imager can detect various types of encryption algorithms and formats.

-

To use FTK Imager, you need to launch it from the Start menu or the desktop shortcut. You will see a window with four tabs: Evidence Tree, File List, Gallery View, and Hex View. You can use these tabs to view different aspects of your data.

-

To scan for encrypted files or containers, you need to add an evidence item. You can do this by clicking on the File menu and selecting Add Evidence Item. You can choose from different types of evidence items, such as Physical Drive, Logical Drive, Image File, Contents of Folder, etc.

-

After adding an evidence item, you will see it in the Evidence Tree tab. You can expand it by clicking on the plus sign next to it. You will see different partitions or folders under it. You can select any partition or folder and right-click on it. You will see an option called Scan For Encrypted Files/Containers. Click on it.

-

A new window will pop up showing the progress of the scan. The scan may take some time depending on the size of your data. When the scan is complete, you will see a list of encrypted files or containers in the File List tab. You can sort them by name, size, type, encryption algorithm, etc.

-

You can select any encrypted file or container and right-click on it. You will see an option called Export Selected Files/Containers To PRTK Queue File (.pqf). Click on it. This will create a file that contains information about the encrypted file or container that you want to decrypt with PRTK.

-

How to Use the Dictionary Tool in PRTK

-

A dictionary attack is one of the methods that PRTK uses to recover passwords from encrypted files or containers. A dictionary attack tries different words or phrases from a list until it finds the correct password.
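To make the idea concrete, here is a minimal, self-contained sketch of what a dictionary attack does in general. It is a toy illustration only: the wordlist, the SHA-256 hash, and the function name are invented for this example and have nothing to do with PRTK's internal implementation.

```python
import hashlib

# Hash of the "unknown" password we want to recover (toy example).
stored_hash = hashlib.sha256(b"sunshine").hexdigest()

# A tiny dictionary of candidate passwords.
wordlist = ["password", "letmein", "dragon", "sunshine", "qwerty"]

def dictionary_attack(target_hash, candidates):
    # Try each candidate word until one hashes to the stored value.
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None  # no candidate in the list matched

print(dictionary_attack(stored_hash, wordlist))  # prints: sunshine
```

Real tools work on the same principle, but with far larger dictionaries and with the hashing or encryption scheme of the specific file format being attacked.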

-

PRTK comes with some built-in dictionaries that contain common words or phrases that are used as passwords. However, you can also create your own custom dictionaries based on your knowledge of the target file or container.

-

To create a custom dictionary, you need to use the Dictionary Tool in PRTK. You can launch it from the Tools menu or by clicking on the icon that looks like a book in the toolbar.

-

The Dictionary Tool window has two tabs: Create Dictionary and Edit Dictionary. In the Create Dictionary tab, you can create a new dictionary by entering words or phrases in the text box at the bottom. You can also import words or phrases from a text file by clicking on the Import button.

-

You can also modify an existing dictionary by using the Edit Dictionary tab. In this tab, you can open an existing dictionary by clicking on the Open button. You can then add or delete words or phrases from it.

-

After creating or editing a dictionary, you need to save it by clicking on the Save button. You can give it any name you want but make sure it has a .dic extension.

-

How to Use Rules and Profiles in PRTK

-

Rules and profiles are another way that PRTK uses to recover passwords from encrypted files or containers. Rules are sets of instructions that tell PRTK how to modify words or phrases from dictionaries before trying them as passwords. Profiles are combinations of rules that apply different modifications at once.

-

PRTK comes with some built-in rules and profiles that cover common scenarios such as adding numbers or symbols at the end of words or phrases; changing case; replacing letters with numbers; etc.

-

However, you can also create your own custom rules and profiles based on your knowledge of the target file or container.

-

To create a custom rule, you need to use the Rule Editor in PRTK. You can launch it from the Tools menu or by clicking on the icon that looks like a wrench in the toolbar.

-

The Rule Editor window has two tabs: Create Rule and Edit Rule. In the Create Rule tab, you can create a new rule by entering commands in the text box at the bottom. Each command consists of an operator followed by one or more arguments separated by commas.

-

For example:

- $1,2,3 adds the numbers 1, 2, and 3 at the end of the word or phrase
- C changes the case of the first letter of the word or phrase
- R1,2 replaces the first letter of the word or phrase with the second letter

You can also use variables to represent different types of characters, such as:

- %l for lowercase letters
- %u for uppercase letters
- %d for digits
- %s for symbols

You can also use modifiers to apply different conditions or operations to the commands, such as:

- ! to negate a command
- ? to make a command optional
- * to repeat a command a random number of times
- + to repeat a command one or more times
- n to repeat a command n times
- n,m to repeat a command between n and m times

For example:

- C?%l+ changes the case of the first letter of the word or phrase and adds one or more lowercase letters at the end
- R%l,%d2 replaces every lowercase letter in the word or phrase with two digits

You can also use parentheses to group commands together and use logical operators to combine them, such as:

- & for AND
- | for OR
- ^ for XOR

For example:

- (C|R%l,%u)&$%d2 applies either changing the case of the first letter or replacing every lowercase letter with an uppercase letter and adds two digits at the end

After creating a rule, you need to save it by clicking on the Save button. You can give it any name you want, but make sure it has a .rul extension.

You can also modify an existing rule by using the Edit Rule tab. In this tab, you can open an existing rule by clicking on the Open button. You can then add or delete commands from it.

To create a custom profile, you need to use the Profile Editor in PRTK. You can launch it from the Tools menu or by clicking on the icon that looks like a folder in the toolbar.

The Profile Editor window has two tabs: Create Profile and Edit Profile. In the Create Profile tab, you can create a new profile by selecting rules from the list on the left and adding them to the list on the right. You can also change the order of the rules by dragging and dropping them.

You can also import rules from a text file by clicking on the Import button. The text file should contain one rule per line with its name and extension.

After creating a profile, you need to save it by clicking on the Save button. You can give it any name you want, but make sure it has a .pro extension.

You can also modify an existing profile by using the Edit Profile tab. In this tab, you can open an existing profile by clicking on the Open button. You can then add or delete rules from it.
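As a rough illustration of what the rule commands described above accomplish, the sketch below generates a few common variations of a dictionary word in Python. The transformations shown (capitalize, append digits, substitute look-alike digits) are a generic, hypothetical example written for this article; they are not PRTK's rule syntax or engine.

```python
def apply_rules(word):
    # Generate simple variations of a dictionary word, the way mangling
    # rules do: change case, append digits, substitute look-alike digits.
    variations = {word}
    variations.add(word.capitalize())                            # change the case of the first letter
    variations.update(word + s for s in ("1", "123", "2023"))    # append digits at the end
    variations.add(word.replace("a", "4").replace("e", "3").replace("o", "0"))  # letter-to-digit substitution
    return variations

for candidate in sorted(apply_rules("monkey")):
    print(candidate)  # each variation would then be tried as a password
```

Chaining many such transformations over a large dictionary is what makes rule-based attacks far more effective than trying the raw wordlist alone.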

How to Decrypt Files and Containers with PRTK

-

After creating or selecting your dictionaries, rules, and profiles, you are ready to use PRTK to decrypt files and containers. To do this, you need to launch PRTK from the Start menu or the desktop shortcut.

-

You will see a window with four tabs: Queue Manager, Attack Manager, Results Manager, and Log Viewer. You can use these tabs to manage your password recovery tasks.

-

To decrypt files and containers with PRTK, you need to add them to the Queue Manager tab. You can do this by clicking on the Add button and selecting one of these options:

- Add Files/Containers: This allows you to browse your computer and select individual files or containers that you want to decrypt.
- Add PQF File: This allows you to select a PQF file that contains information about encrypted files or containers that you want to decrypt. You can create a PQF file using FTK Imager as explained earlier.
- Add Folder: This allows you to select a folder that contains encrypted files or containers that you want to decrypt.

After adding files or containers to the Queue Manager tab, you will see them in a list with some information such as name, size, type, encryption algorithm, etc. You can select any file or container and right-click on it. You will see some options such as:

- Attack: This allows you to start a password recovery attack on the selected file or container.
- Properties: This allows you to view more details about the selected file or container.
- Remove: This allows you to remove the selected file or container from the list.
- Remove All: This allows you to remove all files or containers from the list.

To start a password recovery attack on a file or container, you need to select it and click on the Attack button. A new window will pop up showing different options for your attack. You can choose from different types of attacks such as:

- Brute Force: This tries all possible combinations of characters until it finds the correct password.
- Dictionary: This tries different words or phrases from a list until it finds the correct password.
- Rainbow Tables: This uses precomputed tables of hashes and passwords to find matches.
- Known Plaintext: This uses known parts of plaintext and ciphertext to find patterns.
- Hybrid: This combines different types of attacks together.

You can also choose different dictionaries, rules, and profiles for your attack. You can select from built-in ones or custom ones that you created earlier. You can also adjust some settings for your attack such as:

- Timeout: This sets how long PRTK will try each password before moving on to the next one.
- Threads: This sets how many processors PRTK will use for your attack.
- GPUs: This sets how many graphics cards PRTK will use for your attack.
- Priority: This sets how much CPU power PRTK will use for your attack.

After choosing your options for your attack, you need to click on the Start button. PRTK will start trying different passwords for your file or container. You can monitor your attack in the Attack Manager tab. You will see some information such as status, progress, speed, elapsed time, estimated time left, etc. You can also pause or stop your attack at any time by clicking on the Pause or Stop buttons.

If PRTK finds a password for your file or container, it will show it in green in the Results Manager tab. You will also see some information such as name, size, type, encryption algorithm, password length, etc. You can select any file or container and right-click on it. You will see some options such as:

- Decrypt: This allows you to decrypt your file or container using PRTK.
- Copy Password: This allows you to copy your password to clipboard.
- Export Results: This allows you to export your results to a text file.
- Remove: This allows you to remove your file or container from the list.
- Remove All: This allows you to remove all files or containers from the list.

To decrypt a file or container, select it and click on the Decrypt button. A new window will pop up asking you to select a destination folder for your decrypted file or container. You can also choose to overwrite the original file or container or keep both.
After selecting your destination folder, you need to click on the Decrypt button. PRTK will decrypt your file or container and save it in the destination folder. You can also decrypt your file or container using other tools such as FTK Imager or FTK Lab. You just need to copy the password from PRTK and paste it in the other tool.

The Risks and Challenges of Using a Crack for Accessdata Password Recovery Toolkit

-

As you can see, Accessdata Password Recovery Toolkit is a powerful and useful tool that can help you recover passwords from encrypted files and containers. However, it is not cheap: a license for FTK can cost thousands of dollars per year.

-

That's why some people might be tempted to use a crack for PRTK. A crack is a program that modifies the software to bypass the activation process and use it for free. You can find many cracks for PRTK on the internet, especially on torrent sites.

-

However, using a crack for PRTK is not a good idea. There are many risks and challenges that come with using a crack. Here are some of them:

-

Legal and Ethical Issues

-

Using a crack for PRTK is illegal and unethical. It violates the terms of service and the license agreement of the software. It also infringes on the intellectual property rights of Exterro, the company that owns Accessdata.

-

If you use a crack for PRTK, you could face legal consequences such as fines, lawsuits, or even criminal charges. You could also damage your reputation and credibility as a professional or a student.

-

Moreover, using a crack for PRTK could raise ethical questions about your motives and intentions. Why do you need to recover passwords from encrypted files or containers? Are you authorized to do so? Are you respecting the privacy and security of the owners of those files or containers?

-

Using a crack for PRTK could make you look suspicious and untrustworthy. You could lose the trust and respect of your clients, colleagues, teachers, or peers.

-

Security and Quality Issues

-

Using a crack for PRTK is risky and unreliable. It could expose you to malware and compromise your results.

-

Many cracks for PRTK are infected with viruses, trojans, worms, spyware, ransomware, or other malicious programs. These programs could harm your computer, steal your data, encrypt your files, or demand money from you.

-

Even if the crack for PRTK is not infected with malware, it could still cause problems with your software. It could make it unstable, slow, buggy, or incompatible with other tools. It could also prevent you from updating your software or getting technical support from Exterro.

-

Furthermore, using a crack for PRTK could affect the quality and accuracy of your password recovery results. It could make your software miss some passwords, generate false positives, or corrupt your files or containers.

-

Using a crack for PRTK could jeopardize your work and waste your time and resources.

-

Alternatives to Using a Crack

-

Using a crack for PRTK is not worth it. There are better alternatives that are safer and more reliable.

-

One alternative is to get a legitimate license for FTK. You can purchase a license from Exterro's website or contact them for more information. You can also request a free trial or a demo to test the software before buying it.

-

A legitimate license for FTK will give you access to all the features and benefits of PRTK without any risks or challenges. You will be able to use the software legally and ethically, update it regularly, get technical support from Exterro, and ensure the quality and accuracy of your password recovery results.

-

Another alternative is to use other tools for password recovery that are free or cheaper than FTK. There are many tools available on the internet that can recover passwords from encrypted files or containers. Some examples are:

- John the Ripper: A free and open source password cracker that supports many encryption algorithms and formats.
- Hashcat: A free and open source password recovery tool that uses GPUs to accelerate password cracking.
- Elcomsoft Password Recovery Bundle: A commercial password recovery suite that supports various file types and encryption methods.
- Passware Kit Forensic: A commercial password recovery tool that integrates with FTK Imager and supports many file types and encryption methods.

These tools may not have all the features and capabilities of PRTK, but they can still help you recover passwords from encrypted files or containers in some cases.

Conclusion

-

In conclusion, Accessdata Password Recovery Toolkit is a powerful and useful tool that can recover passwords from encrypted files and containers. However, it is not cheap, which is why some people might be tempted to use a crack for it.

-

However, using a crack for PRTK is not a good idea. There are many risks and challenges that come with using a crack. It is illegal and unethical; it exposes you to malware and compromises your results; it makes you look suspicious and untrustworthy.

-

Instead of using a crack for PRTK, you should consider getting a legitimate license for FTK or using other tools for password recovery that are free or cheaper than FTK. These alternatives are safer and more reliable than using a crack.

-

If you need to gain access to password-protected files, then don't use a crack for PRTK. Use a legitimate license or another tool instead.

-

FAQs

-

What is Accessdata Password Recovery Toolkit?

-

Accessdata Password Recovery Toolkit (PRTK) is software that can recover passwords from encrypted files and containers.

-

What is a crack for Accessdata Password Recovery Toolkit?

-

A crack for Accessdata Password Recovery Toolkit (PRTK) is a program that modifies the software to bypass the activation process and use it for free.

-

Why should I avoid using a crack for Accessdata Password Recovery Toolkit?

-

You should avoid using a crack for Accessdata Password Recovery Toolkit (PRTK) because it is illegal and unethical; it exposes you to malware and compromises your results; it makes you look suspicious and untrustworthy.

-

What are some alternatives to using a crack for Accessdata Password Recovery Toolkit?

-

Some alternatives to using a crack for Accessdata Password Recovery Toolkit (PRTK) are getting a legitimate license for FTK or using other tools for password recovery that are free or cheaper than FTK.

-

Where can I get more information about Accessdata Password Recovery Toolkit?

-

You can get more information about Accessdata Password Recovery Toolkit (PRTK) from Exterro's website: https://www.exterro.com/ftk-product-downloads/password-recovery-toolkit-prtk-version-8-2-1

-

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk Client The Ultimate Guide to Downloading and Using the Best Remote Desktop Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk Client The Ultimate Guide to Downloading and Using the Best Remote Desktop Software.md
deleted file mode 100644
index 2582b0309feb7b82f8b281f2c74e10b6f9355ce7..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk Client The Ultimate Guide to Downloading and Using the Best Remote Desktop Software.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-

How to Download and Use AnyDesk Client for Remote Desktop Access

-

AnyDesk is fast and secure remote desktop software that allows you to access, control, and administer all your devices when working remotely. Whether you need to work from home, provide technical support, collaborate with your team, or access your personal computer, AnyDesk can help you do it easily and efficiently.

-

-

In this article, we will show you how to download and use AnyDesk client for Windows, one of the most popular operating systems supported by AnyDesk. You can also download AnyDesk for other platforms, such as macOS, Linux, Android, iOS, and more.

- -

How to Download AnyDesk Client for Windows

-

Downloading AnyDesk client for Windows is very simple and fast. Just follow these steps:

-
    -
  1. Go to https://anydesk.com/en/downloads/windows and click on the "Download Now" button. This will start downloading the latest version of AnyDesk for Windows (v7.1.11).
  2. Once the download is complete, open the file and follow the installation wizard. You can choose to install AnyDesk on your computer or run it as a portable application.
  3. After the installation is done, you will see the AnyDesk interface on your screen. You can now start using AnyDesk client for remote desktop access.
- -

How to Use AnyDesk Client for Remote Desktop Access

-

Using AnyDesk client for remote desktop access is very easy and intuitive. Here are some basic steps to get you started:

- - -

Why Choose AnyDesk Client for Remote Desktop Access

-

AnyDesk is one of the best remote desktop tools available on the market. Here are some of the reasons why you should choose AnyDesk for your remote desktop needs:

-

- - -

Conclusion

-

AnyDesk client is a powerful and reliable remote desktop software that can help you work remotely with ease and efficiency. Whether you need to access your personal computer, provide technical support, collaborate with your team, or manage your devices, AnyDesk can do it all for you.

-

To download and use AnyDesk client for Windows, just follow the simple steps we have shown you in this article. You can also download AnyDesk for other platforms from https://anydesk.com/.

-

If you have any questions or feedback about AnyDesk client, feel free to contact the AnyDesk support team.

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diner De ConsLe Movie Download __LINK__ 720p Movie.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diner De ConsLe Movie Download __LINK__ 720p Movie.md
deleted file mode 100644
index 15500b781f1828d73a1f1b0ddd688ed89e68076f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diner De ConsLe Movie Download __LINK__ 720p Movie.md
+++ /dev/null
@@ -1,21 +0,0 @@

How to Download Diner De Cons,Le Movie in 720p Quality

-

Diner De Cons,Le is a classic French comedy movie that was released in 1998. The movie tells the story of a group of friends who have a weekly dinner where they invite a fool to make fun of him. The movie is based on a play by Francis Veber and stars Thierry Lhermitte, Jacques Villeret and Francis Huster.

-

-

If you are looking for a way to download Diner De Cons,Le movie in 720p quality, you have come to the right place. In this article, we will show you how to use a torrent site to find and download the movie in high definition. Here are the steps you need to follow:

-
    -
  1. Go to a torrent site that has Diner De Cons,Le movie available. You can use sites like YTS.MX[^2^], Archive.org[^3^] or Crinponogu[^4^] to search for the movie.
  2. Select the movie quality you want to download. We recommend choosing 720p BluRay as it offers a good balance between file size and video quality.
  3. Download the torrent file or magnet link of the movie. You will need a torrent client software like uTorrent or BitTorrent to open the file or link and start downloading the movie.
  4. Wait for the download to finish. Depending on your internet speed and the number of seeders and peers, it may take from a few minutes to a few hours to complete the download.
  5. Enjoy watching Diner De Cons,Le movie in 720p quality. You can use any media player that supports MP4 format to play the movie on your computer or device.
-

Diner De Cons,Le is a hilarious and witty movie that will make you laugh out loud. It is one of the best French comedies ever made and has won several awards and nominations. If you want to watch this movie in 720p quality, follow the steps above and download it from a torrent site.

- -

If you want to know more about the plot of Diner De Cons,Le movie, here is a brief summary. The movie focuses on the interaction between Pierre Brochant and François Pignon, who are very different in personality and intelligence. Pierre is a successful and arrogant publisher, who enjoys mocking and humiliating others. François is a naive and kind-hearted tax inspector, who loves making matchstick models of famous monuments.

-

Pierre invites François to his apartment, hoping to take him to the dinner of fools later. However, he injures his back and has to stay home. He tries to get rid of François, but he keeps making things worse for him. He accidentally reveals Pierre's affair with his mistress Marlène to his wife Christine, who leaves him. He also invites a ruthless tax auditor Lucien Cheval to Pierre's apartment, who discovers Pierre's tax evasion. He also causes trouble with Pierre's old friend Juste Leblanc, who still loves Christine.

-

-

Through a series of hilarious and absurd situations, Pierre realizes that François is not as stupid as he thought. He also learns to appreciate his friendship and loyalty. He reconciles with Christine and Leblanc, and manages to escape from Cheval's investigation. He also decides to quit the dinner of fools, and invites François to have dinner with him as a friend.

-

Diner De Cons,Le movie is a brilliant comedy that explores the themes of human nature, friendship, and social class. It shows how appearances can be deceiving, and how people can surprise us with their hidden talents and qualities. It also criticizes the cruelty and snobbery of the rich and powerful, who exploit and ridicule the weak and innocent. It is a movie that will make you laugh and think at the same time.

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fizika masalalar yechish usullari pdf Oquv qollanma va namunalar.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fizika masalalar yechish usullari pdf Oquv qollanma va namunalar.md
deleted file mode 100644
index 9856ec066e876d9e4ec3fad8de2c0e93bbc5df96..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fizika masalalar yechish usullari pdf Oquv qollanma va namunalar.md
+++ /dev/null
@@ -1,105 +0,0 @@

Fizika Masalalar Yechish Usullari PDF

-

Introduction

-

Fizika masalalar yechish usullari (physics problem solving methods) are a set of strategies and techniques that can help you solve various types of physics problems. Physics problems are often challenging and complex, requiring you to apply your knowledge, skills, and creativity to find the correct solutions. Learning and practicing fizika masalalar yechish usullari can help you improve your understanding of physics concepts, develop your logical thinking and analytical skills, and enhance your confidence and motivation in learning physics.

-

-

Whether you are a student or a teacher of physics, you can benefit from using fizika masalalar yechish usullari. As a student, you can use these methods to tackle homework assignments, prepare for exams, and participate in competitions. As a teacher, you can use these methods to design effective learning activities, assess student performance, and provide feedback and guidance. In this article, you will learn about the types of physics problems, the general steps of problem solving in physics, some specific methods and techniques that can help you solve different types of problems, and some skills that you need to develop to become a better problem solver in physics. You will also find a link to download a PDF file that contains these methods and examples for your reference.

-

Types of Physics Problems

-

Physics problems can be classified into different types based on their level of difficulty, content, and format. Some common types of physics problems are:

- Qualitative problems, which ask you to explain or predict a physical situation in words, without detailed calculations
- Quantitative problems, which ask you to calculate a numerical value using formulas or equations
- Conceptual problems, which test your understanding of physical concepts and principles
- Application problems, which ask you to apply physics to real-world situations or systems
- Multiple-choice problems, which ask you to select the correct answer from several given options

- -

Each type of problem has its own features and challenges. For example, qualitative problems may require you to use your intuition or common sense, but they may also involve misconceptions or vague terms. Quantitative problems may require you to memorize or recall formulas or equations, but they may also involve errors or uncertainties in measurements or calculations. Conceptual problems may require you to synthesize or integrate multiple concepts or principles, but they may also involve assumptions or simplifications that may not be valid in reality. Application problems may require you to model or simulate real-world situations or systems, but they may also involve complex or unknown variables or parameters. Multiple-choice problems may require you to eliminate incorrect options or compare different options, but they may also involve distractors or tricks that may confuse you.
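For instance, "Why does a ball thrown straight up slow down as it rises?" is a qualitative problem, since it only needs the idea that gravity decelerates the ball, while "How high does a ball thrown upward at 10 m/s rise?" is a quantitative problem that requires choosing a formula and carrying out a calculation.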

-

Problem Solving Methods

-

The general steps of problem solving in physics are:

-
    -
  1. Read and understand the problem: In this step, you need to identify what is given and what is asked in the problem. You need to pay attention to the keywords, units, symbols, diagrams, graphs, tables, or other information provided in the problem statement. You also need to check whether any information is missing or extraneous in a way that may affect the solution.
  2. Plan a strategy: In this step, you need to decide how to approach the problem. You need to choose an appropriate type of problem solving method based on the type of problem. You also need to select relevant concepts, formulas, equations, principles, laws, rules, or relationships that are applicable to the problem.
  3. Execute the solution: In this step, you need to implement your strategy by performing calculations, manipulations, operations, or other actions that are required by your chosen method. You need to show your work clearly and systematically by writing down each step with proper notation, units, and explanations. You also need to check your work for errors, consistency, and reasonableness.
  4. Evaluate the result: In this step, you need to verify your result by comparing it with the given information, the expected outcome, or other sources. You need to check if your result makes sense physically, logically, and mathematically. You also need to report your result with appropriate units, significance, and accuracy.
-
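To see how these steps work together, consider a short worked example (a generic illustration, not taken from any specific textbook): a car starts from rest and accelerates uniformly at 2 m/s² for 5 s; how far does it travel? (1) Read: given u = 0, a = 2 m/s², t = 5 s; asked: the distance s. (2) Plan: the motion is uniformly accelerated, so use s = ut + ½at². (3) Execute: s = 0 + ½ × 2 m/s² × (5 s)² = 25 m. (4) Evaluate: the units combine to metres, and 25 m covered in 5 s from rest at a modest acceleration is physically reasonable.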

In addition to these general steps, there are some specific methods and techniques that can help you solve different types of problems. Some examples of these methods and techniques are:

-

- -

These are just some examples of the many methods and techniques that you can use to solve physics problems. You may need to use one or more of these methods or techniques depending on the type and complexity of the problem. You may also need to combine these methods or techniques with other skills or tools such as calculators, graphs, charts, or software.

-

Problem Solving Skills

-

To become a better problem solver in physics, you need to develop some skills that are essential for effective and efficient problem solving. Some of these skills are:

- -

To improve these skills, you need to practice regularly and systematically. You need to solve different types and levels of physics problems that challenge your knowledge, skills, and creativity. You also need to use feedback, reflection, and self-assessment to evaluate your performance and identify your strengths and weaknesses.

-

Conclusion

-

In this article, you have learned about fizika masalalar yechish usullari (physics problem solving methods), a set of strategies and techniques that can help you solve various types of physics problems. You have learned about the types of physics problems, the general steps of problem solving in physics, some specific methods and techniques that can help you solve different types of problems, and some skills that you need to develop to become a better problem solver in physics. You have also found a link to download a PDF file that contains these methods and examples for your reference.

-

Learning and practicing fizika masalalar yechish usullari can help you improve your understanding of physics concepts, develop your logical thinking and analytical skills, and enhance your confidence and motivation in learning physics. Whether you are a student or a teacher of physics, you can benefit from using fizika masalalar yechish usullari.

-

If you want to learn more about fizika masalalar yechish usullari, you can download this PDF file that contains these methods and examples: Fizika Masalalar Yechish Usullari PDF. You can also watch this video that explains some types of physics problems and how to solve them: Fizik masalalar turlari, ularni yechish metodlari.

-

We hope you enjoyed this article and found it useful. We encourage you to try out these methods and share your feedback with us. Happy problem solving!

-

FAQs

- -

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FoneLab 9.1.58 Crack With Activation Number Free Download 2019.md b/spaces/1gistliPinn/ChatGPT4/Examples/FoneLab 9.1.58 Crack With Activation Number Free Download 2019.md
deleted file mode 100644
index f8b2681ca881737412ff91f7bddb18d3eca01a1f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FoneLab 9.1.58 Crack With Activation Number Free Download 2019.md
+++ /dev/null
@@ -1,32 +0,0 @@

FoneLab 9.1.58 Crack With Activation Number Free Download 2019

-

FoneLab 9.1.58 Crack is a powerful and easy-to-use software that helps you recover deleted or lost data from your iOS devices, iTunes backup, or iCloud backup. It can recover various types of data, such as contacts, messages, photos, videos, notes, call history, WhatsApp, Safari bookmarks, and more. Whether you accidentally deleted your data, lost your device, or damaged it by water, virus, or system crash, FoneLab can help you get your data back in minutes.

-

In this article, we will show you how to download and install FoneLab 9.1.58 Crack with activation number for free. You will also learn about the features and benefits of using this software.

-

-

Features of FoneLab 9.1.58 Crack

-

FoneLab 9.1.58 Crack has many features that make it stand out from other data recovery software. Here are some of them:

- -

How to Download and Install FoneLab 9.1.58 Crack With Activation Number Free

-

If you want to try FoneLab 9.1.58 Crack with activation number for free, you can follow these steps:

-
    -
  1. Download the FoneLab 9.1.58 Crack setup file from this link.
  2. Extract the file using WinRAR or any other extraction tool.
  3. Run the setup file and follow the installation wizard.
  4. Copy the crack file from the crack folder and paste it into the installation directory.
  5. Run the software and enter the activation number from the readme file.
  6. Enjoy FoneLab 9.1.58 Crack with full features for free.
-

Conclusion

-

FoneLab 9.1.58 Crack is a reliable and professional data recovery tool that can help you recover your lost or deleted data from your iOS devices, iTunes backup, or iCloud backup in various data loss situations.

-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Unlock Tool The Best Way to Play PUBG Mobile at 90 FPS.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Unlock Tool The Best Way to Play PUBG Mobile at 90 FPS.md
deleted file mode 100644
index 114ea5d857d1396c92dfd3d134b5ba43248ad87e..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Unlock Tool The Best Way to Play PUBG Mobile at 90 FPS.md
+++ /dev/null
@@ -1,115 +0,0 @@

How to Unlock 90 FPS in PUBG Mobile and Why You Should Do It

-

PUBG Mobile is one of the most popular and competitive mobile games in the world. Millions of players enjoy the thrilling battle royale experience every day. However, not everyone gets to play the game at its full potential. If you want to take your PUBG Mobile gameplay to the next level, you should consider unlocking 90 FPS mode.

-

In this article, we will explain what FPS is and why it matters in PUBG Mobile, how to check your FPS in the game, how to enable 90 FPS mode, and what are the advantages and disadvantages of playing at 90 FPS. We will also share some tips and tricks to optimize your PUBG Mobile performance at 90 FPS. Finally, we will answer some frequently asked questions about 90 FPS mode in PUBG Mobile. Let's get started!

-

-

What is FPS and Why Does It Matter in PUBG Mobile?

-

FPS stands for frames per second and it determines how smooth the game looks and feels on your screen. The higher the FPS, the more frames are displayed per second, resulting in a smoother and more realistic motion. The lower the FPS, the fewer frames are displayed per second, resulting in a choppier and more laggy motion.
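In concrete terms, the time each frame stays on screen is 1000 ms divided by the FPS: at 30 FPS a frame lasts about 33.3 ms, at 60 FPS about 16.7 ms, and at 90 FPS about 11.1 ms, so the picture (and the feedback from your inputs) is refreshed roughly 50% more often at 90 FPS than at 60 FPS.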

-

Why does FPS matter in PUBG Mobile? Well, because it can affect your gameplay experience and performance in various ways. Higher FPS can give you an advantage in PUBG Mobile by improving your aim, reaction time, and visibility. Here are some of the benefits of playing PUBG Mobile at higher FPS:

- -

Of course, playing PUBG Mobile at higher FPS also has some drawbacks, such as higher battery consumption, more heat generation, and potential compatibility issues. We will discuss these in more detail later in this article.

-

How to Check Your FPS in PUBG Mobile

-

Before you enable 90 FPS mode in PUBG Mobile, you may want to check your current FPS in the game. This way, you can see how much improvement you can get from 90 FPS mode. There are two ways to check your FPS in PUBG Mobile:

- -

Once you have checked your FPS in PUBG Mobile, you can proceed to enable 90 FPS mode if you want to enjoy a smoother gameplay experience.

-

How to Enable 90 FPS in PUBG Mobile

-

Enabling 90 FPS mode in PUBG Mobile is not very difficult, but it may not be available for everyone. Not all devices support 90 FPS mode in PUBG Mobile; only a few models from OnePlus, Samsung, Xiaomi, Google, and Apple do. You can check the list of supported devices on various websites or forums.

-

If you have a supported device, you can enable 90 FPS mode in PUBG Mobile by following these steps:

-
    -
  1. Go to Settings > Graphics > Frame Rate
  2. Select 90 FPS from the options
  3. Enjoy the game at 90 FPS!
-

You may need to set your graphics quality to Smooth to unlock 90 FPS option. This will lower the resolution and texture quality of the game, but it will also improve the performance and stability of the game. You can also adjust other graphics settings according to your preference and device capability.

-


-

Advantages and Disadvantages of Playing PUBG Mobile at 90 FPS

-

As we mentioned earlier, playing PUBG Mobile at 90 FPS has its pros and cons. Here are some of the advantages and disadvantages of playing PUBG Mobile at 90 FPS:

-

Advantages

- Smoother and more responsive gameplay, since more frames are displayed every second
- Better aim and target tracking, especially in fast-paced fights
- Faster reaction time, because what happens in the game reaches your screen sooner
- Improved visibility of fast-moving enemies and objects

- -

Disadvantages

- Higher battery consumption, since the device has to render more frames
- More heat generation during long sessions
- Potential compatibility issues, as 90 FPS mode is only available on a limited set of devices

- -

Tips and Tricks to Optimize Your PUBG Mobile Performance at 90 FPS

-

If you want to play PUBG Mobile at 90 FPS and get the best performance possible, you should follow some tips and tricks to optimize your device and game settings. Here are some of the tips and tricks that you can try:

- Use a device with a high refresh rate (90 Hz or higher) screen
- Close background apps before playing to free up memory and CPU
- Adjust your sensitivity settings and controls to suit the smoother frame rate
- Use headphones or earphones so you can react quickly to sound cues
- Practice your skills in training mode or arcade mode before jumping into ranked matches

- -

Conclusion

-

PUBG Mobile is a fun and exciting game that can be enjoyed by anyone. However, if you want to have a more smooth and competitive gameplay experience, you should try playing the game at 90 FPS mode. This mode can make the game look and feel more realistic and responsive, giving you an edge over your opponents. However, you should also be aware of the drawbacks of playing at 90 FPS mode, such as higher battery consumption, more heat generation, and potential compatibility issues. You should also follow some tips and tricks to optimize your PUBG Mobile performance at 90 FPS mode, such as using a high refresh rate device, closing background apps, adjusting sensitivity settings and controls, using headphones or earphones, and practicing your skills in training mode or arcade mode.

-

We hope this article has helped you understand how to unlock 90 FPS mode in PUBG Mobile and why you should do it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Q1. How can I play PUBG Mobile at 90 FPS on unsupported devices?

-

A1. You may need to use a third-party tool like GFX Tool or FlashDog to modify the game files and enable 90 FPS option. However, this is not recommended as it may violate the game's terms of service and result in a ban.

-

Q2. How can I check if my device supports 90 FPS in PUBG Mobile?

-

A2. You can check the list of supported devices on various websites or forums. Alternatively, you can go to Settings > Graphics > Frame Rate and see if the 90 FPS option is available for you.

-

Q3. What is the difference between 60 FPS and 90 FPS in PUBG Mobile?

-

A3. The difference between 60 FPS and 90 FPS is that the latter displays more frames per second, making the game look smoother and more responsive. However, the difference may not be noticeable for some people or on some devices.

-

Q4. Does playing PUBG Mobile at 90 FPS affect my ping or network latency?

-

A4. No, playing PUBG Mobile at 90 FPS does not affect your ping or network latency. Ping is determined by your internet connection speed and quality, not by your frame rate.

-

Q5. What are some other ways to improve my PUBG Mobile performance besides enabling 90 FPS?

-

A5. Some other ways to improve your PUBG Mobile performance are updating your game and device software, clearing your cache and storage space, using a stable Wi-Fi connection, and avoiding playing in hot or humid environments.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Monster in My Pocket APK and Play the Classic Atari Remake.md b/spaces/1phancelerku/anime-remove-background/Download Monster in My Pocket APK and Play the Classic Atari Remake.md
deleted file mode 100644
index 675dfbfbc9aa42ac1f10ce251a8a1cec62803ccd..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Monster in My Pocket APK and Play the Classic Atari Remake.md
+++ /dev/null
@@ -1,140 +0,0 @@
-

Monster in My Pocket Game Download for Android

-

Do you love monsters and platform games? Do you want to relive your childhood memories of collecting and playing with tiny monster figures? If you answered yes to any of these questions, then you might be interested in Monster in My Pocket, a classic NES game that is now available for Android devices. In this article, we will tell you everything you need to know about this game, including what it is, how to download and play it on your Android phone or tablet, and why you should give it a try.

-

-

What is Monster in My Pocket?

-

Monster in My Pocket is a media franchise that was developed by American company Morrison Entertainment Group in the late 1980s and early 1990s. The franchise focused on monsters and fantastical creatures from various sources, such as religion, mythology, folklore, fairy tales, literature, science fiction, and cryptozoology. The franchise produced trading cards, comic books, books, toys, a board game, a video game, and an animated special, along with music, clothing, kites, stickers, and other items.

-

A brief history of the franchise

-

The most popular product of the franchise was the toy line, which was released by Matchbox in 1990. It consisted of small, soft plastic figures representing different monsters, each with a point value assigned to them. There were over 200 monsters in the collection, ranging from well-known ones like Dracula, Frankenstein's Monster, and Werewolf, to obscure ones like Catoblepas, Haniver, and Tengu. The toys were initially solid-colored, but later series added more painted colors and details.

-

The toy line also inspired a comic book series that was published by Harvey Comics from 1991 to 1992. The comic book series followed the story of Vampire and Monster (the two main protagonists of the franchise) as they battled against Warlock (the main antagonist) and his army of evil monsters. The comic book series also introduced new characters and monsters that were not part of the toy line.

-

-

The video game adaptation of the franchise was produced by Konami in 1992 for the NES platform. It was a platformer that followed the storyline of the comic book series fairly closely. It featured Vampire and Monster as playable characters who had to stop Warlock's plan to take over the world. The game had six stages set in different locations, such as a house, a kitchen, a sewer, a city, an oriental temple, and a mountain. The game also had various enemies and bosses that were based on the toy figures.

-

The main features of the game

-

The game had several features that made it stand out from other platformer games at the time. Some of these features were:

  - A two-player co-op mode, so a friend could join in as the second protagonist.
  - A double jump mechanic that made the platforming more flexible.
  - A unique attack system that set it apart from standard punch-and-jump platformers.
  - A variety of items to collect and use throughout the stages.
  - Colorful graphics and a catchy soundtrack that captured the spirit of the toy line.


How to download and play Monster in My Pocket on Android?

-

Now that you know what Monster in My Pocket is and why it is such a great game, you might be wondering how you can download and play it on your Android device. Well, the good news is that it is not very difficult to do so, as long as you follow these simple steps:

-

The steps to download the APK file

-

The first thing you need to do is to download the APK file of the game, which is a file format that allows you to install and run applications on Android devices. There are many websites that offer APK files of various games, but not all of them are safe and reliable. Therefore, we recommend that you use a trusted and reputable source, such as [APKPure] or [APKMirror]. Here are the steps to download the APK file of Monster in My Pocket from APKPure:

-
    -
  1. Go to the [APKPure website] and search for "Monster in My Pocket" in the search bar.
  2. Select the game from the list of results and click on the "Download APK" button.
  3. Wait for the download to finish and locate the file in your device's storage.
-
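Before you install anything, it is worth confirming that the file you saved is the one the site meant to serve. The short Python sketch below is one way to do that on a computer: it computes the SHA-256 hash of the downloaded file so you can compare it against a checksum if the download page happens to publish one. The file name and the expected checksum here are placeholders, not real values from APKPure.

```python
import hashlib

# Placeholder path to the APK you just downloaded - adjust to your file name.
apk_path = "monster_in_my_pocket.apk"

# Placeholder for a published SHA-256 checksum, if the download page provides one.
expected_sha256 = "paste-the-published-checksum-here"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Read the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of_file(apk_path)
print("SHA-256 of download:", actual)
if expected_sha256 != "paste-the-published-checksum-here":
    print("Checksum matches." if actual == expected_sha256 else "Checksum does NOT match - do not install this file.")
```

Even if the site publishes no checksum, the hash is still a useful fingerprint you can compare against copies of the same file from other sources.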

The steps to install and run the game

-

The next thing you need to do is to install and run the game on your device. However, before you do that, you need to make sure that your device allows the installation of apps from unknown sources, which are sources other than the Google Play Store. To do that, you need to follow these steps:

-
    -
  1. Go to your device's settings and look for the option "Security" or "Privacy".
  2. Tap on it and find the option "Unknown sources" or "Install unknown apps".
  3. Enable it by toggling the switch or checking the box.
-

Once you have done that, you can proceed to install and run the game by following these steps:

-
    -
  1. Go to your device's file manager and locate the APK file of Monster in My Pocket that you downloaded earlier.
  2. Tap on it and follow the instructions on the screen to install the game.
  3. Wait for the installation to finish and look for the game's icon on your device's home screen or app drawer.
  4. Tap on it and enjoy playing Monster in My Pocket on your Android device.
-
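If you would rather install the file from a computer than tap through a file manager, Android's adb tool can do the same job over USB. The Python sketch below simply drives adb through subprocess; it assumes the Android platform tools are installed and on your PATH, USB debugging is enabled on the phone, and the APK path is a placeholder you would replace with your own.

```python
import subprocess

# Placeholder path to the APK saved on your computer.
apk_path = "monster_in_my_pocket.apk"

# 'adb install -r' installs the package, replacing any existing copy.
# Requires adb on PATH and a device connected with USB debugging enabled.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Install failed:", result.stderr)
```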

The tips and tricks to enjoy the game

-

To make the most out of your gaming experience, here are some tips and tricks that you can use while playing Monster in My Pocket:

- -

Why should you play Monster in My Pocket on Android?

-

You might be wondering why you should play Monster in My Pocket on Android when there are so many other games available for this platform. Well, there are many reasons why this game is worth playing on Android, such as:

-

The benefits of playing on a mobile device

-

Playing Monster in My Pocket on Android has several advantages over playing it on other platforms, such as:

- -

The challenges and fun of the game

-

Playing Monster in My Pocket on Android is also challenging and fun, because:

- -

The nostalgia and charm of the game

-

Playing Monster in My Pocket on Android is also nostalgic and charming, because:

- -

Conclusion

-

In conclusion, Monster in My Pocket is a classic NES game that is now playable on Android devices. It is a platformer that features monsters and fantastical creatures from many sources, and it stands out from other platformers of its era thanks to its co-op mode, double jump mechanic, unique attack system, variety of items, colorful graphics, and catchy soundtrack. It is also convenient, challenging, and fun to play on a mobile device, with plenty of nostalgic charm for longtime fans. If you are a fan of monsters and platform games, you should definitely download and play Monster in My Pocket on your Android device today.

-

A call to action for the readers

-

If you are interested in playing Monster in My Pocket on your Android device, you can download it from [APKPure] or [APKMirror] by following the steps explained above. You can also check out other sources that offer APK files of this game, but make sure they are safe and reliable. Once the game is installed, you can start playing right away and see for yourself why this quirky little platformer still holds up.

-

FAQs

-

Here are some frequently asked questions about Monster in My Pocket:

-
    -
  1. Q: Is Monster in My Pocket free to play on Android?
    A: Yes, Monster in My Pocket is free to play on Android devices. However, some websites may require you to register or complete surveys before downloading the APK file of the game.
  2. Q: Is Monster in My Pocket compatible with all Android devices?
    A: No, Monster in My Pocket may not be compatible with some Android devices due to different hardware specifications or software versions. If you encounter any problems while playing the game on your device, you may need to update your device's system or use an emulator.
  3. Q: Is Monster in My Pocket safe to play on Android?
    A: Yes, Monster in My Pocket is safe to play on Android devices as long as you download it from a trusted and reputable source, such as [APKPure] or [APKMirror]. However, you should always be careful when downloading any APK file from unknown sources, as they may contain viruses or malware that could harm your device or steal your data.
  4. Q: Is Monster in My Pocket still popular today?
    A: Yes, Monster in My Pocket still has a loyal fan base today who appreciate its retro appeal and nostalgic charm. The franchise also has a cult following among collectors who seek rare or exclusive items related to it.
  5. Q: Are there any other games like Monster in My Pocket?
    A: Yes, there are many other games that combine monsters and platforming, such as:
    - Castlevania: A series of games that feature vampire hunters and other supernatural creatures in a Gothic setting.
    - Ghosts 'n Goblins: A series of games that feature a knight who has to rescue his princess from demons and the undead.
    - Little Nemo: The Dream Master: A game that features a boy who can enter the dream world and transform into different animals.
    - Kid Dracula: A game that features a young vampire who has to defeat his father's enemies and reclaim his throne.
    - Monster Party: A game that features a boy who teams up with a monster to fight against bizarre and grotesque enemies.
-

I hope you enjoyed reading this article and learned something new about Monster in My Pocket. If you have any questions or comments, feel free to leave them below. Thank you for your time and attention.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Spider-Man 2000 APK - The Classic Web-Slinging Adventure on Android.md b/spaces/1phancelerku/anime-remove-background/Download Spider-Man 2000 APK - The Classic Web-Slinging Adventure on Android.md deleted file mode 100644 index 2537146e1a527a3ca6860842786addc0c61eb2e4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Spider-Man 2000 APK - The Classic Web-Slinging Adventure on Android.md +++ /dev/null @@ -1,96 +0,0 @@ - -

Spider-Man (2000 Video Game) Download APK: How to Play the Classic Game on Your Android Device

-

If you are a fan of Spider-Man, you might remember the classic video game that was released in 2000 for PlayStation, Nintendo 64, Dreamcast, PC, and Game Boy Color. The game was based on the comic book series and featured an original story that involved Spider-Man fighting against various villains such as Venom, Carnage, Doctor Octopus, Mysterio, Rhino, Scorpion, and more. The game was praised for its graphics, gameplay, voice acting, and faithful adaptation of the Spider-Man universe.

-

spider-man (2000 video game) download apk


Download File ⚙⚙⚙ https://jinyurl.com/2uNNwV



-

But what if you want to play this game in 2023 on your Android device? Is it possible to download and install an apk file that will let you enjoy this classic game on your smartphone or tablet? The answer is yes, but there are some things you need to know before you do so. In this article, we will show you how to download and install Spider-Man (2000 video game) apk on your Android device, what are the features and gameplay of this game, what are the pros and cons of playing it on your mobile device, and whether it is worth playing in 2023.

-

How to download and install Spider-Man (2000 video game) apk on your Android device?

-

The first thing you need to do is to find a reliable source that offers the apk file for Spider-Man (2000 video game). There are many websites that claim to provide this file, but not all of them are trustworthy or safe. Some of them might contain malware or viruses that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before you download anything from an unknown source.

-

One of the websites that we recommend is APKCombo, which is a reputable platform that offers free apk files for various Android games and apps. You can visit their website and search for "Spider-Man 2", which is the name of the apk file for Spider-Man (2000 video game). You will see a page that shows you some information about the file, such as its size, version, developer, rating, and screenshots. You can also read some reviews from other users who have downloaded and played the game. If you are satisfied with the file, you can click on the "Download APK" button and save the file to your device.
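Clicking the "Download APK" button in your browser is all most people need. If you ever want to script that step from a computer instead, for example to keep a backup copy of the file, a minimal sketch with Python's requests library might look like the one below; the URL is a placeholder, not the real APKCombo link, and you should only point it at a source you already trust.

```python
import requests

# Placeholder URL - replace with the download link shown on the page you trust.
apk_url = "https://example.com/path/to/spider-man-2.apk"
out_path = "spider-man-2.apk"

# Stream the response so a large APK is never held in memory all at once.
response = requests.get(apk_url, stream=True, timeout=30)
response.raise_for_status()

size = 0
with open(out_path, "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 16):
        f.write(chunk)
        size += len(chunk)

print(f"Saved {out_path} ({size / 1_000_000:.1f} MB)")
```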

-

The next thing you need to do is to install the apk file on your device. Before you do that, you need to enable the "Unknown sources" option in your device's settings. This will allow you to install apps that are not from the official Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and toggle it on. You might see a warning message that tells you about the risks of installing apps from unknown sources, but you can ignore it if you trust the source of the apk file.
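Before you loosen that setting for a file that did not come from the Play Store, it can be reassuring to confirm that what you downloaded at least has the structure of a real Android package. An APK is just a ZIP archive with a known layout, so the sketch below, meant to be run on a computer with Python, checks that the archive opens cleanly and contains the entries every APK should have. It is not a malware scan, and the file name is a placeholder.

```python
import zipfile

# Placeholder path to the downloaded APK.
apk_path = "spider-man-2.apk"

try:
    with zipfile.ZipFile(apk_path) as apk:
        names = apk.namelist()
        # Every valid APK carries a binary manifest and compiled Dalvik bytecode.
        has_manifest = "AndroidManifest.xml" in names
        has_dex = any(name.endswith(".dex") for name in names)
        # testzip() returns the first corrupt member, or None if all entries check out.
        corrupt = apk.testzip()

    print("Manifest present:", has_manifest)
    print("Dex code present:", has_dex)
    print("Archive intact:", corrupt is None)
except zipfile.BadZipFile:
    print("This file is not a valid APK/ZIP archive - do not install it.")
```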

-


Once you have enabled the unknown sources option, you can go to your device's file manager and locate the apk file that you have downloaded. Tap on it and follow the instructions on the screen to install it. You might see some permissions requests that ask you to allow the app to access your device's storage, camera, microphone, etc. You can grant these permissions if you want to enjoy the full features of the game, or deny them if you are concerned about your privacy. After the installation is complete, you will see an icon for Spider-Man 2 on your device's home screen or app drawer. Tap on it and start playing!

-

What are the features and gameplay of Spider-Man (2000 video game)?

-

Spider-Man (2000 video game) is an action-adventure game that lets you control Spider-Man as he swings, crawls, fights, and explores New York City. The game has a third-person perspective and uses a combination of 3D graphics and 2D sprites. The game also features voice acting from some of the actors who played Spider-Man and his allies and enemies in various animated series, such as Rino Romano, Jennifer Hale, Dee Bradley Baker, Daran Norris, and Mark Hamill.

-

The game has four modes: training, story, what if?, and gallery. The training mode teaches you the basic controls and moves of Spider-Man, such as web swinging, wall crawling, web shooting, punching, kicking, etc. The story mode follows the main plot of the game, which involves Spider-Man being framed for a bank robbery by a mysterious imposter and having to clear his name while facing various threats from his enemies. The what if? mode is a variation of the story mode that changes some of the events and outcomes of the game based on different choices and actions. The gallery mode lets you view various artworks, comic book covers, character bios, and cheats that you can unlock by playing the game.

-

The gameplay of Spider-Man (2000 video game) is divided into levels that have different objectives and challenges. Some of the levels require you to reach a certain destination, while others require you to defeat a certain number of enemies or a boss. Some of the levels also have optional objectives that can reward you with bonus points or items. The game also has some stealth elements that require you to avoid detection by enemies or cameras. The game also has some puzzle elements that require you to use your webbing or other items to solve problems or access hidden areas.

-

The game also has a variety of villains that Spider-Man has to face throughout the game. Some of them are classic foes from the comic books, such as Venom, Carnage, Doctor Octopus, Mysterio, Rhino, Scorpion, Lizard, Electro, Sandman, and Vulture. Some of them are original creations for the game, such as Monster Ock, a fusion of Doctor Octopus and Carnage, and the Spider-Slayers, robotic enemies that hunt Spider-Man. The game also has some allies that help Spider-Man along the way, such as Black Cat, Daredevil, Captain America, Human Torch, and Punisher.

-

The game also has some easter eggs that reference other Marvel characters and events, such as the Fantastic Four, Iron Man, Thor, Hulk, X-Men, Blade, Ghost Rider, and more. Some of these easter eggs can be found by exploring the levels or using certain cheats. For example, if you enter the cheat code "GBHSRSPM", you can play as Spider-Man wearing a Fantastic Four costume with a paper bag over his head, which is a reference to an issue of the comic book where Spider-Man did the same thing.

-

What are the pros and cons of playing Spider-Man (2000 video game) on your Android device?

-

Playing Spider-Man (2000 video game) on your Android device can have some advantages and disadvantages compared to playing it on a console or a PC. Here are some of them:

-

Pros

  - You can carry the game with you and play it anywhere on your smartphone or tablet, with no console or PC required.
  - The APK file is free to download, so trying the game costs you nothing.
  - Installation takes only a few minutes once you have the file.

Cons

  - The game might not run smoothly or properly on some devices due to technical issues or limitations.
  - It can consume a noticeable amount of your device's battery, storage, or data.
  - Downloading the APK from an unreliable source or granting unnecessary permissions can put your device's security or privacy at risk.
  - Downloading the game from an unofficial source or altering its content may raise legal or ethical issues.

Conclusion: Is Spider-Man (2000 video game) worth playing in 2023?

-

Spider-Man (2000 video game) is a classic game that many Spider-Man fans and gamers still love and enjoy today. The game has a captivating story, engaging gameplay, impressive graphics, and memorable voice acting. The game also has a lot of features and easter eggs that make it fun and rewarding to play. The game is also compatible with Android devices, which means you can play it on your smartphone or tablet with ease.

-

However, playing Spider-Man (2000 video game) on your Android device also has some drawbacks that you need to consider. The game might not run smoothly or properly on some devices due to technical issues or limitations, and it can consume a lot of your device's battery, storage, or data. Downloading the APK file from an unreliable source or granting unnecessary permissions to the app can also put your device's security or privacy at risk. Finally, downloading the file from an unofficial source or altering the game's content without authorization may raise legal or ethical issues.

-

Therefore, whether Spider-Man (2000 video game) is worth playing in 2023 depends on your personal preference and judgment. If you are a fan of Spider-Man or retro games, and you have a compatible and secure device, you might enjoy playing this game and reliving its nostalgia. However, if you are not interested in Spider-Man or old games, or you have an incompatible or unsafe device, you might not like playing this game and find it outdated or boring.

-

The choice is yours. If you want to try out Spider-Man (2000 video game) on your Android device, you can follow the steps we have provided in this article to download and install the apk file from APKCombo. If you have any questions or feedback about this game or this article, feel free to leave a comment below. We would love to hear from you!

-

FAQs

-

Q: What is the difference between Spider-Man (2000 video game) and Spider-Man 2: Enter Electro?

A: Spider-Man 2: Enter Electro is the sequel to Spider-Man (2000 video game) that was released in 2001 for PlayStation and PC. The game follows a new story that involves Spider-Man trying to stop Electro from obtaining a powerful device that can amplify his powers. The game has some improvements and additions over the first game, such as new moves, costumes, levels, enemies, and bosses. However, the game also has some drawbacks, such as lower graphics quality, shorter gameplay, and less voice acting.

Q: How can I play Spider-Man (2000 video game) on other devices besides Android?

- A: If you want to play Spider-Man (2000 video game) on other devices, you have a few options. You can play the original version of the game on PlayStation, Nintendo 64, Dreamcast, PC, or Game Boy Color if you have these consoles or emulators. You can also play a remastered version of the game on PlayStation 3 or Xbox 360 if you have these consoles or emulators. You can also play a ported version of the game on iOS or Windows Phone if you have these devices or emulators.

Q: What are some of the cheats and mods that I can use for Spider-Man (2000 video game)?

- A: There are many cheats and mods that you can use for Spider-Man (2000 video game) to make it more fun or challenging. Some of the cheats are codes that you can enter in the main menu or during the game to unlock various features, such as costumes, levels, characters, abilities, etc. Some of the mods are files that you can download and install on your device to change some aspects of the game, such as graphics, sound, gameplay, etc. You can find some of the cheats and mods online from various sources, such as YouTube, Reddit, APKPure, etc.

Q: What are some of the best Spider-Man games that I can play in 2023?

- A: There are many Spider-Man games that you can play in 2023 that are based on different versions of Spider-Man from different media, such as comics, movies, cartoons, etc. Some of the best Spider-Man games that we recommend are:
  - Spider-Man: Miles Morales (2020): A spin-off of Spider-Man (2018) that follows Miles Morales as he becomes the new Spider-Man and faces new threats in New York City.
  - Marvel's Spider-Man (2018): A critically acclaimed game that features an original story and gameplay that lets you explore a realistic and dynamic open-world New York City as Spider-Man.
  - Spider-Man: Shattered Dimensions (2010): A unique game that lets you play as four different versions of Spider-Man from different dimensions and timelines, each with their own abilities and styles.
  - Ultimate Spider-Man (2005): A comic book-inspired game that lets you play as both Spider-Man and Venom in a cel-shaded world that follows the Ultimate Marvel storyline.
  - Spider-Man 2 (2004): A movie-based game that lets you swing freely around a large and detailed New York City as Spider-Man and face various villains and challenges.

Q: How can I learn more about Spider-Man and his universe?

- A: If you are interested in learning more about Spider-Man and his universe, there are many sources that you can check out. You can read some of the comic books that feature Spider-Man and his allies and enemies from different eras and genres. You can watch some of the movies or shows that adapt or expand on Spider-Man's stories and characters from different perspectives and styles. You can also visit some of the websites or forums that discuss or analyze Spider-Man's lore and trivia from various angles and viewpoints.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FIFA Mobile 22 Hack How to Unlock All Players Kits and Stadiums for Free.md b/spaces/1phancelerku/anime-remove-background/FIFA Mobile 22 Hack How to Unlock All Players Kits and Stadiums for Free.md deleted file mode 100644 index 4b0bec610d5299e57d4a59642044c2df957d4856..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FIFA Mobile 22 Hack How to Unlock All Players Kits and Stadiums for Free.md +++ /dev/null @@ -1,101 +0,0 @@ -
-

Download FIFA Mobile 22 Hack: How to Get Unlimited Coins and Gems

-

FIFA Mobile 22 is one of the most popular and exciting soccer games on mobile devices. It allows you to build your ultimate team with over 15,000 authentic players from more than 600 clubs, compete in various modes such as World Cup, Champions League, Manager Mode, or Head-to-Head, and enjoy realistic graphics and gameplay. However, FIFA Mobile 22 also has some challenges and limitations that might frustrate some players. For example, you need coins and gems to buy players, upgrade your team, unlock features, or access premium content. Coins and gems are not easy to come by in the game, especially if you are a free-to-play user. You might have to spend a lot of time, effort, or even real money to get enough coins and gems for your needs.

-

download fifa mobile 22 hack


Download Zip ○○○ https://jinyurl.com/2uNLEr



-

That's why some players might want to hack FIFA Mobile 22 to get unlimited coins and gems. By hacking FIFA Mobile 22, you can bypass the restrictions and enjoy the game without any hassle. You can buy any player you want, upgrade your team to the max level, unlock all the features and events, or access any premium content you desire. Sounds tempting, right? But before you rush to download FIFA Mobile 22 hack, you should be aware of the risks and consequences of doing so. In this article, we will explain what are the dangers of hacking FIFA Mobile 22, how to download FIFA Mobile 22 hack safely and easily, and what are some alternative ways to get coins and gems without hacking.

-

Disclaimer: The risks and consequences of hacking FIFA Mobile 22

-

Before we proceed, we have to warn you about the potential dangers of hacking FIFA Mobile 22. Hacking is an illegal and unethical activity that violates the terms of service of EA Sports, the developer of FIFA Mobile 22. If you hack FIFA Mobile 22, you might face some serious consequences, such as:

  - A permanent ban or suspension of your game account, which means losing all your progress, players, and purchases.
  - Legal action from EA Sports for violating their terms of service and intellectual property rights.
  - Viruses or malware bundled with the hack files that can harm your device or steal your personal information.
  - The loss of the fun, challenge, satisfaction, and integrity of playing the game fairly.

Therefore, we advise you to proceed at your own risk and responsibility if you decide to hack FIFA Mobile 22. We also suggest you to use a reliable and trusted source for downloading FIFA Mobile 22 hack, such as the one we will provide in the next section. Do not download FIFA Mobile 22 hack from unknown or suspicious websites, as they might contain viruses or malware that can harm your device or steal your information.

-

How to download FIFA Mobile 22 hack: A step-by-step guide

-

If you are still interested in hacking FIFA Mobile 22, here is a step-by-step guide on how to download FIFA Mobile 22 hack from a reputable website. Follow these instructions carefully and you will be able to get unlimited coins and gems in no time.

-

    -
  1. Go to the website [text], which is one of the best and safest sources for downloading FIFA Mobile 22 hack. You can access the website from any browser or device.
  2. On the homepage, you will see a button that says "Download FIFA Mobile 22 Hack". Click on it and you will be redirected to a verification page.
  3. On the verification page, you will have to complete a short and simple survey or offer to prove that you are a human and not a bot. This is a necessary step to prevent abuse and ensure the quality of the service. The survey or offer will only take a few minutes and will not cost you anything.
  4. After completing the verification, you will be able to download FIFA Mobile 22 hack as an APK file. Save the file on your device and locate it using a file manager.
  5. Before installing the APK file, make sure that you have enabled the "Unknown Sources" option in your device settings. This will allow you to install apps from sources other than the Google Play Store.
  6. Tap on the APK file and follow the instructions on the screen to install FIFA Mobile 22 hack on your device. You might have to grant some permissions to the app for it to work properly.
  7. Once the installation is done, you can launch FIFA Mobile 22 hack from your app drawer or home screen. You will see a user-friendly interface that will let you customize your preferences and settings.
  8. Enter the amount of coins and gems that you want to generate and click on the "Start Hack" button. The hack will start working and inject the resources into your game account.
  9. Wait for a few seconds or minutes until the hack is finished. You will see a confirmation message on the screen when it is done.
  10. Open FIFA Mobile 22 and enjoy your unlimited coins and gems. You can use them to buy players, upgrade your team, unlock features, or access premium content as much as you want.
-

Congratulations! You have successfully hacked FIFA Mobile 22 and got unlimited coins and gems. You can now enjoy the game without any limitations or restrictions. However, if you are not comfortable with hacking FIFA Mobile 22 or want to try some alternative ways to get coins and gems without hacking, keep reading.

-

Alternative ways to get coins and gems without hacking

-

Hacking FIFA Mobile 22 is not the only way to get coins and gems in the game. There are some legitimate and safe methods that you can use to earn coins and gems without breaking any rules or risking any consequences. Here are some of them:

| Method | Description | Benefits | Drawbacks |
| --- | --- | --- | --- |
| Completing tasks | FIFA Mobile 22 offers various tasks that you can complete to earn coins and gems. These include daily, weekly, monthly, seasonal, or special tasks that require you to perform certain actions or achieve certain goals in the game. | Easy and simple; rewarding and satisfying; diverse and varied | Time-consuming; repetitive; limited |
| Participating in events | FIFA Mobile 22 also features various events that you can participate in to earn coins and gems. These include World Cup, Champions League, Manager Mode, Head-to-Head, or other themed events that offer different challenges and rewards. | Fun and exciting; competitive and challenging; generous and lucrative | Difficult; demanding; seasonal |
| Achieving achievements | FIFA Mobile 22 has a list of achievements that you can achieve to earn coins and gems. These include milestones, records, feats, or accomplishments that reflect your progress and performance in the game. | Motivating and inspiring; reflective and rewarding; incremental and cumulative | Hard and rare; fixed and finite; hidden and obscure |
| Watching ads | FIFA Mobile 22 allows you to watch ads to earn coins and gems. These ads are usually short and relevant to the game or your interests. You can watch ads from the store, the rewards center, or the events page. | Quick and easy; free and unlimited; optional and voluntary | Boring and annoying; low and variable; intrusive and distracting |
-

As you can see, there are some pros and cons of each method. You can choose the one that suits your preferences, goals, and playstyle. You can also combine different methods to maximize your coin and gem income. Here are some tips and tricks on how to optimize your coin and gem income in FIFA Mobile 22:

- -

Conclusion: Summarize the main points and give a final verdict

-

In conclusion, FIFA Mobile 22 is a great soccer game that offers a lot of fun and excitement for mobile gamers. However, it also has some challenges and limitations that might make some players want to hack it to get unlimited coins and gems. In this article, we have explained what are the risks and consequences of hacking FIFA Mobile 22, how to download FIFA Mobile 22 hack safely and easily, and what are some alternative ways to get coins and gems without hacking.

-

Our final verdict is that hacking FIFA Mobile 22 is not worth it. It is illegal, unethical, risky, and unnecessary. You might end up losing more than you gain by hacking FIFA Mobile 22. You might lose your account, your progress, your data, or even your device. You might also lose the fun, challenge, satisfaction, and integrity of playing FIFA Mobile 22.

-

We recommend you to play FIFA Mobile 22 without hacking it. You can still enjoy the game without unlimited coins and gems. You can still earn enough coins and gems by playing the game legitimately and safely. You can still build your ultimate team, compete in various modes, and enjoy realistic graphics and gameplay.

-

We hope you found this article helpful and informative. If you have any questions, comments, or feedback, please feel free to share them in the comments section below. We would love to hear from you. Thank you for reading!

-

FAQs

-
    -
  1. Q: Is FIFA Mobile 22 hack safe to use?
    A: No, FIFA Mobile 22 hack is not safe to use. It might contain viruses or malware that can harm your device or steal your information. It might also get you banned from the game or sued by EA Sports.
  2. Q: Is FIFA Mobile 22 hack free to download?
    A: Yes, FIFA Mobile 22 hack is free to download from some websites. However, you might have to complete a verification process before downloading it. You might also have to pay for some features or updates of the hack.
  3. Q: Is FIFA Mobile 22 hack compatible with all devices?
    A: No, FIFA Mobile 22 hack is not compatible with all devices. It might only work on certain devices or operating systems. It might also require root or jailbreak access for some devices.
  4. Q: Is FIFA Mobile 22 hack legal to use?
    A: No, FIFA Mobile 22 hack is not legal to use. It violates the terms of service of EA Sports, the developer of FIFA Mobile 22. It also infringes on their intellectual property rights.
  5. Q: Is FIFA Mobile 22 hack ethical to use?
    A: No, FIFA Mobile 22 hack is not ethical to use. It gives you an unfair advantage over other players who play the game without hacking it. It also ruins the fun, challenge, satisfaction, and integrity of playing FIFA Mobile 22.
-
197e85843d
-
-
\ No newline at end of file diff --git a/spaces/2ndelement/voicevox/voicevox_engine/utility/connect_base64_waves.py b/spaces/2ndelement/voicevox/voicevox_engine/utility/connect_base64_waves.py deleted file mode 100644 index 37f95240966f9bfed1cfe6e9090f871cea331ef7..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/voicevox_engine/utility/connect_base64_waves.py +++ /dev/null @@ -1,60 +0,0 @@ -import base64 -import io -from typing import List, Tuple - -import numpy as np -import soundfile -from scipy.signal import resample - - -class ConnectBase64WavesException(Exception): - def __init__(self, message: str): - self.message = message - - -def decode_base64_waves(waves: List[str]) -> List[Tuple[np.ndarray, int]]: - """ - base64エンコードされた複数のwavデータをデコードする - Parameters - ---------- - waves: list[str] - base64エンコードされたwavデータのリスト - Returns - ------- - waves_nparray_sr: List[Tuple[np.ndarray, int]] - (NumPy配列の音声波形データ, サンプリングレート) 形式のタプルのリスト - """ - if len(waves) == 0: - raise ConnectBase64WavesException("wavファイルが含まれていません") - - waves_nparray_sr = [] - for wave in waves: - try: - wav_bin = base64.standard_b64decode(wave) - except ValueError: - raise ConnectBase64WavesException("base64デコードに失敗しました") - try: - _data = soundfile.read(io.BytesIO(wav_bin)) - except Exception: - raise ConnectBase64WavesException("wavファイルを読み込めませんでした") - waves_nparray_sr.append(_data) - - return waves_nparray_sr - - -def connect_base64_waves(waves: List[str]) -> Tuple[np.ndarray, int]: - waves_nparray_sr = decode_base64_waves(waves) - - max_sampling_rate = max([sr for _, sr in waves_nparray_sr]) - max_channels = max([x.ndim for x, _ in waves_nparray_sr]) - assert 0 < max_channels <= 2 - - waves_nparray_list = [] - for nparray, sr in waves_nparray_sr: - if sr != max_sampling_rate: - nparray = resample(nparray, max_sampling_rate * len(nparray) // sr) - if nparray.ndim < max_channels: - nparray = np.array([nparray, nparray]).T - waves_nparray_list.append(nparray) - - return np.concatenate(waves_nparray_list), max_sampling_rate diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/__init__.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/box_utils.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/box_utils.py deleted file mode 100644 index 1e8081b73639a7d70e4391b3d45417569550ddc6..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/box_utils.py +++ /dev/null @@ -1,238 +0,0 @@ -import numpy as np -from PIL import Image - - -def nms(boxes, overlap_threshold=0.5, mode='union'): - """Non-maximum suppression. - - Arguments: - boxes: a float numpy array of shape [n, 5], - where each row is (xmin, ymin, xmax, ymax, score). - overlap_threshold: a float number. - mode: 'union' or 'min'. 
- - Returns: - list with indices of the selected boxes - """ - - # if there are no boxes, return the empty list - if len(boxes) == 0: - return [] - - # list of picked indices - pick = [] - - # grab the coordinates of the bounding boxes - x1, y1, x2, y2, score = [boxes[:, i] for i in range(5)] - - area = (x2 - x1 + 1.0) * (y2 - y1 + 1.0) - ids = np.argsort(score) # in increasing order - - while len(ids) > 0: - - # grab index of the largest value - last = len(ids) - 1 - i = ids[last] - pick.append(i) - - # compute intersections - # of the box with the largest score - # with the rest of boxes - - # left top corner of intersection boxes - ix1 = np.maximum(x1[i], x1[ids[:last]]) - iy1 = np.maximum(y1[i], y1[ids[:last]]) - - # right bottom corner of intersection boxes - ix2 = np.minimum(x2[i], x2[ids[:last]]) - iy2 = np.minimum(y2[i], y2[ids[:last]]) - - # width and height of intersection boxes - w = np.maximum(0.0, ix2 - ix1 + 1.0) - h = np.maximum(0.0, iy2 - iy1 + 1.0) - - # intersections' areas - inter = w * h - if mode == 'min': - overlap = inter / np.minimum(area[i], area[ids[:last]]) - elif mode == 'union': - # intersection over union (IoU) - overlap = inter / (area[i] + area[ids[:last]] - inter) - - # delete all boxes where overlap is too big - ids = np.delete( - ids, - np.concatenate([[last], np.where(overlap > overlap_threshold)[0]]) - ) - - return pick - - -def convert_to_square(bboxes): - """Convert bounding boxes to a square form. - - Arguments: - bboxes: a float numpy array of shape [n, 5]. - - Returns: - a float numpy array of shape [n, 5], - squared bounding boxes. - """ - - square_bboxes = np.zeros_like(bboxes) - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - h = y2 - y1 + 1.0 - w = x2 - x1 + 1.0 - max_side = np.maximum(h, w) - square_bboxes[:, 0] = x1 + w * 0.5 - max_side * 0.5 - square_bboxes[:, 1] = y1 + h * 0.5 - max_side * 0.5 - square_bboxes[:, 2] = square_bboxes[:, 0] + max_side - 1.0 - square_bboxes[:, 3] = square_bboxes[:, 1] + max_side - 1.0 - return square_bboxes - - -def calibrate_box(bboxes, offsets): - """Transform bounding boxes to be more like true bounding boxes. - 'offsets' is one of the outputs of the nets. - - Arguments: - bboxes: a float numpy array of shape [n, 5]. - offsets: a float numpy array of shape [n, 4]. - - Returns: - a float numpy array of shape [n, 5]. - """ - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - w = x2 - x1 + 1.0 - h = y2 - y1 + 1.0 - w = np.expand_dims(w, 1) - h = np.expand_dims(h, 1) - - # this is what happening here: - # tx1, ty1, tx2, ty2 = [offsets[:, i] for i in range(4)] - # x1_true = x1 + tx1*w - # y1_true = y1 + ty1*h - # x2_true = x2 + tx2*w - # y2_true = y2 + ty2*h - # below is just more compact form of this - - # are offsets always such that - # x1 < x2 and y1 < y2 ? - - translation = np.hstack([w, h, w, h]) * offsets - bboxes[:, 0:4] = bboxes[:, 0:4] + translation - return bboxes - - -def get_image_boxes(bounding_boxes, img, size=24): - """Cut out boxes from the image. - - Arguments: - bounding_boxes: a float numpy array of shape [n, 5]. - img: an instance of PIL.Image. - size: an integer, size of cutouts. - - Returns: - a float numpy array of shape [n, 3, size, size]. 
- """ - - num_boxes = len(bounding_boxes) - width, height = img.size - - [dy, edy, dx, edx, y, ey, x, ex, w, h] = correct_bboxes(bounding_boxes, width, height) - img_boxes = np.zeros((num_boxes, 3, size, size), 'float32') - - for i in range(num_boxes): - img_box = np.zeros((h[i], w[i], 3), 'uint8') - - img_array = np.asarray(img, 'uint8') - img_box[dy[i]:(edy[i] + 1), dx[i]:(edx[i] + 1), :] = \ - img_array[y[i]:(ey[i] + 1), x[i]:(ex[i] + 1), :] - - # resize - img_box = Image.fromarray(img_box) - img_box = img_box.resize((size, size), Image.BILINEAR) - img_box = np.asarray(img_box, 'float32') - - img_boxes[i, :, :, :] = _preprocess(img_box) - - return img_boxes - - -def correct_bboxes(bboxes, width, height): - """Crop boxes that are too big and get coordinates - with respect to cutouts. - - Arguments: - bboxes: a float numpy array of shape [n, 5], - where each row is (xmin, ymin, xmax, ymax, score). - width: a float number. - height: a float number. - - Returns: - dy, dx, edy, edx: a int numpy arrays of shape [n], - coordinates of the boxes with respect to the cutouts. - y, x, ey, ex: a int numpy arrays of shape [n], - corrected ymin, xmin, ymax, xmax. - h, w: a int numpy arrays of shape [n], - just heights and widths of boxes. - - in the following order: - [dy, edy, dx, edx, y, ey, x, ex, w, h]. - """ - - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - w, h = x2 - x1 + 1.0, y2 - y1 + 1.0 - num_boxes = bboxes.shape[0] - - # 'e' stands for end - # (x, y) -> (ex, ey) - x, y, ex, ey = x1, y1, x2, y2 - - # we need to cut out a box from the image. - # (x, y, ex, ey) are corrected coordinates of the box - # in the image. - # (dx, dy, edx, edy) are coordinates of the box in the cutout - # from the image. - dx, dy = np.zeros((num_boxes,)), np.zeros((num_boxes,)) - edx, edy = w.copy() - 1.0, h.copy() - 1.0 - - # if box's bottom right corner is too far right - ind = np.where(ex > width - 1.0)[0] - edx[ind] = w[ind] + width - 2.0 - ex[ind] - ex[ind] = width - 1.0 - - # if box's bottom right corner is too low - ind = np.where(ey > height - 1.0)[0] - edy[ind] = h[ind] + height - 2.0 - ey[ind] - ey[ind] = height - 1.0 - - # if box's top left corner is too far left - ind = np.where(x < 0.0)[0] - dx[ind] = 0.0 - x[ind] - x[ind] = 0.0 - - # if box's top left corner is too high - ind = np.where(y < 0.0)[0] - dy[ind] = 0.0 - y[ind] - y[ind] = 0.0 - - return_list = [dy, edy, dx, edx, y, ey, x, ex, w, h] - return_list = [i.astype('int32') for i in return_list] - - return return_list - - -def _preprocess(img): - """Preprocessing step before feeding the network. - - Arguments: - img: a float numpy array of shape [h, w, c]. - - Returns: - a float numpy array of shape [1, c, h, w]. 
- """ - img = img.transpose((2, 0, 1)) - img = np.expand_dims(img, 0) - img = (img - 127.5) * 0.0078125 - return img diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/portaspeech/portaspeech.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/portaspeech/portaspeech.py deleted file mode 100644 index e313add6037ae87f41361633fcf6d92f39e8fd92..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/portaspeech/portaspeech.py +++ /dev/null @@ -1,230 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import Linear - -from modules.commons.conv import ConvBlocks, ConditionalConvBlocks -from modules.commons.common_layers import Embedding -from modules.commons.rel_transformer import RelTransformerEncoder -from modules.commons.transformer import MultiheadAttention, FFTBlocks -from modules.commons.align_ops import clip_mel2token_to_multiple, build_word_mask, expand_states, mel2ph_to_mel2word -from modules.portaspeech.fs import FS_DECODERS, FastSpeech -from modules.portaspeech.fvae import FVAE -from utils.tts_utils import group_hidden_by_segs -from utils.hparams import hparams - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - """ - - :param x: [B, T] - :return: [B, T, H] - """ - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, :, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class PortaSpeech(FastSpeech): - def __init__(self, ph_dictionary, word_dictionary, out_dims=None): - super().__init__(ph_dictionary, out_dims) - # build linguistic encoder - if hparams['use_word_encoder']: - # default False, use independent word embedding instead of phoneme encoding to represent word - self.word_encoder = RelTransformerEncoder( - len(word_dictionary), self.hidden_size, self.hidden_size, self.hidden_size, 2, - hparams['word_enc_layers'], hparams['enc_ffn_kernel_size']) - if hparams['dur_level'] == 'word': - if hparams['word_encoder_type'] == 'rel_fft': - self.ph2word_encoder = RelTransformerEncoder( - 0, self.hidden_size, self.hidden_size, self.hidden_size, 2, - hparams['word_enc_layers'], hparams['enc_ffn_kernel_size']) - if hparams['word_encoder_type'] == 'fft': - self.ph2word_encoder = FFTBlocks( - self.hidden_size, hparams['word_enc_layers'], 1, num_heads=hparams['num_heads']) - self.sin_pos = SinusoidalPosEmb(self.hidden_size) - self.enc_pos_proj = nn.Linear(2 * self.hidden_size, self.hidden_size) - self.dec_query_proj = nn.Linear(2 * self.hidden_size, self.hidden_size) - self.dec_res_proj = nn.Linear(2 * self.hidden_size, self.hidden_size) - self.attn = MultiheadAttention(self.hidden_size, 1, encoder_decoder_attention=True, bias=False) - self.attn.enable_torch_version = False - if hparams['text_encoder_postnet']: - self.text_encoder_postnet = ConvBlocks( - self.hidden_size, self.hidden_size, [1] * 3, 5, layers_in_block=2) - else: - self.sin_pos = SinusoidalPosEmb(self.hidden_size) - # build VAE decoder - if hparams['use_fvae']: - del self.decoder - del self.mel_out - self.fvae = FVAE( - 
c_in_out=self.out_dims, - hidden_size=hparams['fvae_enc_dec_hidden'], c_latent=hparams['latent_size'], - kernel_size=hparams['fvae_kernel_size'], - enc_n_layers=hparams['fvae_enc_n_layers'], - dec_n_layers=hparams['fvae_dec_n_layers'], - c_cond=self.hidden_size, - use_prior_flow=hparams['use_prior_flow'], - flow_hidden=hparams['prior_flow_hidden'], - flow_kernel_size=hparams['prior_flow_kernel_size'], - flow_n_steps=hparams['prior_flow_n_blocks'], - strides=[hparams['fvae_strides']], - encoder_type=hparams['fvae_encoder_type'], - decoder_type=hparams['fvae_decoder_type'], - ) - else: - self.decoder = FS_DECODERS[hparams['decoder_type']](hparams) - self.mel_out = Linear(self.hidden_size, self.out_dims, bias=True) - if hparams['use_pitch_embed']: - self.pitch_embed = Embedding(300, self.hidden_size, 0) - if hparams['add_word_pos']: - self.word_pos_proj = Linear(self.hidden_size, self.hidden_size) - - def build_embedding(self, dictionary, embed_dim): - num_embeddings = len(dictionary) - emb = Embedding(num_embeddings, embed_dim, self.padding_idx) - return emb - - def forward(self, txt_tokens, word_tokens, ph2word, word_len, mel2word=None, mel2ph=None, - spk_embed=None, spk_id=None, pitch=None, infer=False, tgt_mels=None, - global_step=None, *args, **kwargs): - ret = {} - style_embed = self.forward_style_embed(spk_embed, spk_id) - x, tgt_nonpadding = self.run_text_encoder( - txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, style_embed, ret, **kwargs) - x = x * tgt_nonpadding - ret['nonpadding'] = tgt_nonpadding - if hparams['use_pitch_embed']: - x = x + self.pitch_embed(pitch) - ret['decoder_inp'] = x - ret['mel_out_fvae'] = ret['mel_out'] = self.run_decoder(x, tgt_nonpadding, ret, infer, tgt_mels, global_step) - return ret - - def run_text_encoder(self, txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, style_embed, ret, **kwargs): - word2word = torch.arange(word_len)[None, :].to(ph2word.device) + 1 # [B, T_mel, T_word] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - use_bert = hparams.get("use_bert") is True - if use_bert: - ph_encoder_out = self.ph_encoder(txt_tokens, bert_feats=kwargs['bert_feats'], ph2word=ph2word, - graph_lst=kwargs['graph_lst'], etypes_lst=kwargs['etypes_lst'], - cl_feats=kwargs['cl_feats'], ret=ret) * src_nonpadding + style_embed - else: - ph_encoder_out = self.ph_encoder(txt_tokens) * src_nonpadding + style_embed - if hparams['use_word_encoder']: - word_encoder_out = self.word_encoder(word_tokens) + style_embed - ph_encoder_out = ph_encoder_out + expand_states(word_encoder_out, ph2word) - if hparams['dur_level'] == 'word': - word_encoder_out = 0 - h_ph_gb_word = group_hidden_by_segs(ph_encoder_out, ph2word, word_len)[0] - word_encoder_out = word_encoder_out + self.ph2word_encoder(h_ph_gb_word) - if hparams['use_word_encoder']: - word_encoder_out = word_encoder_out + self.word_encoder(word_tokens) - mel2word = self.forward_dur(ph_encoder_out, mel2word, ret, ph2word=ph2word, word_len=word_len) - mel2word = clip_mel2token_to_multiple(mel2word, hparams['frames_multiple']) - tgt_nonpadding = (mel2word > 0).float()[:, :, None] - enc_pos = self.get_pos_embed(word2word, ph2word) # [B, T_ph, H] - dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H] - dec_word_mask = build_word_mask(mel2word, ph2word) # [B, T_mel, T_ph] - x, weight = self.attention(ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask) - if hparams['add_word_pos']: - x = x + self.word_pos_proj(dec_pos) - ret['attn'] = weight - else: - mel2ph = 
self.forward_dur(ph_encoder_out, mel2ph, ret) - mel2ph = clip_mel2token_to_multiple(mel2ph, hparams['frames_multiple']) - mel2word = mel2ph_to_mel2word(mel2ph, ph2word) - x = expand_states(ph_encoder_out, mel2ph) - if hparams['add_word_pos']: - dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H] - x = x + self.word_pos_proj(dec_pos) - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - if hparams['use_word_encoder']: - x = x + expand_states(word_encoder_out, mel2word) - return x, tgt_nonpadding - - def attention(self, ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask): - ph_kv = self.enc_pos_proj(torch.cat([ph_encoder_out, enc_pos], -1)) - word_enc_out_expend = expand_states(word_encoder_out, mel2word) - word_enc_out_expend = torch.cat([word_enc_out_expend, dec_pos], -1) - if hparams['text_encoder_postnet']: - word_enc_out_expend = self.dec_res_proj(word_enc_out_expend) - word_enc_out_expend = self.text_encoder_postnet(word_enc_out_expend) - dec_q = x_res = word_enc_out_expend - else: - dec_q = self.dec_query_proj(word_enc_out_expend) - x_res = self.dec_res_proj(word_enc_out_expend) - ph_kv, dec_q = ph_kv.transpose(0, 1), dec_q.transpose(0, 1) - x, (weight, _) = self.attn(dec_q, ph_kv, ph_kv, attn_mask=(1 - dec_word_mask) * -1e9) - x = x.transpose(0, 1) - x = x + x_res - return x, weight - - def run_decoder(self, x, tgt_nonpadding, ret, infer, tgt_mels=None, global_step=0): - if not hparams['use_fvae']: - x = self.decoder(x) - x = self.mel_out(x) - ret['kl'] = 0 - return x * tgt_nonpadding - else: - decoder_inp = x - x = x.transpose(1, 2) # [B, H, T] - tgt_nonpadding_BHT = tgt_nonpadding.transpose(1, 2) # [B, H, T] - if infer: - z = self.fvae(cond=x, infer=True) - else: - tgt_mels = tgt_mels.transpose(1, 2) # [B, 80, T] - z, ret['kl'], ret['z_p'], ret['m_q'], ret['logs_q'] = self.fvae( - tgt_mels, tgt_nonpadding_BHT, cond=x) - if global_step < hparams['posterior_start_steps']: - z = torch.randn_like(z) - x_recon = self.fvae.decoder(z, nonpadding=tgt_nonpadding_BHT, cond=x).transpose(1, 2) - ret['pre_mel_out'] = x_recon - return x_recon - - def forward_dur(self, dur_input, mel2word, ret, **kwargs): - """ - - :param dur_input: [B, T_txt, H] - :param mel2ph: [B, T_mel] - :param txt_tokens: [B, T_txt] - :param ret: - :return: - """ - src_padding = dur_input.data.abs().sum(-1) == 0 - dur_input = dur_input.detach() + hparams['predictor_grad'] * (dur_input - dur_input.detach()) - dur = self.dur_predictor(dur_input, src_padding) - if hparams['dur_level'] == 'word': - word_len = kwargs['word_len'] - ph2word = kwargs['ph2word'] - B, T_ph = ph2word.shape - dur = torch.zeros([B, word_len.max() + 1]).to(ph2word.device).scatter_add(1, ph2word, dur) - dur = dur[:, 1:] - ret['dur'] = dur - if mel2word is None: - mel2word = self.length_regulator(dur).detach() - return mel2word - - def get_pos_embed(self, word2word, x2word): - x_pos = build_word_mask(word2word, x2word).float() # [B, T_word, T_ph] - x_pos = (x_pos.cumsum(-1) / x_pos.sum(-1).clamp(min=1)[..., None] * x_pos).sum(1) - x_pos = self.sin_pos(x_pos.float()) # [B, T_ph, H] - return x_pos - - def store_inverse_all(self): - def remove_weight_norm(m): - try: - if hasattr(m, 'store_inverse'): - m.store_inverse() - nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(remove_weight_norm) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/version.py 
b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/version.py deleted file mode 100644 index fc79d63d5430b972ac6ec1c4bfea9af80922da4d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.2.1' diff --git a/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/app.py b/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/app.py deleted file mode 100644 index 6e86abff95351769056a696503ff05e34c7117c9..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain/app.py +++ /dev/null @@ -1,442 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests -import time -import re -import textract - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -from dotenv import load_dotenv -from PyPDF2 import PdfReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chat_models import ChatOpenAI -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from templates import css, bot_template, user_template - - - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") # Date and time DD-HHMM - safe_prompt = "".join(x for x in prompt if x.isalnum())[:90] # Limit file name size and trim whitespace - return f"{safe_date_time}_{safe_prompt}.{file_type}" # Return a safe file name - - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - chatResponse = chat_with_model(response.json().get('text'), '') # ************************************* - transcript = response.json().get('text') - #st.write('Responses:') - #st.write(chatResponse) - filename = generate_filename(transcript, 'txt') - create_file(filename, transcript, chatResponse) - return transcript - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] 
- -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - try: - data = file.read() - except: - st.write('') - return file_path - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - - start_time = time.time() - report = [] - res_box = st.empty() - collected_chunks = [] - collected_messages = [] - - for chunk in openai.ChatCompletion.create( - model='gpt-3.5-turbo', - messages=conversation, - temperature=0.5, - stream=True - ): - - collected_chunks.append(chunk) # save the event response - chunk_message = chunk['choices'][0]['delta'] # extract the message - collected_messages.append(chunk_message) # save the message - - content=chunk["choices"][0].get("delta",{}).get("content") - - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - #result = result.replace("\n", "") - res_box.markdown(f'*{result}*') - except: - st.write(' ') - - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - -def extract_mime_type(file): - # Check if the input is a string - if isinstance(file, str): - pattern = r"type='(.*?)'" - match = re.search(pattern, file) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to 
extract MIME type from {file}") - # If it's not a string, assume it's a streamlit.UploadedFile object - elif isinstance(file, streamlit.UploadedFile): - return file.type - else: - raise TypeError("Input should be a string or a streamlit.UploadedFile object") - -from io import BytesIO -import re - -def extract_file_extension(file): - # get the file name directly from the UploadedFile object - file_name = file.name - pattern = r".*?\.(.*?)$" - match = re.search(pattern, file_name) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract file extension from {file_name}") - -def pdf2txt(docs): - text = "" - for file in docs: - file_extension = extract_file_extension(file) - # print the file extension - st.write(f"File type extension: {file_extension}") - - # read the file according to its extension - try: - if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']: - text += file.getvalue().decode('utf-8') - elif file_extension.lower() == 'pdf': - from PyPDF2 import PdfReader - pdf = PdfReader(BytesIO(file.getvalue())) - for page in range(len(pdf.pages)): - text += pdf.pages[page].extract_text() # new PyPDF2 syntax - except Exception as e: - st.write(f"Error processing file {file.name}: {e}") - - return text - -def pdf2txt_old(pdf_docs): - st.write(pdf_docs) - for file in pdf_docs: - mime_type = extract_mime_type(file) - st.write(f"MIME type of file: {mime_type}") - - text = "" - for pdf in pdf_docs: - pdf_reader = PdfReader(pdf) - for page in pdf_reader.pages: - text += page.extract_text() - return text - -def txt2chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len) - return text_splitter.split_text(text) - -def vector_store(text_chunks): - key = os.getenv('OPENAI_API_KEY') - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -def get_chain(vectorstore): - llm = ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - # Save file output from PDF query results - filename = generate_filename(user_question, 'txt') - create_file(filename, user_question, message.content) - - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -def divide_prompt(prompt, max_length): - words = prompt.split() - chunks = [] - current_chunk = [] - current_length = 0 - for word in words: - if len(word) + current_length <= max_length: - current_length += len(word) + 1 # Adding 1 to account for spaces - current_chunk.append(word) - else: - chunks.append(' '.join(current_chunk)) - current_chunk = [word] - current_length = len(word) - chunks.append(' '.join(current_chunk)) # Append the final chunk - return chunks - -def main(): - # Sidebar and global - openai.api_key = os.getenv('OPENAI_API_KEY') - st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide") - - # File type for output, model choice - menu = ["txt", "htm", "xlsx", "csv", "md", "py"] #619 - choice = 
st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - filename=None # since transcription is finished next time just use the saved transcript - - # prompt interfaces - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - # file section interface for prompts against large documents as context - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx","csv","html", "htm", "md", "txt"]) - - # Document section chat - document_sections = deque() - document_responses = {} - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - if len(document_sections) > 0: - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - - #response = chat_with_model(user_prompt, ''.join(list(document_sections,)), model_choice) # ************************************* - - # Divide the user_prompt into smaller sections - user_prompt_sections = divide_prompt(user_prompt, max_length) - full_response = '' - for prompt_section in user_prompt_sections: - # Process each section with the model - response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice) - full_response += response + '\n' # Combine the responses - - #st.write('Response:') - #st.write(full_response) - - response = full_response - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if 
st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, file_contents, model_choice) - filename = generate_filename(file_contents, choice) - create_file(filename, file_contents, response) - - st.experimental_rerun() - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -if __name__ == "__main__": - main() - -load_dotenv() -st.write(css, unsafe_allow_html=True) - -st.header("Chat with documents :books:") -user_question = st.text_input("Ask a question about your documents:") -if user_question: - process_user_input(user_question) - -with st.sidebar: - st.subheader("Your documents") - docs = st.file_uploader("import documents", accept_multiple_files=True) - with st.spinner("Processing"): - raw = pdf2txt(docs) - if len(raw) > 0: - length = str(len(raw)) - text_chunks = txt2chunks(raw) - vectorstore = vector_store(text_chunks) - st.session_state.conversation = get_chain(vectorstore) - st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing - filename = generate_filename(raw, 'txt') - create_file(filename, raw, '') \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/_base_/default_runtime.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/_base_/default_runtime.py deleted file mode 100644 index 561d574fa757fa295f349394bf57047a2d8b576d..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/_base_/default_runtime.py +++ /dev/null @@ -1,49 +0,0 @@ -default_scope = 'mmpose' - -# hooks -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=50), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict(type='CheckpointHook', interval=10), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='PoseVisualizationHook', enable=False), -) - -# custom hooks -custom_hooks = [ - # Synchronize model buffers such as running_mean and running_var in BN - # at the end of each epoch - dict(type='SyncBuffersHook') -] - -# multi-processing backend -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl'), -) - -# visualizer -vis_backends = [ - dict(type='LocalVisBackend'), - # dict(type='TensorboardVisBackend'), - # dict(type='WandbVisBackend'), -] -visualizer = dict( - type='PoseLocalVisualizer', vis_backends=vis_backends, name='visualizer') - -# logger -log_processor = dict( - 
type='LogProcessor', window_size=50, by_epoch=True, num_digits=6) -log_level = 'INFO' -load_from = None -resume = False - -# file I/O backend -backend_args = dict(backend='local') - -# training/validation/testing progress -train_cfg = dict(by_epoch=True) -val_cfg = dict() -test_cfg = dict() diff --git a/spaces/Aditya9790/yolo7-object-tracking/app.py b/spaces/Aditya9790/yolo7-object-tracking/app.py deleted file mode 100644 index d621ffdb8407864cf8c0e74c866737f580264e56..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/app.py +++ /dev/null @@ -1,293 +0,0 @@ -import gradio as gr -import os - -import argparse -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \ - scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel -from PIL import Image - -from sort import * - -from huggingface_hub import hf_hub_download - -def load_model(model_name): - model_path = hf_hub_download(repo_id=f"Yolov7/{model_name}", filename=f"{model_name}.pt") - - return model_path - - -model_names = ["yolov7"] - -models = {model_name: load_model(model_name) for model_name in model_names} - -################################## -# """Function to Draw Bounding boxes""" -def draw_boxes(img, bbox, identities=None, categories=None, confidences = None, names=None, colors = None): - for i, box in enumerate(bbox): - x1, y1, x2, y2 = [int(i) for i in box] - tl = opt.thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - - cat = int(categories[i]) if categories is not None else 0 - id = int(identities[i]) if identities is not None else 0 - # conf = confidences[i] if confidences is not None else 0 - - color = colors[cat] - - if not opt.nobbox: - cv2.rectangle(img, (x1, y1), (x2, y2), color, tl) - - if not opt.nolabel: - label = str(id) + ":"+ names[cat] if identities is not None else f'{names[cat]} {confidences[i]:.2f}' - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = x1 + t_size[0], y1 - t_size[1] - 3 - cv2.rectangle(img, (x1, y1), c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - - return img -################################## - - -def detect(save_img=True): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)') - parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default='runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--no-trace', action='store_true', help='don`t trace model') - - parser.add_argument('--track', action='store_true', help='run tracking') - parser.add_argument('--show-track', action='store_true', help='show tracked path') - parser.add_argument('--show-fps', action='store_true', help='show fps') - parser.add_argument('--thickness', type=int, default=2, help='bounding box and font size thickness') - parser.add_argument('--seed', type=int, default=1, help='random seed to control bbox colors') - parser.add_argument('--nobbox', action='store_true', help='don`t show bounding box') - parser.add_argument('--nolabel', action='store_true', help='don`t show label') - parser.add_argument('--unique-track-color', action='store_true', help='show each track in unique color') - - opt = parser.parse_args() - np.random.seed(opt.seed) - - sort_tracker = Sort(max_age=5, - min_hits=2, - iou_threshold=0.2) - - source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace - save_img = not opt.nosave and not source.endswith('.txt') # save inference images - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith( - ('rtsp://', 'rtmp://', 'http://', 'https://')) - save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run - if not opt.nosave: - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - imgsz = check_img_size(imgsz, s=stride) # check img_size - - if trace: - model = TracedModel(model, device, opt.img_size) - - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - - # Get names and colors - names = 
model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - - # Run inference - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - old_img_w = old_img_h = imgsz - old_img_b = 1 - - t0 = time.time() - ################################### - startTime = 0 - ################################### - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Warmup - if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]): - old_img_b = img.shape[0] - old_img_h = img.shape[2] - old_img_w = img.shape[3] - for i in range(3): - model(img, augment=opt.augment)[0] - - # Inference - t1 = time_synchronized() - pred = model(img, augment=opt.augment)[0] - t2 = time_synchronized() - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t3 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - dets_to_sort = np.empty((0,6)) - # NOTE: We send in detected object class too - for x1,y1,x2,y2,conf,detclass in det.cpu().detach().numpy(): - dets_to_sort = np.vstack((dets_to_sort, - np.array([x1, y1, x2, y2, conf, detclass]))) - - - if opt.track: - - tracked_dets = sort_tracker.update(dets_to_sort, opt.unique_track_color) - tracks =sort_tracker.getTrackers() - - # draw boxes for visualization - if len(tracked_dets)>0: - bbox_xyxy = tracked_dets[:,:4] - identities = tracked_dets[:, 8] - categories = tracked_dets[:, 4] - confidences = None - - if opt.show_track: - #loop over tracks - for t, track in enumerate(tracks): - - track_color = colors[int(track.detclass)] if not opt.unique_track_color else sort_tracker.color_list[t] - - [cv2.line(im0, (int(track.centroidarr[i][0]), - int(track.centroidarr[i][1])), - (int(track.centroidarr[i+1][0]), - int(track.centroidarr[i+1][1])), - track_color, thickness=opt.thickness) - for i,_ in enumerate(track.centroidarr) - if i < len(track.centroidarr)-1 ] - else: - bbox_xyxy = dets_to_sort[:,:4] - identities = None - categories = dets_to_sort[:, 5] - confidences = dets_to_sort[:, 4] - - im0 = draw_boxes(im0, bbox_xyxy, identities, categories, confidences, names, colors) - - # Print time (inference + NMS) - print(f'{s}Done. 
({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') - - # Stream results - ###################################################### - if dataset.mode != 'image' and opt.show_fps: - currentTime = time.time() - - fps = 1/(currentTime - startTime) - startTime = currentTime - cv2.putText(im0, "FPS: " + str(int(fps)), (20, 70), cv2.FONT_HERSHEY_PLAIN, 2, (0,255,0),2) - - ####################################################### - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - print(f" The image with the result is saved in: {save_path}") - else: # 'video' or 'stream' - if vid_path != save_path: # new video - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - #print(f"Results saved to {save_dir}{s}") - - print(f'Done. ({time.time() - t0:.3f}s)') - return img - - - -desc = "demo for WongKinYiu/yolov7 Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors" -gr.Interface(detect, - inputs = [gr.Video(format="mp4")], - outputs = gr.Video(format="mp4"), - title="Yolov7",description=desc).launch() -# gr.Interface(detect,[gr.Image(type="pil"),gr.Dropdown(choices=model_names)], gr.Image(type="pil"),title="Yolov7",examples=[["horses.jpeg", "yolov7"]],description="demo for WongKinYiu/yolov7 Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors").launch() \ No newline at end of file diff --git a/spaces/Aer0xander/sd-to-diffusers/utils.py b/spaces/Aer0xander/sd-to-diffusers/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Aer0xander/sd-to-diffusers/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/conv2d_gradfix.py b/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/conv2d_gradfix.py deleted file mode 100644 index 563543d23df5ae0432461a2c637aec71a4bee9ca..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/conv2d_gradfix.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Custom replacement for `torch.nn.functional.conv2d` that supports -arbitrarily high order gradients with zero performance penalty.""" - -import contextlib -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -# ---------------------------------------------------------------------------- - -# Enable the custom op by setting this to true. -enabled = False -# Forcefully disable computation of gradients with respect to the weights. -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(disable=True): - global weight_gradients_disabled - old = weight_gradients_disabled - if disable: - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -# ---------------------------------------------------------------------------- - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -# ---------------------------------------------------------------------------- - - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - return True - - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -# ---------------------------------------------------------------------------- - - -_conv2d_gradfix_cache = dict() -_null_tensor = torch.empty([0]) - - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, - output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max( - stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. 
- common_kwargs = dict(stride=stride, padding=padding, - dilation=dilation, groups=groups) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. - class Conv2d(torch.autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - ctx.save_for_backward( - input if weight.requires_grad else _null_tensor, - weight if input.requires_grad else _null_tensor, - ) - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (only on Volta, not on Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0) and torch.cuda.get_device_capability(input.device) < (8, 0): - a = weight.reshape( - groups, weight_shape[0] // groups, weight_shape[1]) - b = input.reshape( - input.shape[0], groups, input.shape[1] // groups, -1) - c = (a.transpose(1, 2) if transpose else a) @ b.permute(1, - 2, 0, 3).flatten(2) - c = c.reshape(-1, input.shape[0], - *input.shape[2:]).transpose(0, 1) - c = c if bias is None else c + \ - bias.unsqueeze(0).unsqueeze(2).unsqueeze(3) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. - if transpose: - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - input_shape = ctx.input_shape - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input_shape, output_shape=grad_output.shape) - op = _conv2d_gradfix(transpose=( - not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad_input = op.apply(grad_output, weight, None) - assert grad_input.shape == input_shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - ctx.save_for_backward( - grad_output if input.requires_grad else _null_tensor, - input if grad_output.requires_grad else _null_tensor, - ) - ctx.grad_output_shape = grad_output.shape - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (on both Volta and Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0): - a = grad_output.reshape( - grad_output.shape[0], groups, grad_output.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - b = input.reshape( - input.shape[0], groups, input.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - c = (b @ a.transpose(1, 2) if transpose else a @ - b.transpose(1, 2)).reshape(weight_shape) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. 
- name = 'aten::cudnn_convolution_transpose_backward_weight' if transpose else 'aten::cudnn_convolution_backward_weight' - flags = [torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - return torch._C._jit_get_operation(name)(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - - @staticmethod - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad_output_shape = ctx.grad_output_shape - input_shape = ctx.input_shape - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply( - input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output_shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input_shape, output_shape=grad_output_shape) - op = _conv2d_gradfix(transpose=( - not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad2_input = op.apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input_shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/DragGan-Inversion/viz/drag_widget.py b/spaces/Amrrs/DragGan-Inversion/viz/drag_widget.py deleted file mode 100644 index 348ab36de2daff2eff97589204b00d391b3a1e7e..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/viz/drag_widget.py +++ /dev/null @@ -1,173 +0,0 @@ -import os -import torch -import numpy as np -import imgui -import dnnlib -from gui_utils import imgui_utils - -# ---------------------------------------------------------------------------- - - -class DragWidget: - def __init__(self, viz): - self.viz = viz - self.point = [-1, -1] - self.points = [] - self.targets = [] - self.is_point = True - self.last_click = False - self.is_drag = False - self.iteration = 0 - self.mode = 'point' - self.r_mask = 50 - self.show_mask = False - self.mask = torch.ones(256, 256) - self.lambda_mask = 20 - self.feature_idx = 5 - self.r1 = 3 - self.r2 = 12 - self.path = os.path.abspath(os.path.join( - os.path.dirname(__file__), '..', '_screenshots')) - self.defer_frames = 0 - self.disabled_time = 0 - - def action(self, click, down, x, y): - if self.mode == 'point': - self.add_point(click, x, y) - elif down: - self.draw_mask(x, y) - - def add_point(self, click, x, y): - if click: - self.point = [y, x] - elif self.last_click: - if self.is_drag: - self.stop_drag() - if self.is_point: - self.points.append(self.point) - self.is_point = False - else: - self.targets.append(self.point) - self.is_point = True - self.last_click = click - - def init_mask(self, w, h): - self.width, self.height = w, h - self.mask = torch.ones(h, w) - - def draw_mask(self, x, y): - X = torch.linspace(0, self.width, self.width) - Y = torch.linspace(0, self.height, self.height) - yy, xx = torch.meshgrid(Y, X) - circle = (xx - x)**2 + (yy - y)**2 < self.r_mask**2 - if self.mode == 'flexible': - self.mask[circle] = 0 - elif self.mode == 'fixed': - self.mask[circle] = 1 - - def stop_drag(self): - self.is_drag = False - self.iteration = 0 - - def set_points(self, points): - self.points = points - - def reset_point(self): - self.points = [] - self.targets = [] - self.is_point = True - - def load_points(self, suffix): - points = [] - point_path = self.path + f'_{suffix}.txt' - try: - with open(point_path, "r") as f: - for line in f.readlines(): - y, x = 
line.split() - points.append([int(y), int(x)]) - except: - print(f'Wrong point file path: {point_path}') - return points - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - reset = False - if show: - with imgui_utils.grayed_out(self.disabled_time != 0): - imgui.text('Drag') - imgui.same_line(viz.label_w) - - if imgui_utils.button('Add point', width=viz.button_w, enabled='image' in viz.result): - self.mode = 'point' - - imgui.same_line() - reset = False - if imgui_utils.button('Reset point', width=viz.button_w, enabled='image' in viz.result): - self.reset_point() - reset = True - - imgui.text(' ') - imgui.same_line(viz.label_w) - if imgui_utils.button('Start', width=viz.button_w, enabled='image' in viz.result): - self.is_drag = True - if len(self.points) > len(self.targets): - self.points = self.points[:len(self.targets)] - - imgui.same_line() - if imgui_utils.button('Stop', width=viz.button_w, enabled='image' in viz.result): - self.stop_drag() - - imgui.text(' ') - imgui.same_line(viz.label_w) - imgui.text(f'Steps: {self.iteration}') - - imgui.text('Mask') - imgui.same_line(viz.label_w) - if imgui_utils.button('Flexible area', width=viz.button_w, enabled='image' in viz.result): - self.mode = 'flexible' - self.show_mask = True - - imgui.same_line() - if imgui_utils.button('Fixed area', width=viz.button_w, enabled='image' in viz.result): - self.mode = 'fixed' - self.show_mask = True - - imgui.text(' ') - imgui.same_line(viz.label_w) - if imgui_utils.button('Reset mask', width=viz.button_w, enabled='image' in viz.result): - self.mask = torch.ones(self.height, self.width) - imgui.same_line() - _clicked, self.show_mask = imgui.checkbox( - 'Show mask', self.show_mask) - - imgui.text(' ') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 6): - changed, self.r_mask = imgui.input_int( - 'Radius', self.r_mask) - - imgui.text(' ') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 6): - changed, self.lambda_mask = imgui.input_int( - 'Lambda', self.lambda_mask) - - self.disabled_time = max(self.disabled_time - viz.frame_delta, 0) - if self.defer_frames > 0: - self.defer_frames -= 1 - viz.args.is_drag = self.is_drag - if self.is_drag: - self.iteration += 1 - viz.args.iteration = self.iteration - viz.args.points = [point for point in self.points] - viz.args.targets = [point for point in self.targets] - viz.args.mask = self.mask - viz.args.lambda_mask = self.lambda_mask - viz.args.feature_idx = self.feature_idx - viz.args.r1 = self.r1 - viz.args.r2 = self.r2 - viz.args.reset = reset - - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/portfolio/index.html b/spaces/Amrrs/portfolio/index.html deleted file mode 100644 index ff8968021c46781010dc0b1de7ead4803a35898a..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/portfolio/index.html +++ /dev/null @@ -1,107 +0,0 @@ - - - - Welcome to 1littlecoder - - - - - -
-About Me
-Hey! I'm 1littlecoder from India.. I Like Coding R Python Data Science Machine Learning
-~ 1littlecoder
-My Works
-Here Are Some Of My Works
-Telegram Channel
-Github Account
-My Website
-Resources I Use
-Github
-Telegram
-VS Code Editor
-Python
-PHP
-Ubuntu
-My Skills
-Follow Me
-Instagram
-Twitter
-GitHub
-Telegram
-YouTube
-Email
-Contact Us
-Made with ❤️ By - 1littlecoder
- - - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/deis.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/deis.md deleted file mode 100644 index 9ab8418210983d4920c677de1aa4a865ab2bfca8..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/deis.md +++ /dev/null @@ -1,22 +0,0 @@ - - -# DEIS - -Fast Sampling of Diffusion Models with Exponential Integrator. - -## Overview - -Original paper can be found [here](https://arxiv.org/abs/2204.13902). The original implementation can be found [here](https://github.com/qsh-zh/deis). - -## DEISMultistepScheduler -[[autodoc]] DEISMultistepScheduler diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler.md deleted file mode 100644 index f107623363bf49763fc0552bbccd70f7529592f7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler.md +++ /dev/null @@ -1,21 +0,0 @@ - - -# Euler scheduler - -## Overview - -Euler scheduler (Algorithm 2) from the paper [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Karras et al. (2022). Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by Katherine Crowson. -Fast scheduler which often times generates good outputs with 20-30 steps. - -## EulerDiscreteScheduler -[[autodoc]] EulerDiscreteScheduler \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_flax.py deleted file mode 100644 index 0b160d2384311c1fb426b87c11e5fa1572584070..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_flax.py +++ /dev/null @@ -1,446 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import functools -import math - -import flax.linen as nn -import jax -import jax.numpy as jnp - - -def _query_chunk_attention(query, key, value, precision, key_chunk_size: int = 4096): - """Multi-head dot product attention with a limited number of queries.""" - num_kv, num_heads, k_features = key.shape[-3:] - v_features = value.shape[-1] - key_chunk_size = min(key_chunk_size, num_kv) - query = query / jnp.sqrt(k_features) - - @functools.partial(jax.checkpoint, prevent_cse=False) - def summarize_chunk(query, key, value): - attn_weights = jnp.einsum("...qhd,...khd->...qhk", query, key, precision=precision) - - max_score = jnp.max(attn_weights, axis=-1, keepdims=True) - max_score = jax.lax.stop_gradient(max_score) - exp_weights = jnp.exp(attn_weights - max_score) - - exp_values = jnp.einsum("...vhf,...qhv->...qhf", value, exp_weights, precision=precision) - max_score = jnp.einsum("...qhk->...qh", max_score) - - return (exp_values, exp_weights.sum(axis=-1), max_score) - - def chunk_scanner(chunk_idx): - # julienne key array - key_chunk = jax.lax.dynamic_slice( - operand=key, - start_indices=[0] * (key.ndim - 3) + [chunk_idx, 0, 0], # [...,k,h,d] - slice_sizes=list(key.shape[:-3]) + [key_chunk_size, num_heads, k_features], # [...,k,h,d] - ) - - # julienne value array - value_chunk = jax.lax.dynamic_slice( - operand=value, - start_indices=[0] * (value.ndim - 3) + [chunk_idx, 0, 0], # [...,v,h,d] - slice_sizes=list(value.shape[:-3]) + [key_chunk_size, num_heads, v_features], # [...,v,h,d] - ) - - return summarize_chunk(query, key_chunk, value_chunk) - - chunk_values, chunk_weights, chunk_max = jax.lax.map(f=chunk_scanner, xs=jnp.arange(0, num_kv, key_chunk_size)) - - global_max = jnp.max(chunk_max, axis=0, keepdims=True) - max_diffs = jnp.exp(chunk_max - global_max) - - chunk_values *= jnp.expand_dims(max_diffs, axis=-1) - chunk_weights *= max_diffs - - all_values = chunk_values.sum(axis=0) - all_weights = jnp.expand_dims(chunk_weights, -1).sum(axis=0) - - return all_values / all_weights - - -def jax_memory_efficient_attention( - query, key, value, precision=jax.lax.Precision.HIGHEST, query_chunk_size: int = 1024, key_chunk_size: int = 4096 -): - r""" - Flax Memory-efficient multi-head dot product attention. 
https://arxiv.org/abs/2112.05682v2 - https://github.com/AminRezaei0x443/memory-efficient-attention - - Args: - query (`jnp.ndarray`): (batch..., query_length, head, query_key_depth_per_head) - key (`jnp.ndarray`): (batch..., key_value_length, head, query_key_depth_per_head) - value (`jnp.ndarray`): (batch..., key_value_length, head, value_depth_per_head) - precision (`jax.lax.Precision`, *optional*, defaults to `jax.lax.Precision.HIGHEST`): - numerical precision for computation - query_chunk_size (`int`, *optional*, defaults to 1024): - chunk size to divide query array value must divide query_length equally without remainder - key_chunk_size (`int`, *optional*, defaults to 4096): - chunk size to divide key and value array value must divide key_value_length equally without remainder - - Returns: - (`jnp.ndarray`) with shape of (batch..., query_length, head, value_depth_per_head) - """ - num_q, num_heads, q_features = query.shape[-3:] - - def chunk_scanner(chunk_idx, _): - # julienne query array - query_chunk = jax.lax.dynamic_slice( - operand=query, - start_indices=([0] * (query.ndim - 3)) + [chunk_idx, 0, 0], # [...,q,h,d] - slice_sizes=list(query.shape[:-3]) + [min(query_chunk_size, num_q), num_heads, q_features], # [...,q,h,d] - ) - - return ( - chunk_idx + query_chunk_size, # unused ignore it - _query_chunk_attention( - query=query_chunk, key=key, value=value, precision=precision, key_chunk_size=key_chunk_size - ), - ) - - _, res = jax.lax.scan( - f=chunk_scanner, init=0, xs=None, length=math.ceil(num_q / query_chunk_size) # start counter # stop counter - ) - - return jnp.concatenate(res, axis=-3) # fuse the chunked result back - - -class FlaxAttention(nn.Module): - r""" - A Flax multi-head attention module as described in: https://arxiv.org/abs/1706.03762 - - Parameters: - query_dim (:obj:`int`): - Input hidden states dimension - heads (:obj:`int`, *optional*, defaults to 8): - Number of heads - dim_head (:obj:`int`, *optional*, defaults to 64): - Hidden states dimension inside each head - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): - enable memory efficient attention https://arxiv.org/abs/2112.05682 - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - - """ - query_dim: int - heads: int = 8 - dim_head: int = 64 - dropout: float = 0.0 - use_memory_efficient_attention: bool = False - dtype: jnp.dtype = jnp.float32 - - def setup(self): - inner_dim = self.dim_head * self.heads - self.scale = self.dim_head**-0.5 - - # Weights were exported with old names {to_q, to_k, to_v, to_out} - self.query = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_q") - self.key = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_k") - self.value = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_v") - - self.proj_attn = nn.Dense(self.query_dim, dtype=self.dtype, name="to_out_0") - self.dropout_layer = nn.Dropout(rate=self.dropout) - - def reshape_heads_to_batch_dim(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = jnp.transpose(tensor, (0, 2, 1, 3)) - tensor = tensor.reshape(batch_size * head_size, seq_len, dim // head_size) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size // head_size, head_size, 
seq_len, dim) - tensor = jnp.transpose(tensor, (0, 2, 1, 3)) - tensor = tensor.reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def __call__(self, hidden_states, context=None, deterministic=True): - context = hidden_states if context is None else context - - query_proj = self.query(hidden_states) - key_proj = self.key(context) - value_proj = self.value(context) - - query_states = self.reshape_heads_to_batch_dim(query_proj) - key_states = self.reshape_heads_to_batch_dim(key_proj) - value_states = self.reshape_heads_to_batch_dim(value_proj) - - if self.use_memory_efficient_attention: - query_states = query_states.transpose(1, 0, 2) - key_states = key_states.transpose(1, 0, 2) - value_states = value_states.transpose(1, 0, 2) - - # this if statement create a chunk size for each layer of the unet - # the chunk size is equal to the query_length dimension of the deepest layer of the unet - - flatten_latent_dim = query_states.shape[-3] - if flatten_latent_dim % 64 == 0: - query_chunk_size = int(flatten_latent_dim / 64) - elif flatten_latent_dim % 16 == 0: - query_chunk_size = int(flatten_latent_dim / 16) - elif flatten_latent_dim % 4 == 0: - query_chunk_size = int(flatten_latent_dim / 4) - else: - query_chunk_size = int(flatten_latent_dim) - - hidden_states = jax_memory_efficient_attention( - query_states, key_states, value_states, query_chunk_size=query_chunk_size, key_chunk_size=4096 * 4 - ) - - hidden_states = hidden_states.transpose(1, 0, 2) - else: - # compute attentions - attention_scores = jnp.einsum("b i d, b j d->b i j", query_states, key_states) - attention_scores = attention_scores * self.scale - attention_probs = nn.softmax(attention_scores, axis=2) - - # attend to values - hidden_states = jnp.einsum("b i j, b j d -> b i d", attention_probs, value_states) - - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - hidden_states = self.proj_attn(hidden_states) - return self.dropout_layer(hidden_states, deterministic=deterministic) - - -class FlaxBasicTransformerBlock(nn.Module): - r""" - A Flax transformer block layer with `GLU` (Gated Linear Unit) activation function as described in: - https://arxiv.org/abs/1706.03762 - - - Parameters: - dim (:obj:`int`): - Inner hidden states dimension - n_heads (:obj:`int`): - Number of heads - d_head (:obj:`int`): - Hidden states dimension inside each head - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - only_cross_attention (`bool`, defaults to `False`): - Whether to only apply cross attention. 
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): - enable memory efficient attention https://arxiv.org/abs/2112.05682 - """ - dim: int - n_heads: int - d_head: int - dropout: float = 0.0 - only_cross_attention: bool = False - dtype: jnp.dtype = jnp.float32 - use_memory_efficient_attention: bool = False - - def setup(self): - # self attention (or cross_attention if only_cross_attention is True) - self.attn1 = FlaxAttention( - self.dim, self.n_heads, self.d_head, self.dropout, self.use_memory_efficient_attention, dtype=self.dtype - ) - # cross attention - self.attn2 = FlaxAttention( - self.dim, self.n_heads, self.d_head, self.dropout, self.use_memory_efficient_attention, dtype=self.dtype - ) - self.ff = FlaxFeedForward(dim=self.dim, dropout=self.dropout, dtype=self.dtype) - self.norm1 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) - self.norm2 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) - self.norm3 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) - self.dropout_layer = nn.Dropout(rate=self.dropout) - - def __call__(self, hidden_states, context, deterministic=True): - # self attention - residual = hidden_states - if self.only_cross_attention: - hidden_states = self.attn1(self.norm1(hidden_states), context, deterministic=deterministic) - else: - hidden_states = self.attn1(self.norm1(hidden_states), deterministic=deterministic) - hidden_states = hidden_states + residual - - # cross attention - residual = hidden_states - hidden_states = self.attn2(self.norm2(hidden_states), context, deterministic=deterministic) - hidden_states = hidden_states + residual - - # feed forward - residual = hidden_states - hidden_states = self.ff(self.norm3(hidden_states), deterministic=deterministic) - hidden_states = hidden_states + residual - - return self.dropout_layer(hidden_states, deterministic=deterministic) - - -class FlaxTransformer2DModel(nn.Module): - r""" - A Spatial Transformer layer with Gated Linear Unit (GLU) activation function as described in: - https://arxiv.org/pdf/1506.02025.pdf - - - Parameters: - in_channels (:obj:`int`): - Input number of channels - n_heads (:obj:`int`): - Number of heads - d_head (:obj:`int`): - Hidden states dimension inside each head - depth (:obj:`int`, *optional*, defaults to 1): - Number of transformers block - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - use_linear_projection (`bool`, defaults to `False`): tbd - only_cross_attention (`bool`, defaults to `False`): tbd - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): - enable memory efficient attention https://arxiv.org/abs/2112.05682 - """ - in_channels: int - n_heads: int - d_head: int - depth: int = 1 - dropout: float = 0.0 - use_linear_projection: bool = False - only_cross_attention: bool = False - dtype: jnp.dtype = jnp.float32 - use_memory_efficient_attention: bool = False - - def setup(self): - self.norm = nn.GroupNorm(num_groups=32, epsilon=1e-5) - - inner_dim = self.n_heads * self.d_head - if self.use_linear_projection: - self.proj_in = nn.Dense(inner_dim, dtype=self.dtype) - else: - self.proj_in = nn.Conv( - inner_dim, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - self.transformer_blocks = [ - FlaxBasicTransformerBlock( - inner_dim, - self.n_heads, - self.d_head, - dropout=self.dropout, - 
only_cross_attention=self.only_cross_attention, - dtype=self.dtype, - use_memory_efficient_attention=self.use_memory_efficient_attention, - ) - for _ in range(self.depth) - ] - - if self.use_linear_projection: - self.proj_out = nn.Dense(inner_dim, dtype=self.dtype) - else: - self.proj_out = nn.Conv( - inner_dim, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - self.dropout_layer = nn.Dropout(rate=self.dropout) - - def __call__(self, hidden_states, context, deterministic=True): - batch, height, width, channels = hidden_states.shape - residual = hidden_states - hidden_states = self.norm(hidden_states) - if self.use_linear_projection: - hidden_states = hidden_states.reshape(batch, height * width, channels) - hidden_states = self.proj_in(hidden_states) - else: - hidden_states = self.proj_in(hidden_states) - hidden_states = hidden_states.reshape(batch, height * width, channels) - - for transformer_block in self.transformer_blocks: - hidden_states = transformer_block(hidden_states, context, deterministic=deterministic) - - if self.use_linear_projection: - hidden_states = self.proj_out(hidden_states) - hidden_states = hidden_states.reshape(batch, height, width, channels) - else: - hidden_states = hidden_states.reshape(batch, height, width, channels) - hidden_states = self.proj_out(hidden_states) - - hidden_states = hidden_states + residual - return self.dropout_layer(hidden_states, deterministic=deterministic) - - -class FlaxFeedForward(nn.Module): - r""" - Flax module that encapsulates two Linear layers separated by a non-linearity. It is the counterpart of PyTorch's - [`FeedForward`] class, with the following simplifications: - - The activation function is currently hardcoded to a gated linear unit from: - https://arxiv.org/abs/2002.05202 - - `dim_out` is equal to `dim`. - - The number of hidden dimensions is hardcoded to `dim * 4` in [`FlaxGELU`]. - - Parameters: - dim (:obj:`int`): - Inner hidden states dimension - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - dim: int - dropout: float = 0.0 - dtype: jnp.dtype = jnp.float32 - - def setup(self): - # The second linear layer needs to be called - # net_2 for now to match the index of the Sequential layer - self.net_0 = FlaxGEGLU(self.dim, self.dropout, self.dtype) - self.net_2 = nn.Dense(self.dim, dtype=self.dtype) - - def __call__(self, hidden_states, deterministic=True): - hidden_states = self.net_0(hidden_states, deterministic=deterministic) - hidden_states = self.net_2(hidden_states) - return hidden_states - - -class FlaxGEGLU(nn.Module): - r""" - Flax implementation of a Linear layer followed by the variant of the gated linear unit activation function from - https://arxiv.org/abs/2002.05202. 
- - Parameters: - dim (:obj:`int`): - Input hidden states dimension - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - dim: int - dropout: float = 0.0 - dtype: jnp.dtype = jnp.float32 - - def setup(self): - inner_dim = self.dim * 4 - self.proj = nn.Dense(inner_dim * 2, dtype=self.dtype) - self.dropout_layer = nn.Dropout(rate=self.dropout) - - def __call__(self, hidden_states, deterministic=True): - hidden_states = self.proj(hidden_states) - hidden_linear, hidden_gelu = jnp.split(hidden_states, 2, axis=2) - return self.dropout_layer(hidden_linear * nn.gelu(hidden_gelu), deterministic=deterministic) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_utils.py deleted file mode 100644 index 133bf3a7a2f8f26d83371cb103410cfe812aaea7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_utils.py +++ /dev/null @@ -1,1698 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import fnmatch -import importlib -import inspect -import os -import re -import sys -import warnings -from dataclasses import dataclass -from pathlib import Path -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -from huggingface_hub import ModelCard, hf_hub_download, model_info, snapshot_download -from packaging import version -from requests.exceptions import HTTPError -from tqdm.auto import tqdm - -import diffusers - -from .. 
import __version__ -from ..configuration_utils import ConfigMixin -from ..models.modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT -from ..schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME -from ..utils import ( - CONFIG_NAME, - DEPRECATED_REVISION_ARGS, - DIFFUSERS_CACHE, - HF_HUB_OFFLINE, - SAFETENSORS_WEIGHTS_NAME, - WEIGHTS_NAME, - BaseOutput, - deprecate, - get_class_from_dynamic_module, - is_accelerate_available, - is_accelerate_version, - is_compiled_module, - is_safetensors_available, - is_torch_version, - is_transformers_available, - logging, - numpy_to_pil, -) - - -if is_transformers_available(): - import transformers - from transformers import PreTrainedModel - from transformers.utils import FLAX_WEIGHTS_NAME as TRANSFORMERS_FLAX_WEIGHTS_NAME - from transformers.utils import SAFE_WEIGHTS_NAME as TRANSFORMERS_SAFE_WEIGHTS_NAME - from transformers.utils import WEIGHTS_NAME as TRANSFORMERS_WEIGHTS_NAME - -from ..utils import FLAX_WEIGHTS_NAME, ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME - - -if is_accelerate_available(): - import accelerate - - -INDEX_FILE = "diffusion_pytorch_model.bin" -CUSTOM_PIPELINE_FILE_NAME = "pipeline.py" -DUMMY_MODULES_FOLDER = "diffusers.utils" -TRANSFORMERS_DUMMY_MODULES_FOLDER = "transformers.utils" -CONNECTED_PIPES_KEYS = ["prior"] - - -logger = logging.get_logger(__name__) - - -LOADABLE_CLASSES = { - "diffusers": { - "ModelMixin": ["save_pretrained", "from_pretrained"], - "SchedulerMixin": ["save_pretrained", "from_pretrained"], - "DiffusionPipeline": ["save_pretrained", "from_pretrained"], - "OnnxRuntimeModel": ["save_pretrained", "from_pretrained"], - }, - "transformers": { - "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"], - "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"], - "PreTrainedModel": ["save_pretrained", "from_pretrained"], - "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"], - "ProcessorMixin": ["save_pretrained", "from_pretrained"], - "ImageProcessingMixin": ["save_pretrained", "from_pretrained"], - }, - "onnxruntime.training": { - "ORTModule": ["save_pretrained", "from_pretrained"], - }, -} - -ALL_IMPORTABLE_CLASSES = {} -for library in LOADABLE_CLASSES: - ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library]) - - -@dataclass -class ImagePipelineOutput(BaseOutput): - """ - Output class for image pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, - num_channels)`. - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - - -@dataclass -class AudioPipelineOutput(BaseOutput): - """ - Output class for audio pipelines. - - Args: - audios (`np.ndarray`) - List of denoised audio samples of a NumPy array of shape `(batch_size, num_channels, sample_rate)`. - """ - - audios: np.ndarray - - -def is_safetensors_compatible(filenames, variant=None, passed_components=None) -> bool: - """ - Checking for safetensors compatibility: - - By default, all models are saved with the default pytorch serialization, so we use the list of default pytorch - files to know which safetensors files are needed. - - The model is safetensors compatible only if there is a matching safetensors file for every default pytorch file. 
- - Converting default pytorch serialized filenames to safetensors serialized filenames: - - For models from the diffusers library, just replace the ".bin" extension with ".safetensors" - - For models from the transformers library, the filename changes from "pytorch_model" to "model", and the ".bin" - extension is replaced with ".safetensors" - """ - pt_filenames = [] - - sf_filenames = set() - - passed_components = passed_components or [] - - for filename in filenames: - _, extension = os.path.splitext(filename) - - if len(filename.split("/")) == 2 and filename.split("/")[0] in passed_components: - continue - - if extension == ".bin": - pt_filenames.append(filename) - elif extension == ".safetensors": - sf_filenames.add(filename) - - for filename in pt_filenames: - # filename = 'foo/bar/baz.bam' -> path = 'foo/bar', filename = 'baz', extention = '.bam' - path, filename = os.path.split(filename) - filename, extension = os.path.splitext(filename) - - if filename.startswith("pytorch_model"): - filename = filename.replace("pytorch_model", "model") - else: - filename = filename - - expected_sf_filename = os.path.join(path, filename) - expected_sf_filename = f"{expected_sf_filename}.safetensors" - - if expected_sf_filename not in sf_filenames: - logger.warning(f"{expected_sf_filename} not found") - return False - - return True - - -def variant_compatible_siblings(filenames, variant=None) -> Union[List[os.PathLike], str]: - weight_names = [ - WEIGHTS_NAME, - SAFETENSORS_WEIGHTS_NAME, - FLAX_WEIGHTS_NAME, - ONNX_WEIGHTS_NAME, - ONNX_EXTERNAL_WEIGHTS_NAME, - ] - - if is_transformers_available(): - weight_names += [TRANSFORMERS_WEIGHTS_NAME, TRANSFORMERS_SAFE_WEIGHTS_NAME, TRANSFORMERS_FLAX_WEIGHTS_NAME] - - # model_pytorch, diffusion_model_pytorch, ... - weight_prefixes = [w.split(".")[0] for w in weight_names] - # .bin, .safetensors, ... 
- weight_suffixs = [w.split(".")[-1] for w in weight_names] - # -00001-of-00002 - transformers_index_format = r"\d{5}-of-\d{5}" - - if variant is not None: - # `diffusion_pytorch_model.fp16.bin` as well as `model.fp16-00001-of-00002.safetensors` - variant_file_re = re.compile( - rf"({'|'.join(weight_prefixes)})\.({variant}|{variant}-{transformers_index_format})\.({'|'.join(weight_suffixs)})$" - ) - # `text_encoder/pytorch_model.bin.index.fp16.json` - variant_index_re = re.compile( - rf"({'|'.join(weight_prefixes)})\.({'|'.join(weight_suffixs)})\.index\.{variant}\.json$" - ) - - # `diffusion_pytorch_model.bin` as well as `model-00001-of-00002.safetensors` - non_variant_file_re = re.compile( - rf"({'|'.join(weight_prefixes)})(-{transformers_index_format})?\.({'|'.join(weight_suffixs)})$" - ) - # `text_encoder/pytorch_model.bin.index.json` - non_variant_index_re = re.compile(rf"({'|'.join(weight_prefixes)})\.({'|'.join(weight_suffixs)})\.index\.json") - - if variant is not None: - variant_weights = {f for f in filenames if variant_file_re.match(f.split("/")[-1]) is not None} - variant_indexes = {f for f in filenames if variant_index_re.match(f.split("/")[-1]) is not None} - variant_filenames = variant_weights | variant_indexes - else: - variant_filenames = set() - - non_variant_weights = {f for f in filenames if non_variant_file_re.match(f.split("/")[-1]) is not None} - non_variant_indexes = {f for f in filenames if non_variant_index_re.match(f.split("/")[-1]) is not None} - non_variant_filenames = non_variant_weights | non_variant_indexes - - # all variant filenames will be used by default - usable_filenames = set(variant_filenames) - - def convert_to_variant(filename): - if "index" in filename: - variant_filename = filename.replace("index", f"index.{variant}") - elif re.compile(f"^(.*?){transformers_index_format}").match(filename) is not None: - variant_filename = f"{filename.split('-')[0]}.{variant}-{'-'.join(filename.split('-')[1:])}" - else: - variant_filename = f"{filename.split('.')[0]}.{variant}.{filename.split('.')[1]}" - return variant_filename - - for f in non_variant_filenames: - variant_filename = convert_to_variant(f) - if variant_filename not in usable_filenames: - usable_filenames.add(f) - - return usable_filenames, variant_filenames - - -def warn_deprecated_model_variant(pretrained_model_name_or_path, use_auth_token, variant, revision, model_filenames): - info = model_info( - pretrained_model_name_or_path, - use_auth_token=use_auth_token, - revision=None, - ) - filenames = {sibling.rfilename for sibling in info.siblings} - comp_model_filenames, _ = variant_compatible_siblings(filenames, variant=revision) - comp_model_filenames = [".".join(f.split(".")[:1] + f.split(".")[2:]) for f in comp_model_filenames] - - if set(comp_model_filenames) == set(model_filenames): - warnings.warn( - f"You are loading the variant {revision} from {pretrained_model_name_or_path} via `revision='{revision}'` even though you can load it via `variant=`{revision}`. Loading model variants via `revision='{revision}'` is deprecated and will be removed in diffusers v1. Please use `variant='{revision}'` instead.", - FutureWarning, - ) - else: - warnings.warn( - f"You are loading the variant {revision} from {pretrained_model_name_or_path} via `revision='{revision}'`. This behavior is deprecated and will be removed in diffusers v1. One should use `variant='{revision}'` instead. However, it appears that {pretrained_model_name_or_path} currently does not have the required variant filenames in the 'main' branch. 
\n The Diffusers team and community would be very grateful if you could open an issue: https://github.com/huggingface/diffusers/issues/new with the title '{pretrained_model_name_or_path} is missing {revision} files' so that the correct variant file can be added.", - FutureWarning, - ) - - -def maybe_raise_or_warn( - library_name, library, class_name, importable_classes, passed_class_obj, name, is_pipeline_module -): - """Simple helper method to raise or warn in case incorrect module has been passed""" - if not is_pipeline_module: - library = importlib.import_module(library_name) - class_obj = getattr(library, class_name) - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - expected_class_obj = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - expected_class_obj = class_candidate - - # Dynamo wraps the original model in a private class. - # I didn't find a public API to get the original class. - sub_model = passed_class_obj[name] - model_cls = sub_model.__class__ - if is_compiled_module(sub_model): - model_cls = sub_model._orig_mod.__class__ - - if not issubclass(model_cls, expected_class_obj): - raise ValueError( - f"{passed_class_obj[name]} is of type: {model_cls}, but should be" f" {expected_class_obj}" - ) - else: - logger.warning( - f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it" - " has the correct type" - ) - - -def get_class_obj_and_candidates(library_name, class_name, importable_classes, pipelines, is_pipeline_module): - """Simple helper method to retrieve class object of module as well as potential parent class objects""" - if is_pipeline_module: - pipeline_module = getattr(pipelines, library_name) - - class_obj = getattr(pipeline_module, class_name) - class_candidates = {c: class_obj for c in importable_classes.keys()} - else: - # else we just import it from the library. - library = importlib.import_module(library_name) - - class_obj = getattr(library, class_name) - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - return class_obj, class_candidates - - -def _get_pipeline_class( - class_obj, config, load_connected_pipeline=False, custom_pipeline=None, cache_dir=None, revision=None -): - if custom_pipeline is not None: - if custom_pipeline.endswith(".py"): - path = Path(custom_pipeline) - # decompose into folder & file - file_name = path.name - custom_pipeline = path.parent.absolute() - else: - file_name = CUSTOM_PIPELINE_FILE_NAME - - return get_class_from_dynamic_module( - custom_pipeline, module_file=file_name, cache_dir=cache_dir, revision=revision - ) - - if class_obj != DiffusionPipeline: - return class_obj - - diffusers_module = importlib.import_module(class_obj.__module__.split(".")[0]) - pipeline_cls = getattr(diffusers_module, config["_class_name"]) - - if load_connected_pipeline: - from .auto_pipeline import _get_connected_pipeline - - connected_pipeline_cls = _get_connected_pipeline(pipeline_cls) - if connected_pipeline_cls is not None: - logger.info( - f"Loading connected pipeline {connected_pipeline_cls.__name__} instead of {pipeline_cls.__name__} as specified via `load_connected_pipeline=True`" - ) - else: - logger.info(f"{pipeline_cls.__name__} has no connected pipeline class. 
Loading {pipeline_cls.__name__}.") - - pipeline_cls = connected_pipeline_cls or pipeline_cls - - return pipeline_cls - - -def load_sub_model( - library_name: str, - class_name: str, - importable_classes: List[Any], - pipelines: Any, - is_pipeline_module: bool, - pipeline_class: Any, - torch_dtype: torch.dtype, - provider: Any, - sess_options: Any, - device_map: Optional[Union[Dict[str, torch.device], str]], - max_memory: Optional[Dict[Union[int, str], Union[int, str]]], - offload_folder: Optional[Union[str, os.PathLike]], - offload_state_dict: bool, - model_variants: Dict[str, str], - name: str, - from_flax: bool, - variant: str, - low_cpu_mem_usage: bool, - cached_folder: Union[str, os.PathLike], -): - """Helper method to load the module `name` from `library_name` and `class_name`""" - # retrieve class candidates - class_obj, class_candidates = get_class_obj_and_candidates( - library_name, class_name, importable_classes, pipelines, is_pipeline_module - ) - - load_method_name = None - # retrive load method name - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - load_method_name = importable_classes[class_name][1] - - # if load method name is None, then we have a dummy module -> raise Error - if load_method_name is None: - none_module = class_obj.__module__ - is_dummy_path = none_module.startswith(DUMMY_MODULES_FOLDER) or none_module.startswith( - TRANSFORMERS_DUMMY_MODULES_FOLDER - ) - if is_dummy_path and "dummy" in none_module: - # call class_obj for nice error message of missing requirements - class_obj() - - raise ValueError( - f"The component {class_obj} of {pipeline_class} cannot be loaded as it does not seem to have" - f" any of the loading methods defined in {ALL_IMPORTABLE_CLASSES}." - ) - - load_method = getattr(class_obj, load_method_name) - - # add kwargs to loading method - loading_kwargs = {} - if issubclass(class_obj, torch.nn.Module): - loading_kwargs["torch_dtype"] = torch_dtype - if issubclass(class_obj, diffusers.OnnxRuntimeModel): - loading_kwargs["provider"] = provider - loading_kwargs["sess_options"] = sess_options - - is_diffusers_model = issubclass(class_obj, diffusers.ModelMixin) - - if is_transformers_available(): - transformers_version = version.parse(version.parse(transformers.__version__).base_version) - else: - transformers_version = "N/A" - - is_transformers_model = ( - is_transformers_available() - and issubclass(class_obj, PreTrainedModel) - and transformers_version >= version.parse("4.20.0") - ) - - # When loading a transformers model, if the device_map is None, the weights will be initialized as opposed to diffusers. - # To make default loading faster we set the `low_cpu_mem_usage=low_cpu_mem_usage` flag which is `True` by default. - # This makes sure that the weights won't be initialized which significantly speeds up loading. 
- if is_diffusers_model or is_transformers_model: - loading_kwargs["device_map"] = device_map - loading_kwargs["max_memory"] = max_memory - loading_kwargs["offload_folder"] = offload_folder - loading_kwargs["offload_state_dict"] = offload_state_dict - loading_kwargs["variant"] = model_variants.pop(name, None) - if from_flax: - loading_kwargs["from_flax"] = True - - # the following can be deleted once the minimum required `transformers` version - # is higher than 4.27 - if ( - is_transformers_model - and loading_kwargs["variant"] is not None - and transformers_version < version.parse("4.27.0") - ): - raise ImportError( - f"When passing `variant='{variant}'`, please make sure to upgrade your `transformers` version to at least 4.27.0.dev0" - ) - elif is_transformers_model and loading_kwargs["variant"] is None: - loading_kwargs.pop("variant") - - # if `from_flax` and model is transformer model, can currently not load with `low_cpu_mem_usage` - if not (from_flax and is_transformers_model): - loading_kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage - else: - loading_kwargs["low_cpu_mem_usage"] = False - - # check if the module is in a subdirectory - if os.path.isdir(os.path.join(cached_folder, name)): - loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) - else: - # else load from the root directory - loaded_sub_model = load_method(cached_folder, **loading_kwargs) - - return loaded_sub_model - - -class DiffusionPipeline(ConfigMixin): - r""" - Base class for all pipelines. - - [`DiffusionPipeline`] stores all components (models, schedulers, and processors) for diffusion pipelines and - provides methods for loading, downloading and saving models. It also includes methods to: - - - move all PyTorch modules to the device of your choice - - enable/disable the progress bar for the denoising iteration - - Class attributes: - - - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the - diffusion pipeline's components. - - **_optional_components** (`List[str]`) -- List of all optional components that don't have to be passed to the - pipeline to function (should be overridden by subclasses). - """ - config_name = "model_index.json" - _optional_components = [] - _exclude_from_cpu_offload = [] - _load_connected_pipes = False - _is_onnx = False - - def register_modules(self, **kwargs): - # import it here to avoid circular import - from diffusers import pipelines - - for name, module in kwargs.items(): - # retrieve library - if module is None: - register_dict = {name: (None, None)} - else: - # register the config from the original module, not the dynamo compiled one - if is_compiled_module(module): - not_compiled_module = module._orig_mod - else: - not_compiled_module = module - - library = not_compiled_module.__module__.split(".")[0] - - # check if the module is a pipeline module - module_path_items = not_compiled_module.__module__.split(".") - pipeline_dir = module_path_items[-2] if len(module_path_items) > 2 else None - - path = not_compiled_module.__module__.split(".") - is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir) - - # if library is not in LOADABLE_CLASSES, then it is a custom module. - # Or if it's a pipeline module, then the module is inside the pipeline - # folder so we set the library to module name. 
- if is_pipeline_module: - library = pipeline_dir - elif library not in LOADABLE_CLASSES: - library = not_compiled_module.__module__ - - # retrieve class_name - class_name = not_compiled_module.__class__.__name__ - - register_dict = {name: (library, class_name)} - - # save model index config - self.register_to_config(**register_dict) - - # set models - setattr(self, name, module) - - def __setattr__(self, name: str, value: Any): - if name in self.__dict__ and hasattr(self.config, name): - # We need to overwrite the config if name exists in config - if isinstance(getattr(self.config, name), (tuple, list)): - if value is not None and self.config[name][0] is not None: - class_library_tuple = (value.__module__.split(".")[0], value.__class__.__name__) - else: - class_library_tuple = (None, None) - - self.register_to_config(**{name: class_library_tuple}) - else: - self.register_to_config(**{name: value}) - - super().__setattr__(name, value) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - safe_serialization: bool = False, - variant: Optional[str] = None, - ): - """ - Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its - class implements both a save and loading method. The pipeline is easily reloaded using the - [`~DiffusionPipeline.from_pretrained`] class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to save a pipeline to. Will be created if it doesn't exist. - safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. - variant (`str`, *optional*): - If specified, weights are saved in the format `pytorch_model..bin`. - """ - model_index_dict = dict(self.config) - model_index_dict.pop("_class_name", None) - model_index_dict.pop("_diffusers_version", None) - model_index_dict.pop("_module", None) - model_index_dict.pop("_name_or_path", None) - - expected_modules, optional_kwargs = self._get_signature_keys(self) - - def is_saveable_module(name, value): - if name not in expected_modules: - return False - if name in self._optional_components and value[0] is None: - return False - return True - - model_index_dict = {k: v for k, v in model_index_dict.items() if is_saveable_module(k, v)} - for pipeline_component_name in model_index_dict.keys(): - sub_model = getattr(self, pipeline_component_name) - model_cls = sub_model.__class__ - - # Dynamo wraps the original model in a private class. - # I didn't find a public API to get the original class. - if is_compiled_module(sub_model): - sub_model = sub_model._orig_mod - model_cls = sub_model.__class__ - - save_method_name = None - # search for the model's base class in LOADABLE_CLASSES - for library_name, library_classes in LOADABLE_CLASSES.items(): - if library_name in sys.modules: - library = importlib.import_module(library_name) - else: - logger.info( - f"{library_name} is not installed. 
Cannot save {pipeline_component_name} as {library_classes} from {library_name}" - ) - - for base_class, save_load_methods in library_classes.items(): - class_candidate = getattr(library, base_class, None) - if class_candidate is not None and issubclass(model_cls, class_candidate): - # if we found a suitable base class in LOADABLE_CLASSES then grab its save method - save_method_name = save_load_methods[0] - break - if save_method_name is not None: - break - - if save_method_name is None: - logger.warn(f"self.{pipeline_component_name}={sub_model} of type {type(sub_model)} cannot be saved.") - # make sure that unsaveable components are not tried to be loaded afterward - self.register_to_config(**{pipeline_component_name: (None, None)}) - continue - - save_method = getattr(sub_model, save_method_name) - - # Call the save method with the argument safe_serialization only if it's supported - save_method_signature = inspect.signature(save_method) - save_method_accept_safe = "safe_serialization" in save_method_signature.parameters - save_method_accept_variant = "variant" in save_method_signature.parameters - - save_kwargs = {} - if save_method_accept_safe: - save_kwargs["safe_serialization"] = safe_serialization - if save_method_accept_variant: - save_kwargs["variant"] = variant - - save_method(os.path.join(save_directory, pipeline_component_name), **save_kwargs) - - # finally save the config - self.save_config(save_directory) - - def to( - self, - torch_device: Optional[Union[str, torch.device]] = None, - torch_dtype: Optional[torch.dtype] = None, - silence_dtype_warnings: bool = False, - ): - if torch_device is None and torch_dtype is None: - return self - - # throw warning if pipeline is in "offloaded"-mode but user tries to manually set to GPU. - def module_is_sequentially_offloaded(module): - if not is_accelerate_available() or is_accelerate_version("<", "0.14.0"): - return False - - return hasattr(module, "_hf_hook") and not isinstance( - module._hf_hook, (accelerate.hooks.CpuOffload, accelerate.hooks.AlignDevicesHook) - ) - - def module_is_offloaded(module): - if not is_accelerate_available() or is_accelerate_version("<", "0.17.0.dev0"): - return False - - return hasattr(module, "_hf_hook") and isinstance(module._hf_hook, accelerate.hooks.CpuOffload) - - # .to("cuda") would raise an error if the pipeline is sequentially offloaded, so we raise our own to make it clearer - pipeline_is_sequentially_offloaded = any( - module_is_sequentially_offloaded(module) for _, module in self.components.items() - ) - if pipeline_is_sequentially_offloaded and torch.device(torch_device).type == "cuda": - raise ValueError( - "It seems like you have activated sequential model offloading by calling `enable_sequential_cpu_offload`, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline `.to('cpu')` or consider removing the move altogether if you use sequential offloading." - ) - - # Display a warning in this case (the operation succeeds but the benefits are lost) - pipeline_is_offloaded = any(module_is_offloaded(module) for _, module in self.components.items()) - if pipeline_is_offloaded and torch.device(torch_device).type == "cuda": - logger.warning( - f"It seems like you have activated model offloading by calling `enable_model_cpu_offload`, but are now manually moving the pipeline to GPU. It is strongly recommended against doing so as memory gains from offloading are likely to be lost. 
Offloading automatically takes care of moving the individual components {', '.join(self.components.keys())} to GPU when needed. To make sure offloading works as expected, you should consider moving the pipeline back to CPU: `pipeline.to('cpu')` or removing the move altogether if you use offloading." - ) - - module_names, _ = self._get_signature_keys(self) - modules = [getattr(self, n, None) for n in module_names] - modules = [m for m in modules if isinstance(m, torch.nn.Module)] - - is_offloaded = pipeline_is_offloaded or pipeline_is_sequentially_offloaded - for module in modules: - is_loaded_in_8bit = hasattr(module, "is_loaded_in_8bit") and module.is_loaded_in_8bit - - if is_loaded_in_8bit and torch_dtype is not None: - logger.warning( - f"The module '{module.__class__.__name__}' has been loaded in 8bit and conversion to {torch_dtype} is not yet supported. Module is still in 8bit precision." - ) - - if is_loaded_in_8bit and torch_device is not None: - logger.warning( - f"The module '{module.__class__.__name__}' has been loaded in 8bit and moving it to {torch_dtype} via `.to()` is not yet supported. Module is still on {module.device}." - ) - else: - module.to(torch_device, torch_dtype) - - if ( - module.dtype == torch.float16 - and str(torch_device) in ["cpu"] - and not silence_dtype_warnings - and not is_offloaded - ): - logger.warning( - "Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It" - " is not recommended to move them to `cpu` as running them will fail. Please make" - " sure to use an accelerator to run the pipeline in inference, due to the lack of" - " support for`float16` operations on this device in PyTorch. Please, remove the" - " `torch_dtype=torch.float16` argument, or use another device for inference." - ) - return self - - @property - def device(self) -> torch.device: - r""" - Returns: - `torch.device`: The torch device on which the pipeline is located. - """ - module_names, _ = self._get_signature_keys(self) - modules = [getattr(self, n, None) for n in module_names] - modules = [m for m in modules if isinstance(m, torch.nn.Module)] - - for module in modules: - return module.device - - return torch.device("cpu") - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. - - The pipeline is set in evaluation mode (`model.eval()`) by default. - - If you get the error message below, you need to finetune the weights for your downstream task: - - ``` - Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated - You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. - ``` - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline - hosted on the Hub. - - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights - saved using - [`~DiffusionPipeline.save_pretrained`]. - torch_dtype (`str` or `torch.dtype`, *optional*): - Override the default `torch.dtype` and load the model with another dtype. 
If "auto" is passed, the - dtype is automatically derived from the model's weights. - custom_pipeline (`str`, *optional*): - - - - 🧪 This is an experimental feature and may change in the future. - - - - Can be either: - - - A string, the *repo id* (for example `hf-internal-testing/diffusers-dummy-pipeline`) of a custom - pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines - the custom pipeline. - - A string, the *file name* of a community pipeline hosted on GitHub under - [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file - names must match the file name and not the pipeline script (`clip_guided_stable_diffusion` - instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the - current main branch of GitHub. - - A path to a directory (`./my_pipeline_directory/`) containing a custom pipeline. The directory - must contain a file called `pipeline.py` that defines the custom pipeline. - - For more information on how to load and create custom pipelines, please have a look at [Loading and - Adding Custom - Pipelines](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory where a downloaded pretrained model configuration is cached if the standard cache - is not used. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to resume downloading the model weights and configuration files. If set to `False`, any - incompletely downloaded files are deleted. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only (`bool`, *optional*, defaults to `False`): - Whether to only load local model weights and configuration files or not. If set to `True`, the model - won't be downloaded from the Hub. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from - `diffusers-cli login` (stored in `~/.huggingface`) is used. - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier - allowed by Git. - custom_revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id similar to - `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a - custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. - mirror (`str`, *optional*): - Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not - guarantee the timeliness or safety of the source, and you should refer to the mirror site for more - information. - device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): - A map that specifies where each submodule should go. 
It doesn’t need to be defined for each - parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the - same device. - - Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For - more information about each option see [designing a device - map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). - max_memory (`Dict`, *optional*): - A dictionary device identifier for the maximum memory. Will default to the maximum memory available for - each GPU and the available CPU RAM if unset. - offload_folder (`str` or `os.PathLike`, *optional*): - The path to offload weights if device_map contains the value `"disk"`. - offload_state_dict (`bool`, *optional*): - If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if - the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` - when there is some disk offload. - low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): - Speed up model loading only loading the pretrained weights and not initializing the weights. This also - tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. - Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this - argument to `True` will raise an error. - use_safetensors (`bool`, *optional*, defaults to `None`): - If set to `None`, the safetensors weights are downloaded if they're available **and** if the - safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors - weights. If set to `False`, safetensors weights are not loaded. - use_onnx (`bool`, *optional*, defaults to `None`): - If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights - will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is - `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending - with `.onnx` and `.pb`. - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline - class). The overwritten components are passed directly to the pipelines `__init__` method. See example - below for more information. - variant (`str`, *optional*): - Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when - loading `from_flax`. - - - - To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with - `huggingface-cli login`. - - - - Examples: - - ```py - >>> from diffusers import DiffusionPipeline - - >>> # Download pipeline from huggingface.co and cache. 
- >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") - - >>> # Download pipeline that requires an authorization token - >>> # For more information on access tokens, please refer to this section - >>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) - >>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") - - >>> # Use a different scheduler - >>> from diffusers import LMSDiscreteScheduler - - >>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) - >>> pipeline.scheduler = scheduler - ``` - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - force_download = kwargs.pop("force_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - from_flax = kwargs.pop("from_flax", False) - torch_dtype = kwargs.pop("torch_dtype", None) - custom_pipeline = kwargs.pop("custom_pipeline", None) - custom_revision = kwargs.pop("custom_revision", None) - provider = kwargs.pop("provider", None) - sess_options = kwargs.pop("sess_options", None) - device_map = kwargs.pop("device_map", None) - max_memory = kwargs.pop("max_memory", None) - offload_folder = kwargs.pop("offload_folder", None) - offload_state_dict = kwargs.pop("offload_state_dict", False) - low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) - variant = kwargs.pop("variant", None) - use_safetensors = kwargs.pop("use_safetensors", None if is_safetensors_available() else False) - load_connected_pipeline = kwargs.pop("load_connected_pipeline", False) - - # 1. Download the checkpoints and configs - # use snapshot download here to get it working from from_pretrained - if not os.path.isdir(pretrained_model_name_or_path): - cached_folder = cls.download( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - force_download=force_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - from_flax=from_flax, - use_safetensors=use_safetensors, - custom_pipeline=custom_pipeline, - custom_revision=custom_revision, - variant=variant, - load_connected_pipeline=load_connected_pipeline, - **kwargs, - ) - else: - cached_folder = pretrained_model_name_or_path - - config_dict = cls.load_config(cached_folder) - - # pop out "_ignore_files" as it is only needed for download - config_dict.pop("_ignore_files", None) - - # 2. Define which model components should load variants - # We retrieve the information by matching whether variant - # model checkpoints exist in the subfolders - model_variants = {} - if variant is not None: - for folder in os.listdir(cached_folder): - folder_path = os.path.join(cached_folder, folder) - is_folder = os.path.isdir(folder_path) and folder in config_dict - variant_exists = is_folder and any( - p.split(".")[1].startswith(variant) for p in os.listdir(folder_path) - ) - if variant_exists: - model_variants[folder] = variant - - # 3. 
Load the pipeline class, if using custom module then load it from the hub - # if we load from explicit class, let's use it - pipeline_class = _get_pipeline_class( - cls, - config_dict, - load_connected_pipeline=load_connected_pipeline, - custom_pipeline=custom_pipeline, - cache_dir=cache_dir, - revision=custom_revision, - ) - - # DEPRECATED: To be removed in 1.0.0 - if pipeline_class.__name__ == "StableDiffusionInpaintPipeline" and version.parse( - version.parse(config_dict["_diffusers_version"]).base_version - ) <= version.parse("0.5.1"): - from diffusers import StableDiffusionInpaintPipeline, StableDiffusionInpaintPipelineLegacy - - pipeline_class = StableDiffusionInpaintPipelineLegacy - - deprecation_message = ( - "You are using a legacy checkpoint for inpainting with Stable Diffusion, therefore we are loading the" - f" {StableDiffusionInpaintPipelineLegacy} class instead of {StableDiffusionInpaintPipeline}. For" - " better inpainting results, we strongly suggest using Stable Diffusion's official inpainting" - " checkpoint: https://huggingface.co/runwayml/stable-diffusion-inpainting instead or adapting your" - f" checkpoint {pretrained_model_name_or_path} to the format of" - " https://huggingface.co/runwayml/stable-diffusion-inpainting. Note that we do not actively maintain" - " the {StableDiffusionInpaintPipelineLegacy} class and will likely remove it in version 1.0.0." - ) - deprecate("StableDiffusionInpaintPipelineLegacy", "1.0.0", deprecation_message, standard_warn=False) - - # 4. Define expected modules given pipeline signature - # and define non-None initialized modules (=`init_kwargs`) - - # some modules can be passed directly to the init - # in this case they are already instantiated in `kwargs` - # extract them here - expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) - passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} - passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} - - init_dict, unused_kwargs, _ = pipeline_class.extract_init_dict(config_dict, **kwargs) - - # define init kwargs - init_kwargs = {k: init_dict.pop(k) for k in optional_kwargs if k in init_dict} - init_kwargs = {**init_kwargs, **passed_pipe_kwargs} - - # remove `null` components - def load_module(name, value): - if value[0] is None: - return False - if name in passed_class_obj and passed_class_obj[name] is None: - return False - return True - - init_dict = {k: v for k, v in init_dict.items() if load_module(k, v)} - - # Special case: safety_checker must be loaded separately when using `from_flax` - if from_flax and "safety_checker" in init_dict and "safety_checker" not in passed_class_obj: - raise NotImplementedError( - "The safety checker cannot be automatically loaded when loading weights `from_flax`." - " Please, pass `safety_checker=None` to `from_pretrained`, and load the safety checker" - " separately if you need it." - ) - - # 5. Throw nice warnings / errors for fast accelerate loading - if len(unused_kwargs) > 0: - logger.warning( - f"Keyword arguments {unused_kwargs} are not expected by {pipeline_class.__name__} and will be ignored." - ) - - if low_cpu_mem_usage and not is_accelerate_available(): - low_cpu_mem_usage = False - logger.warning( - "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" - " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" - " `accelerate` for faster and less memory-intense model loading. 
You can do so with: \n```\npip" - " install accelerate\n```\n." - ) - - if device_map is not None and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `device_map=None`." - ) - - if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `low_cpu_mem_usage=False`." - ) - - if low_cpu_mem_usage is False and device_map is not None: - raise ValueError( - f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and" - " dispatching. Please make sure to set `low_cpu_mem_usage=True`." - ) - - # import it here to avoid circular import - from diffusers import pipelines - - # 6. Load each module in the pipeline - for name, (library_name, class_name) in tqdm(init_dict.items(), desc="Loading pipeline components..."): - # 6.1 - now that JAX/Flax is an official framework of the library, we might load from Flax names - if class_name.startswith("Flax"): - class_name = class_name[4:] - - # 6.2 Define all importable classes - is_pipeline_module = hasattr(pipelines, library_name) - importable_classes = ALL_IMPORTABLE_CLASSES - loaded_sub_model = None - - # 6.3 Use passed sub model or load class_name from library_name - if name in passed_class_obj: - # if the model is in a pipeline module, then we load it from the pipeline - # check that passed_class_obj has correct parent class - maybe_raise_or_warn( - library_name, library, class_name, importable_classes, passed_class_obj, name, is_pipeline_module - ) - - loaded_sub_model = passed_class_obj[name] - else: - # load sub model - loaded_sub_model = load_sub_model( - library_name=library_name, - class_name=class_name, - importable_classes=importable_classes, - pipelines=pipelines, - is_pipeline_module=is_pipeline_module, - pipeline_class=pipeline_class, - torch_dtype=torch_dtype, - provider=provider, - sess_options=sess_options, - device_map=device_map, - max_memory=max_memory, - offload_folder=offload_folder, - offload_state_dict=offload_state_dict, - model_variants=model_variants, - name=name, - from_flax=from_flax, - variant=variant, - low_cpu_mem_usage=low_cpu_mem_usage, - cached_folder=cached_folder, - ) - logger.info( - f"Loaded {name} as {class_name} from `{name}` subfolder of {pretrained_model_name_or_path}." - ) - - init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...) 
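-        # At this point every entry of `init_dict` has been resolved and `init_kwargs` maps each
-        # component name to an instantiated object. For a typical Stable Diffusion checkpoint this
-        # looks roughly like (illustrative only, the exact components depend on the pipeline class):
-        #     {"unet": UNet2DConditionModel, "vae": AutoencoderKL, "text_encoder": CLIPTextModel,
-        #      "tokenizer": CLIPTokenizer, "scheduler": PNDMScheduler, "safety_checker": ..., ...}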
- - if pipeline_class._load_connected_pipes and os.path.isfile(os.path.join(cached_folder, "README.md")): - modelcard = ModelCard.load(os.path.join(cached_folder, "README.md")) - connected_pipes = {prefix: getattr(modelcard.data, prefix, [None])[0] for prefix in CONNECTED_PIPES_KEYS} - load_kwargs = { - "cache_dir": cache_dir, - "resume_download": resume_download, - "force_download": force_download, - "proxies": proxies, - "local_files_only": local_files_only, - "use_auth_token": use_auth_token, - "revision": revision, - "torch_dtype": torch_dtype, - "custom_pipeline": custom_pipeline, - "custom_revision": custom_revision, - "provider": provider, - "sess_options": sess_options, - "device_map": device_map, - "max_memory": max_memory, - "offload_folder": offload_folder, - "offload_state_dict": offload_state_dict, - "low_cpu_mem_usage": low_cpu_mem_usage, - "variant": variant, - "use_safetensors": use_safetensors, - } - connected_pipes = { - prefix: DiffusionPipeline.from_pretrained(repo_id, **load_kwargs.copy()) - for prefix, repo_id in connected_pipes.items() - if repo_id is not None - } - - for prefix, connected_pipe in connected_pipes.items(): - # add connected pipes to `init_kwargs` with _, e.g. "prior_text_encoder" - init_kwargs.update( - {"_".join([prefix, name]): component for name, component in connected_pipe.components.items()} - ) - - # 7. Potentially add passed objects if expected - missing_modules = set(expected_modules) - set(init_kwargs.keys()) - passed_modules = list(passed_class_obj.keys()) - optional_modules = pipeline_class._optional_components - if len(missing_modules) > 0 and missing_modules <= set(passed_modules + optional_modules): - for module in missing_modules: - init_kwargs[module] = passed_class_obj.get(module, None) - elif len(missing_modules) > 0: - passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs - raise ValueError( - f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed." - ) - - # 8. Instantiate the pipeline - model = pipeline_class(**init_kwargs) - - # 9. Save where the model was instantiated from - model.register_to_config(_name_or_path=pretrained_model_name_or_path) - return model - - @property - def name_or_path(self) -> str: - return getattr(self.config, "_name_or_path", None) - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - [`~DiffusionPipeline.enable_sequential_cpu_offload`] the execution device can only be inferred from - Accelerate's module hooks. - """ - for name, model in self.components.items(): - if not isinstance(model, torch.nn.Module) or name in self._exclude_from_cpu_offload: - continue - - if not hasattr(model, "_hf_hook"): - return self.device - for module in model.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def enable_sequential_cpu_offload(self, gpu_id: int = 0, device: Union[torch.device, str] = "cuda"): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. 
Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): - from accelerate import cpu_offload - else: - raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") - - if device == "cuda": - device = torch.device(f"{device}:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - device_mod = getattr(torch, self.device.type, None) - if hasattr(device_mod, "empty_cache") and device_mod.is_available(): - device_mod.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - for name, model in self.components.items(): - if not isinstance(model, torch.nn.Module): - continue - - if name in self._exclude_from_cpu_offload: - model.to(device) - else: - # make sure to offload buffers if not all high level weights - # are of type nn.Module - offload_buffers = len(model._parameters) > 0 - cpu_offload(model, device, offload_buffers=offload_buffers) - - @classmethod - def download(cls, pretrained_model_name, **kwargs) -> Union[str, os.PathLike]: - r""" - Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. - - Parameters: - pretrained_model_name (`str` or `os.PathLike`, *optional*): - A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline - hosted on the Hub. - custom_pipeline (`str`, *optional*): - Can be either: - - - A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained - pipeline hosted on the Hub. The repository must contain a file called `pipeline.py` that defines - the custom pipeline. - - - A string, the *file name* of a community pipeline hosted on GitHub under - [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file - names must match the file name and not the pipeline script (`clip_guided_stable_diffusion` - instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the - current `main` branch of GitHub. - - - A path to a *directory* (`./my_pipeline_directory/`) containing a custom pipeline. The directory - must contain a file called `pipeline.py` that defines the custom pipeline. - - - - 🧪 This is an experimental feature and may change in the future. - - - - For more information on how to load and create custom pipelines, take a look at [How to contribute a - community pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline). - - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to resume downloading the model weights and configuration files. If set to `False`, any - incompletely downloaded files are deleted. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only (`bool`, *optional*, defaults to `False`): - Whether to only load local model weights and configuration files or not. 
If set to `True`, the model - won't be downloaded from the Hub. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from - `diffusers-cli login` (stored in `~/.huggingface`) is used. - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier - allowed by Git. - custom_revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id similar to - `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a - custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. - mirror (`str`, *optional*): - Mirror source to resolve accessibility issues if you're downloading a model in China. We do not - guarantee the timeliness or safety of the source, and you should refer to the mirror site for more - information. - variant (`str`, *optional*): - Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when - loading `from_flax`. - use_safetensors (`bool`, *optional*, defaults to `None`): - If set to `None`, the safetensors weights are downloaded if they're available **and** if the - safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors - weights. If set to `False`, safetensors weights are not loaded. - use_onnx (`bool`, *optional*, defaults to `False`): - If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights - will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is - `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending - with `.onnx` and `.pb`. - - Returns: - `os.PathLike`: - A path to the downloaded pipeline. - - - - To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with - `huggingface-cli login`. - - - - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - force_download = kwargs.pop("force_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - from_flax = kwargs.pop("from_flax", False) - custom_pipeline = kwargs.pop("custom_pipeline", None) - custom_revision = kwargs.pop("custom_revision", None) - variant = kwargs.pop("variant", None) - use_safetensors = kwargs.pop("use_safetensors", None) - use_onnx = kwargs.pop("use_onnx", None) - load_connected_pipeline = kwargs.pop("load_connected_pipeline", False) - - if use_safetensors and not is_safetensors_available(): - raise ValueError( - "`use_safetensors`=True but safetensors is not installed. 
Please install safetensors with `pip install safetensors" - ) - - allow_pickle = False - if use_safetensors is None: - use_safetensors = is_safetensors_available() - allow_pickle = True - - allow_patterns = None - ignore_patterns = None - - model_info_call_error: Optional[Exception] = None - if not local_files_only: - try: - info = model_info( - pretrained_model_name, - use_auth_token=use_auth_token, - revision=revision, - ) - except HTTPError as e: - logger.warn(f"Couldn't connect to the Hub: {e}.\nWill try to load from local cache.") - local_files_only = True - model_info_call_error = e # save error to reraise it if model is not cached locally - - if not local_files_only: - config_file = hf_hub_download( - pretrained_model_name, - cls.config_name, - cache_dir=cache_dir, - revision=revision, - proxies=proxies, - force_download=force_download, - resume_download=resume_download, - use_auth_token=use_auth_token, - ) - - config_dict = cls._dict_from_json_file(config_file) - - ignore_filenames = config_dict.pop("_ignore_files", []) - - # retrieve all folder_names that contain relevant files - folder_names = [k for k, v in config_dict.items() if isinstance(v, list)] - - filenames = {sibling.rfilename for sibling in info.siblings} - model_filenames, variant_filenames = variant_compatible_siblings(filenames, variant=variant) - - if len(variant_filenames) == 0 and variant is not None: - deprecation_message = ( - f"You are trying to load the model files of the `variant={variant}`, but no such modeling files are available." - f"The default model files: {model_filenames} will be loaded instead. Make sure to not load from `variant={variant}`" - "if such variant modeling files are not available. Doing so will lead to an error in v0.22.0 as defaulting to non-variant" - "modeling files is deprecated." - ) - deprecate("no variant default", "0.22.0", deprecation_message, standard_warn=False) - - # remove ignored filenames - model_filenames = set(model_filenames) - set(ignore_filenames) - variant_filenames = set(variant_filenames) - set(ignore_filenames) - - # if the whole pipeline is cached we don't have to ping the Hub - if revision in DEPRECATED_REVISION_ARGS and version.parse( - version.parse(__version__).base_version - ) >= version.parse("0.20.0"): - warn_deprecated_model_variant( - pretrained_model_name, use_auth_token, variant, revision, model_filenames - ) - - model_folder_names = {os.path.split(f)[0] for f in model_filenames if os.path.split(f)[0] in folder_names} - - # all filenames compatible with variant will be added - allow_patterns = list(model_filenames) - - # allow all patterns from non-model folders - # this enables downloading schedulers, tokenizers, ... 
- allow_patterns += [f"{k}/*" for k in folder_names if k not in model_folder_names] - # also allow downloading config.json files with the model - allow_patterns += [os.path.join(k, "config.json") for k in model_folder_names] - - allow_patterns += [ - SCHEDULER_CONFIG_NAME, - CONFIG_NAME, - cls.config_name, - CUSTOM_PIPELINE_FILE_NAME, - ] - - # retrieve passed components that should not be downloaded - pipeline_class = _get_pipeline_class( - cls, - config_dict, - load_connected_pipeline=load_connected_pipeline, - custom_pipeline=custom_pipeline, - cache_dir=cache_dir, - revision=custom_revision, - ) - expected_components, _ = cls._get_signature_keys(pipeline_class) - passed_components = [k for k in expected_components if k in kwargs] - - if ( - use_safetensors - and not allow_pickle - and not is_safetensors_compatible( - model_filenames, variant=variant, passed_components=passed_components - ) - ): - raise EnvironmentError( - f"Could not found the necessary `safetensors` weights in {model_filenames} (variant={variant})" - ) - if from_flax: - ignore_patterns = ["*.bin", "*.safetensors", "*.onnx", "*.pb"] - elif use_safetensors and is_safetensors_compatible( - model_filenames, variant=variant, passed_components=passed_components - ): - ignore_patterns = ["*.bin", "*.msgpack"] - - use_onnx = use_onnx if use_onnx is not None else pipeline_class._is_onnx - if not use_onnx: - ignore_patterns += ["*.onnx", "*.pb"] - - safetensors_variant_filenames = {f for f in variant_filenames if f.endswith(".safetensors")} - safetensors_model_filenames = {f for f in model_filenames if f.endswith(".safetensors")} - if ( - len(safetensors_variant_filenames) > 0 - and safetensors_model_filenames != safetensors_variant_filenames - ): - logger.warn( - f"\nA mixture of {variant} and non-{variant} filenames will be loaded.\nLoaded {variant} filenames:\n[{', '.join(safetensors_variant_filenames)}]\nLoaded non-{variant} filenames:\n[{', '.join(safetensors_model_filenames - safetensors_variant_filenames)}\nIf this behavior is not expected, please check your folder structure." - ) - else: - ignore_patterns = ["*.safetensors", "*.msgpack"] - - use_onnx = use_onnx if use_onnx is not None else pipeline_class._is_onnx - if not use_onnx: - ignore_patterns += ["*.onnx", "*.pb"] - - bin_variant_filenames = {f for f in variant_filenames if f.endswith(".bin")} - bin_model_filenames = {f for f in model_filenames if f.endswith(".bin")} - if len(bin_variant_filenames) > 0 and bin_model_filenames != bin_variant_filenames: - logger.warn( - f"\nA mixture of {variant} and non-{variant} filenames will be loaded.\nLoaded {variant} filenames:\n[{', '.join(bin_variant_filenames)}]\nLoaded non-{variant} filenames:\n[{', '.join(bin_model_filenames - bin_variant_filenames)}\nIf this behavior is not expected, please check your folder structure." 
- ) - - # Don't download any objects that are passed - allow_patterns = [ - p for p in allow_patterns if not (len(p.split("/")) == 2 and p.split("/")[0] in passed_components) - ] - - if pipeline_class._load_connected_pipes: - allow_patterns.append("README.md") - - # Don't download index files of forbidden patterns either - ignore_patterns = ignore_patterns + [f"{i}.index.*json" for i in ignore_patterns] - - re_ignore_pattern = [re.compile(fnmatch.translate(p)) for p in ignore_patterns] - re_allow_pattern = [re.compile(fnmatch.translate(p)) for p in allow_patterns] - - expected_files = [f for f in filenames if not any(p.match(f) for p in re_ignore_pattern)] - expected_files = [f for f in expected_files if any(p.match(f) for p in re_allow_pattern)] - - snapshot_folder = Path(config_file).parent - pipeline_is_cached = all((snapshot_folder / f).is_file() for f in expected_files) - - if pipeline_is_cached and not force_download: - # if the pipeline is cached, we can directly return it - # else call snapshot_download - return snapshot_folder - - user_agent = {"pipeline_class": cls.__name__} - if custom_pipeline is not None and not custom_pipeline.endswith(".py"): - user_agent["custom_pipeline"] = custom_pipeline - - # download all allow_patterns - ignore_patterns - try: - cached_folder = snapshot_download( - pretrained_model_name, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - user_agent=user_agent, - ) - - # retrieve pipeline class from local file - cls_name = cls.load_config(os.path.join(cached_folder, "model_index.json")).get("_class_name", None) - pipeline_class = getattr(diffusers, cls_name, None) - - if pipeline_class is not None and pipeline_class._load_connected_pipes: - modelcard = ModelCard.load(os.path.join(cached_folder, "README.md")) - connected_pipes = sum([getattr(modelcard.data, k, []) for k in CONNECTED_PIPES_KEYS], []) - for connected_pipe_repo_id in connected_pipes: - download_kwargs = { - "cache_dir": cache_dir, - "resume_download": resume_download, - "force_download": force_download, - "proxies": proxies, - "local_files_only": local_files_only, - "use_auth_token": use_auth_token, - "variant": variant, - "use_safetensors": use_safetensors, - } - DiffusionPipeline.download(connected_pipe_repo_id, **download_kwargs) - - return cached_folder - - except FileNotFoundError: - # Means we tried to load pipeline with `local_files_only=True` but the files have not been found in local cache. - # This can happen in two cases: - # 1. If the user passed `local_files_only=True` => we raise the error directly - # 2. If we forced `local_files_only=True` when `model_info` failed => we raise the initial error - if model_info_call_error is None: - # 1. user passed `local_files_only=True` - raise - else: - # 2. we forced `local_files_only=True` when `model_info` failed - raise EnvironmentError( - f"Cannot load model {pretrained_model_name}: model is not cached locally and an error occured" - " while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace" - " above." 
- ) from model_info_call_error - - @staticmethod - def _get_signature_keys(obj): - parameters = inspect.signature(obj.__init__).parameters - required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty} - optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty}) - expected_modules = set(required_parameters.keys()) - {"self"} - return expected_modules, optional_parameters - - @property - def components(self) -> Dict[str, Any]: - r""" - The `self.components` property can be useful to run different pipelines with the same weights and - configurations without reallocating additional memory. - - Returns (`dict`): - A dictionary containing all the modules needed to initialize the pipeline. - - Examples: - - ```py - >>> from diffusers import ( - ... StableDiffusionPipeline, - ... StableDiffusionImg2ImgPipeline, - ... StableDiffusionInpaintPipeline, - ... ) - - >>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") - >>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) - >>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) - ``` - """ - expected_modules, optional_parameters = self._get_signature_keys(self) - components = { - k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters - } - - if set(components.keys()) != expected_modules: - raise ValueError( - f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected" - f" {expected_modules} to be defined, but {components.keys()} are defined." - ) - - return components - - @staticmethod - def numpy_to_pil(images): - """ - Convert a NumPy image or a batch of images to a PIL image. - """ - return numpy_to_pil(images) - - def progress_bar(self, iterable=None, total=None): - if not hasattr(self, "_progress_bar_config"): - self._progress_bar_config = {} - elif not isinstance(self._progress_bar_config, dict): - raise ValueError( - f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}." - ) - - if iterable is not None: - return tqdm(iterable, **self._progress_bar_config) - elif total is not None: - return tqdm(total=total, **self._progress_bar_config) - else: - raise ValueError("Either `total` or `iterable` has to be defined.") - - def set_progress_bar_config(self, **kwargs): - self._progress_bar_config = kwargs - - def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): - r""" - Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this - option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed - up during training is not guaranteed. - - - - ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes - precedent. - - - - Parameters: - attention_op (`Callable`, *optional*): - Override the default `None` operator for use as `op` argument to the - [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention) - function of xFormers. 
- - Examples: - - ```py - >>> import torch - >>> from diffusers import DiffusionPipeline - >>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp - - >>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) - >>> pipe = pipe.to("cuda") - >>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) - >>> # Workaround for not accepting attention shape using VAE for Flash Attention - >>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) - ``` - """ - self.set_use_memory_efficient_attention_xformers(True, attention_op) - - def disable_xformers_memory_efficient_attention(self): - r""" - Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). - """ - self.set_use_memory_efficient_attention_xformers(False) - - def set_use_memory_efficient_attention_xformers( - self, valid: bool, attention_op: Optional[Callable] = None - ) -> None: - # Recursively walk through all the children. - # Any children which exposes the set_use_memory_efficient_attention_xformers method - # gets the message - def fn_recursive_set_mem_eff(module: torch.nn.Module): - if hasattr(module, "set_use_memory_efficient_attention_xformers"): - module.set_use_memory_efficient_attention_xformers(valid, attention_op) - - for child in module.children(): - fn_recursive_set_mem_eff(child) - - module_names, _ = self._get_signature_keys(self) - modules = [getattr(self, n, None) for n in module_names] - modules = [m for m in modules if isinstance(m, torch.nn.Module)] - - for module in modules: - fn_recursive_set_mem_eff(module) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor - in slices to compute attention in several steps. This is useful to save some memory in exchange for a small - speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is - provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` - must be a multiple of `slice_size`. - """ - self.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is - computed in one step. 
- """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def set_attention_slice(self, slice_size: Optional[int]): - module_names, _ = self._get_signature_keys(self) - modules = [getattr(self, n, None) for n in module_names] - modules = [m for m in modules if isinstance(m, torch.nn.Module) and hasattr(m, "set_attention_slice")] - - for module in modules: - module.set_attention_slice(slice_size) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/spectrogram_diffusion/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/spectrogram_diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/coco_detection.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/coco_detection.py deleted file mode 100644 index 09a75c404687223c71dcdf0abc7af827f2e498a6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/coco_detection.py +++ /dev/null @@ -1,48 +0,0 @@ -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 59508248490b3edbac1c46b4fcc7891f99655b9b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ann_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index 
7f6795e5ef0e4bf1d10ee7ed4f608bf93ac24216..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './psanet_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Artgor/digit-draw-detect/src/utils.py b/spaces/Artgor/digit-draw-detect/src/utils.py deleted file mode 100644 index db5636edd9a5dc324bd33acca3b46453dbb75072..0000000000000000000000000000000000000000 --- a/spaces/Artgor/digit-draw-detect/src/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -import datetime -import json -import os -import uuid -from typing import List - -import boto3 -import matplotlib -import matplotlib.patches as patches -import matplotlib.pyplot as plt -import numpy.typing as npt -import streamlit as st -import tomli - -AWS_ACCESS_KEY_ID = '' -AWS_SECRET_ACCESS_KEY = '' -try: - if st.secrets is not None: - AWS_ACCESS_KEY_ID = st.secrets['AWS_ACCESS_KEY_ID'] - AWS_SECRET_ACCESS_KEY = st.secrets['AWS_SECRET_ACCESS_KEY'] -except BaseException: - pass - -if os.path.exists('config.toml'): - with open('config.toml', 'rb') as f: - config = tomli.load(f) - AWS_ACCESS_KEY_ID = config['AWS_ACCESS_KEY_ID'] - AWS_SECRET_ACCESS_KEY = config['AWS_SECRET_ACCESS_KEY'] - -client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY) - - -def plot_img_with_rects( - img: npt.ArrayLike, boxes: List[List], threshold: float = 0.5, coef: int = 400 -) -> matplotlib.figure.Figure: - """ - Plot image with rectangles. - - Args: - img: image as a numpy array - boxes: the list of the bboxes - threshold: threshold for bbox probability - coef: coefficient to multiply images. Can be changed when the original image is a different size - - Returns: - image with bboxes - """ - fig, ax = plt.subplots(1, figsize=(4, 4)) - - # Display the image - ax.imshow(img) - - # Create a Rectangle patch - for _, rect in enumerate(b for b in boxes if b[1] > threshold): - label, _, xc, yc, w, h = rect - xc, yc, w, h = xc * coef, yc * coef, w * coef, h * coef - # the coordinates from center-based to left top corner - x = xc - w / 2 - y = yc - h / 2 - label = int(label) - label = label if label != 10 else 'censored' - label = label if label != 11 else 'other' - rect = [x, y, x + w, y + h] - - rect_ = patches.Rectangle( - (rect[0], rect[1]), rect[2] - rect[0], rect[3] - rect[1], linewidth=2, edgecolor='blue', facecolor='none' - ) - plt.text(rect[2], rect[1], f'{label}', color='blue') - # Add the patch to the Axes - ax.add_patch(rect_) - return fig - - -def save_object_to_s3(filename, s3_filename): - client.upload_file(filename, 'digitdrawdetect', s3_filename) - - -@st.cache_data(show_spinner=False) -def save_image(image: npt.ArrayLike, pred: List[List]) -> str: - """ - Save the image and upload the image with bboxes to s3. - - Args: - image: np.array with image - pred: bboxes - - Returns: - image name - - """ - # create a figure and save it - fig, ax = plt.subplots(1, figsize=(4, 4)) - ax.imshow(image) - file_name = str(datetime.datetime.today().date()) + str(uuid.uuid1()) - fig.savefig(f'{file_name}.png') - - # dump bboxes in a local file - with open(f'{file_name}.json', 'w') as j_f: - json.dump({f'{file_name}.png': pred}, j_f) - - # upload the image and the bboxes to s3. 
- save_object_to_s3(f'{file_name}.png', f'images/{file_name}.png') - save_object_to_s3(f'{file_name}.json', f'labels/{file_name}.json') - - return file_name diff --git a/spaces/Artrajz/vits-simple-api/utils/nlp.py b/spaces/Artrajz/vits-simple-api/utils/nlp.py deleted file mode 100644 index be03d8849e8badde58dffbfa063eff97fcbba34e..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/utils/nlp.py +++ /dev/null @@ -1,97 +0,0 @@ -import regex as re -import config -from .utils import check_is_none -from logger import logger - -# 读取配置选择语种识别库 -clf = getattr(config, "LANGUAGE_IDENTIFICATION_LIBRARY", "fastlid") - - -def clasify_lang(text, speaker_lang): - pattern = r'[\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\>\=\?\@\[\]\{\}\\\\\^\_\`' \ - r'\!?。"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」' \ - r'『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘\'\‛\“\”\„\‟…‧﹏.]+' - words = re.split(pattern, text) - - pre = "" - p = 0 - - if clf.upper() == "FASTLID" or clf.upper() == "FASTTEXT": - from fastlid import fastlid - detect = fastlid - if speaker_lang != None: fastlid.set_languages = speaker_lang - elif clf.upper() == "LANGID": - import langid - detect = langid.classify - if speaker_lang != None: langid.set_languages(speaker_lang) - else: - raise ValueError(f"Wrong LANGUAGE_IDENTIFICATION_LIBRARY in config.py") - - for word in words: - - if check_is_none(word): continue - - lang = detect(word)[0] - - if pre == "": - text = text[:p] + text[p:].replace(word, f'[{lang.upper()}]' + word, 1) - p += len(f'[{lang.upper()}]') - elif pre != lang: - text = text[:p] + text[p:].replace(word, f'[{pre.upper()}][{lang.upper()}]' + word, 1) - p += len(f'[{pre.upper()}][{lang.upper()}]') - pre = lang - p += text[p:].index(word) + len(word) - text += f"[{pre.upper()}]" - - return text - - -def cut(text, max): - pattern = r'[!(),—+\-.:;??。,、;:]+' - sentences = re.split(pattern, text) - discarded_chars = re.findall(pattern, text) - - sentence_list, count, p = [], 0, 0 - - # 按被分割的符号遍历 - for i, discarded_chars in enumerate(discarded_chars): - count += len(sentences[i]) + len(discarded_chars) - if count >= max: - sentence_list.append(text[p:p + count].strip()) - p += count - count = 0 - - # 加入最后剩余的文本 - if p < len(text): - sentence_list.append(text[p:]) - - return sentence_list - - -def sentence_split(text, max=50, lang="auto", speaker_lang=None): - # 如果该speaker只支持一种语言 - if speaker_lang is not None and len(speaker_lang) == 1: - if lang.upper() not in ["AUTO", "MIX"] and lang.lower() != speaker_lang[0]: - logger.debug( - f"lang \"{lang}\" is not in speaker_lang {speaker_lang},automatically set lang={speaker_lang[0]}") - lang = speaker_lang[0] - - sentence_list = [] - if lang.upper() != "MIX": - if max <= 0: - sentence_list.append( - clasify_lang(text, - speaker_lang) if lang.upper() == "AUTO" else f"[{lang.upper()}]{text}[{lang.upper()}]") - else: - for i in cut(text, max): - if check_is_none(i): continue - sentence_list.append( - clasify_lang(i, - speaker_lang) if lang.upper() == "AUTO" else f"[{lang.upper()}]{i}[{lang.upper()}]") - else: - sentence_list.append(text) - - for i in sentence_list: - logger.debug(i) - - return sentence_list diff --git a/spaces/Ashrafb/Tesseract-OCR/README.md b/spaces/Ashrafb/Tesseract-OCR/README.md deleted file mode 100644 index 6b953bf93788eae3ca019ec6eeac6d044165a424..0000000000000000000000000000000000000000 --- a/spaces/Ashrafb/Tesseract-OCR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tesseract OCR -emoji: 🐢 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.40.1 -app_file: 
app_blocks.py -pinned: false -duplicated_from: kneelesh48/Tesseract-OCR ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/enums.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/enums.py deleted file mode 100644 index 5e3e198233698f2b007489dd299cecb87d971067..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/enums.py +++ /dev/null @@ -1,85 +0,0 @@ -""" -All of the Enums that are used throughout the chardet package. - -:author: Dan Blanchard (dan.blanchard@gmail.com) -""" - -from enum import Enum, Flag - - -class InputState: - """ - This enum represents the different states a universal detector can be in. - """ - - PURE_ASCII = 0 - ESC_ASCII = 1 - HIGH_BYTE = 2 - - -class LanguageFilter(Flag): - """ - This enum represents the different language filters we can apply to a - ``UniversalDetector``. - """ - - NONE = 0x00 - CHINESE_SIMPLIFIED = 0x01 - CHINESE_TRADITIONAL = 0x02 - JAPANESE = 0x04 - KOREAN = 0x08 - NON_CJK = 0x10 - ALL = 0x1F - CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL - CJK = CHINESE | JAPANESE | KOREAN - - -class ProbingState(Enum): - """ - This enum represents the different states a prober can be in. - """ - - DETECTING = 0 - FOUND_IT = 1 - NOT_ME = 2 - - -class MachineState: - """ - This enum represents the different states a state machine can be in. - """ - - START = 0 - ERROR = 1 - ITS_ME = 2 - - -class SequenceLikelihood: - """ - This enum represents the likelihood of a character following the previous one. - """ - - NEGATIVE = 0 - UNLIKELY = 1 - LIKELY = 2 - POSITIVE = 3 - - @classmethod - def get_num_categories(cls) -> int: - """:returns: The number of likelihood categories in the enum.""" - return 4 - - -class CharacterCategory: - """ - This enum represents the different categories language models for - ``SingleByteCharsetProber`` put characters into. - - Anything less than CONTROL is considered a letter. - """ - - UNDEFINED = 255 - LINE_BREAK = 254 - SYMBOL = 253 - DIGIT = 252 - CONTROL = 251 diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py deleted file mode 100644 index 6cb9cc7b3bc751fbb5a54ba06eaaf953bf14ed8d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py +++ /dev/null @@ -1,57 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# Proofpoint, Inc. -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. 
-# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .big5prober import Big5Prober -from .charsetgroupprober import CharSetGroupProber -from .cp949prober import CP949Prober -from .enums import LanguageFilter -from .eucjpprober import EUCJPProber -from .euckrprober import EUCKRProber -from .euctwprober import EUCTWProber -from .gb2312prober import GB2312Prober -from .johabprober import JOHABProber -from .sjisprober import SJISProber -from .utf8prober import UTF8Prober - - -class MBCSGroupProber(CharSetGroupProber): - def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None: - super().__init__(lang_filter=lang_filter) - self.probers = [ - UTF8Prober(), - SJISProber(), - EUCJPProber(), - GB2312Prober(), - EUCKRProber(), - CP949Prober(), - Big5Prober(), - EUCTWProber(), - JOHABProber(), - ] - self.reset() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py deleted file mode 100644 index 7686fe85a7cc94188da76bfb1c10ad2a10821256..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -from .distro import ( - NORMALIZED_DISTRO_ID, - NORMALIZED_LSB_ID, - NORMALIZED_OS_ID, - LinuxDistribution, - __version__, - build_number, - codename, - distro_release_attr, - distro_release_info, - id, - info, - like, - linux_distribution, - lsb_release_attr, - lsb_release_info, - major_version, - minor_version, - name, - os_release_attr, - os_release_info, - uname_attr, - uname_info, - version, - version_parts, -) - -__all__ = [ - "NORMALIZED_DISTRO_ID", - "NORMALIZED_LSB_ID", - "NORMALIZED_OS_ID", - "LinuxDistribution", - "build_number", - "codename", - "distro_release_attr", - "distro_release_info", - "id", - "info", - "like", - "linux_distribution", - "lsb_release_attr", - "lsb_release_info", - "major_version", - "minor_version", - "name", - "os_release_attr", - "os_release_info", - "uname_attr", - "uname_info", - "version", - "version_parts", -] - -__version__ = __version__ diff --git a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/README.md b/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/README.md deleted file mode 100644 index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/README.md +++ /dev/null @@ -1,6 +0,0 @@ -# External Colab Code -Code used to make Google Colab work correctly -- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/ - -Thanks to https://github.com/kalomaze/externalcolabcode - diff --git a/spaces/Benson/text-generation/Examples/60 Segundos Reatomized Apk Descargar Gratis Android.md b/spaces/Benson/text-generation/Examples/60 Segundos Reatomized Apk Descargar Gratis Android.md deleted file mode 100644 index 4a31d8f6197d24dfde020226f4a31cbf19e8ff2c..0000000000000000000000000000000000000000 --- 
a/spaces/Benson/text-generation/Examples/60 Segundos Reatomized Apk Descargar Gratis Android.md +++ /dev/null @@ -1,75 +0,0 @@ -
-

60 Seconds Reatomized APK: How to Download and Play It on Android

-

If you are looking for a fun, challenging survival game that will test your skills and decision-making, you may want to check out 60 Seconds Reatomized. It is a remastered version of the original 60 Seconds!, released in 2015. In the game you scavenge supplies, rescue your family, and try to stay alive in your fallout shelter after a nuclear strike. The remaster adds improved graphics, new content, and more ways to escape the wasteland. But how can you play it on an Android device? This article explains how to download and install the 60 Seconds Reatomized APK file and how to play the game on your phone or tablet.

-

60 seconds reatomized apk free download android


Download Zip ———>>> https://bltlly.com/2v6J0E



-

What Is 60 Seconds Reatomized?

-

60 Seconds Reatomized is a post-apocalyptic dark-comedy game developed by Robot Gentleman. The game is split into two phases: scavenging and survival. In the scavenging phase you have 60 seconds to grab as many items and family members as you can from your house before the bombs fall. You have to be quick and smart, because everything works against you: the clock, your furniture, and a randomly generated house layout. In the survival phase you manage your resources, deal with unexpected events, and make hard choices in your fallout shelter. You can also venture into the wasteland to look for more supplies or chances to escape. The game has multiple endings depending on your actions and your luck.

-

60 Seconds Reatomized has several features that set it apart from the original game, including:

-
    -
• A new game mode, Survival Challenges: short stories that put your survival skills to the test.
  • -
• New opportunities to escape the wasteland, told through a storyline that spans multiple playthroughs.
  • -
• A new relationship system: more stories and crazy interactions between the members of the McDoodle family.
  • - -
• New achievements: challenge yourself and show off your skills.
  • -
-

How to Download the 60 Seconds Reatomized APK for Android

-

Unfortunately, 60 Seconds Reatomized is not available on the Google Play Store, but you can still download and install the APK file from other sources. An APK file is a package that contains all the files needed to run an Android application. Be careful about where you download APK files from, though, as some sites may contain malware or viruses. Only download APK files from trusted sources that monitor the files they host; one quick way to verify a download is shown in the sketch below.
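A practical way to check that a downloaded APK has not been tampered with is to compare its SHA-256 hash against the checksum published by the download site. Below is a minimal sketch of that check in Python; the file name and the expected hash are placeholders for illustration, not real values for this game.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APKs never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders for illustration only.
apk_path = Path("60-seconds-reatomized.apk")
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(apk_path)
print("SHA-256:", actual)
print("Matches published checksum:", actual == expected)
```

If the site only publishes an MD5 or SHA-1 value, the same sketch works by swapping `hashlib.sha256()` for the matching constructor.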

-

One of the most popular sites for downloading APK files is APK Mirror. The site hosts a large number of Android apps that can be installed individually or as updates, and it verifies the files it hosts to make sure they are safe and authentic. Here are the steps to download the 60 Seconds Reatomized APK from APK Mirror:

-
    -
  1. Open APK Mirror in your Android device's browser.
  2. Search for "60 Seconds Reatomized" in the search bar.
  3. Select the latest version of the app from the list of results.
  4. Scroll down and tap the "Download APK" button.
  5. Accept any pop-ups or permission prompts that appear.
  6. Wait for the download to finish.
-

Before you can install the APK file, you need to enable unknown sources on your device. This allows you to install apps from sources other than the Google Play Store. To do this, follow these steps:

-

-
    -
  1. Go to your device's settings and tap "Security".
  2. Find the option labelled "Unknown sources" and toggle it on.
  3. Confirm any warning that appears.
-

You are now ready to install the APK file. To do this, follow these steps:

-
    -
  1. Open your device's file manager and locate the downloaded APK file.
  2. Tap the file and select "Install".
  3. Wait for the installation to finish (or install from a computer with adb, as in the sketch below).
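If the phone is connected to a computer with USB debugging enabled, the same installation can be done from the desktop with adb. The sketch below simply wraps the standard `adb install` command with Python's subprocess module; the APK file name is a placeholder, and adb must already be installed and on your PATH.

```python
import subprocess

apk_path = "60-seconds-reatomized.apk"  # placeholder file name

# `adb install -r` installs the package, replacing an existing copy if present.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Install failed:", result.stderr)
```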
-

Congratulations! You have successfully downloaded and installed the 60 Seconds Reatomized APK on your Android device. You can now enjoy the game and its features.

-

How to Play 60 Seconds Reatomized on Android

-

60 Seconds Reatomized is a game that will challenge your survival skills and decision-making. It has four different modes: Atomic Drill, Apocalypse, Scavenge, and Survival. Here is a brief overview of each mode and some tips on how to play them:

-

Atomic Drill

-

This is the game's tutorial mode. It teaches you the basics of the scavenging phase, such as how to move, grab items, and drop them off in the shelter. You can also practice your skills in different scenarios and house layouts. This mode is recommended for beginners who want to learn the ropes before jumping into the real action.

-

Apocalypse

-

This is the game's main mode. It combines the scavenging and survival phases: you scavenge supplies and family members in 60 seconds, then manage your fallout shelter for as long as possible. You can choose between three difficulty levels: Little Boy, Fat Man, and Tsar Bomba. The higher the difficulty, the harder it is to find useful items, handle events, and escape the wasteland. This mode is recommended for players who want the full story and challenge of the game.

-

Scavenge

-

This mode focuses only on the scavenging phase. You can choose between different scenarios and house layouts and try to grab as many items and family members as possible in 60 seconds. You can also build a custom scenario by choosing the items, the family members, and the house layout. This mode is recommended for players who want to practice their scavenging skills or just have fun with different combinations.

-

Survival

- -

Tips for Playing 60 Seconds Reatomized on Android

-

Here are some general tips that will help you play 60 Seconds Reatomized on Android:

-
    -
• Plan ahead: before you start scavenging, look at your house layout and decide which items and family members you want to grab. Prioritise food, water, the radio, the first-aid kit, the gas mask, the map, the axe, the rifle, the suitcase, and family members.
  • -
• Be quick: you only have 60 seconds to scavenge, so do not waste time on unnecessary actions or items. Use both hands to grab items faster and drop them near the shelter entrance for easy access.
  • -
• Be smart: you have to make tough choices in both phases of the game. Think carefully about which items you need, which events you want to respond to, which risks you want to take, and which consequences you are willing to face.
  • -
• Be flexible: the game is unpredictable and random. You never know what will happen next or which items you will find, so be prepared to adapt to different situations and outcomes.
  • -
• Have fun: the game is meant to be a dark comedy that pokes fun at the absurdity of nuclear war. Do not take it too seriously or get frustrated when things go wrong. Enjoy the humour, the references, and the surprises the game has to offer.
  • -
-

Conclusion

- -

Frequently Asked Questions

-

Here are some of the most frequently asked questions about 60 Seconds Reatomized:

-

Is 60 Seconds Reatomized free?

-

No, 60 Seconds Reatomized is not a free game. It is a paid game that costs $3.99 on Steam and $1.99 on APK Mirror. However, you can download the APK file for free from APK Mirror if you want to try the game on your Android device.

-

Is 60 Seconds Reatomized safe?

-

Yes, 60 Seconds Reatomized is safe to play on your Android device. The APK file from APK Mirror is verified and authentic and does not contain malware or viruses. However, you should always be careful when downloading APK files from other sources, as they may be harmful or fake.

-

Is 60 Seconds Reatomized multiplayer?

-

No, 60 Seconds Reatomized is not a multiplayer game. It is a single-player game that can only be played offline. However, you can share your achievements and screenshots with your friends online.

-

Is 60 Seconds Reatomized compatible with my device?

-

60 Seconds Reatomized requires Android 4.1 or higher, at least 1 GB of RAM, and 500 MB of storage space. You can check your device's specifications in the settings menu, or over adb as in the sketch below.
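If the phone is plugged into a computer with USB debugging enabled, the Android version and available memory can also be read over adb instead of digging through the settings menu. This is only a convenience sketch: it assumes adb is installed, a single device is connected, and a reasonably recent Android build that ships the toybox shell utilities used here.

```python
import subprocess

def adb_shell(command: str) -> str:
    """Run a shell command on the connected device and return its output."""
    out = subprocess.run(
        ["adb", "shell", command], capture_output=True, text=True, check=True
    )
    return out.stdout.strip()

android_version = adb_shell("getprop ro.build.version.release")
total_memory = adb_shell("grep MemTotal /proc/meminfo")
shared_storage = adb_shell("df -h /sdcard")  # reports the shared storage partition

print("Android version:", android_version)
print(total_memory)
print(shared_storage)
```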

-

How can I contact the developers of 60 Seconds Reatomized?

-

If you have questions, feedback, or problems with the game, you can contact the developers of 60 Seconds Reatomized by emailing support@robotgentleman.com. You can also visit their website at Robot Gentleman or follow them on social media platforms such as Facebook, Twitter, Instagram, and YouTube.

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar El Juego De Ftbol Apk.md b/spaces/Benson/text-generation/Examples/Descargar El Juego De Ftbol Apk.md deleted file mode 100644 index eb2c8a457916d91c14e0f89b6cf38461df7b04df..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar El Juego De Ftbol Apk.md +++ /dev/null @@ -1,79 +0,0 @@ - -

Vive le Football: A Free Mobile Football Management Game for Android

-

If you are a football (or soccer) fan, you may be interested in a new mobile game that lets you manage your own club and compete with other players online. The game is called Vive le Football, and it is developed by NetEase, the Chinese company behind popular titles such as Rules of Survival and Identity V. In this article we will tell you what Vive le Football is, how to download and install it on your Android device, why you should play it, and some tips and tricks to help you succeed in the game.

-

Download the football game APK


Download File ⚙⚙⚙ https://bltlly.com/2v6KNF



-

What Is Vive le Football?

-

Vive le Football is a free mobile football management game released in June 2021. It lets you create your own club, customise your players, stadium, logo, and kits, and compete with other clubs in various modes. You can also take part in tournaments, leagues, cups, and friendly matches with players from all over the world. The game features realistic graphics, physics, and animations, as well as a dynamic weather system that affects gameplay. You can also chat with other players and join clubs to cooperate and socialise.

-

Features of Vive le Football

-

Some of the main features of Vive le Football are:

-
    -
• You can choose from more than 100 licensed clubs from different countries and regions, or create your own club from scratch.
  • -
• You can customise your players' appearance, skills, attributes, positions, and tactics.
  • -
• You can upgrade your stadium, facilities, staff, and equipment to improve your club's performance and income.
  • -
• You can play in various modes, such as career mode, where you start from the bottom and work your way up; challenge mode, where you face different scenarios and objectives; and online mode, where you compete with other players in real-time matches.
  • - -
• You can enjoy realistic graphics, physics, and animations that make the game more immersive and fun, and experience different weather conditions such as rain, snow, fog, and wind.
  • -
-

How to Download and Install the Vive le Football APK on Android

-

If you want to play Vive le Football on your Android device, you will need to download and install the game's APK file. An APK file is a package file that contains the installation files of an Android application. You can download the Vive le Football APK from various sources online, such as Filehippo.com. Before installing the APK file, however, you will need to enable the option to install apps from unknown sources on your device. To do this, follow these steps:

-

-
    -
  1. Go to Settings > Security > Unknown sources and toggle it on.
  2. Go to the location where you downloaded the Vive le Football APK file and tap it.
  3. Follow the on-screen instructions to install the app.
  4. Once the installation is complete, you can launch the app from the app drawer or the home screen.
-

Note: installing apps from unknown sources can pose risks to your device's security and performance. Make sure you only download APK files from trusted sources and scan them for viruses or malware before installing them; the sketch below shows one quick structural check you can run on a downloaded APK.
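Since an APK is just a ZIP archive with a specific layout, a quick sanity check before installing is to confirm that the file opens as a valid ZIP and contains the entries every real Android package has, such as AndroidManifest.xml and at least one classes.dex. The sketch below uses only the Python standard library; the file name is a placeholder, and passing this check does not prove the APK is safe, only that it is structurally intact.

```python
import zipfile

apk_path = "vive-le-football.apk"  # placeholder file name

with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    bad_entry = apk.testzip()  # returns the first corrupt entry, or None

    print("All entries readable:", bad_entry is None)
    print("Has AndroidManifest.xml:", "AndroidManifest.xml" in names)
    print("Has compiled code (classes.dex):", any(n.endswith(".dex") for n in names))
```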

-

Why Play Vive le Football?

-

Vive le Football will appeal to football fans who want to experience the thrill of managing their own club and playing against other players online. The game offers many features and options that make it fun and engaging. Here are some reasons why you should play Vive le Football:

-

Pros and Cons of Vive le Football

-

Like any other game, Vive le Football has its pros and cons. Here are some of them:

| Pros | Cons |
| --- | --- |
| You can create and customise your own club and players. | |
| You can play in various modes and compete with other players online. | You may run into some bugs and glitches in the game. |
| You can enjoy realistic graphics, physics, and animations. | You may need a high-end device to run the game smoothly. |
| You can chat with other players and join clubs to cooperate and socialise. | You may encounter some toxic or rude players in chat. |

Tips and Tricks for Playing Vive le Football

-

If you want to improve your skills and performance in Vive le Football, here are some tips and tricks you can use:

-
    -
• Choose a club that suits your play style and preferences. You can pick from more than 100 licensed clubs or create your own from scratch. Each club has its own strengths and weaknesses, so choose wisely.
  • -
• Upgrade your stadium, facilities, staff, and equipment regularly. This will help boost your club's performance and income, and you can unlock new features and items by raising your club level.
  • -
• Train your players and adjust their skills, attributes, positions, and tactics. You can customise your players to fit your strategy, and training improves their abilities and potential.
  • -
• Play in different modes and challenges to earn rewards and experience. You can play career mode, challenge mode, online mode, tournaments, leagues, cups, and friendly matches; each mode has its own objectives and rewards you can use to improve your club.
  • -
• Control your players on the pitch using the touch-screen buttons or a virtual joystick. You can also switch between different camera angles and zoom levels for a better view of the action, and use gestures to perform actions such as passing, shooting, tackling, and dribbling.
  • - -
-

Conclusion

-

Vive le Football is a free mobile football management game that lets you create your own club and compete with other players online. It features realistic graphics, physics, and animations, as well as a dynamic weather system that affects gameplay, and you can chat with other players and join clubs to cooperate and socialise. If you want to play it on your Android device, you will need to download and install the game's APK file from a trusted source, and you can use the tips and tricks above to improve your skills and performance. Vive le Football will appeal to football fans who want to experience the thrill of managing their own club and playing against other players online.

-

Frequently Asked Questions

-

Here are some frequently asked questions about Vive le Football:

-
    -
  1. Is Vive le Football free to play?

    Yes, Vive le Football is free to play. However, some features and items may require real money to unlock or purchase.

    -
  2. Is Vive le Football available for iOS devices?

    No, Vive le Football is currently only available for Android devices. There is no official word on whether the game will be released for iOS in the future.

    -
  3. How can I contact the developers of Vive le Football?

    You can contact the developers of Vive le Football by emailing vlf@service.netease.com or visiting their official website. You can also follow them on Facebook or Twitter for updates and news about the game.

    -
  4. How can I report a bug or a problem in Vive le Football?

    You can report a bug or a problem in Vive le Football by tapping the settings icon in the top-right corner of the screen, then tapping Feedback, then tapping Bug Report. You can also send an email to vlf@service.netease.com with a screenshot or a video of the bug or problem.

    - -

    You can play with your friends in Vive le Football by adding them as friends in the game. To do this, tap the friends icon in the bottom-left corner of the screen, tap Add Friend, and enter their username or ID. You can also invite them to join your club or play a friendly match with them, chat with them in the game, or send them gifts.

    -
    -
    \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/lib/utils/concatUint8Arrays.ts b/spaces/BetterAPI/BetterChat/src/lib/utils/concatUint8Arrays.ts deleted file mode 100644 index e53396eca7e3dee20a543fb6ac28ecf48c7e3965..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/utils/concatUint8Arrays.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { sum } from "./sum"; - -export function concatUint8Arrays(arrays: Uint8Array[]): Uint8Array { - const totalLength = sum(arrays.map((a) => a.length)); - const result = new Uint8Array(totalLength); - let offset = 0; - for (const array of arrays) { - result.set(array, offset); - offset += array.length; - } - return result; -} diff --git a/spaces/BiTransSciencia/www/README.md b/spaces/BiTransSciencia/www/README.md deleted file mode 100644 index 1ae2d53bd64f281ac01566489ff5a37c8f34bbff..0000000000000000000000000000000000000000 --- a/spaces/BiTransSciencia/www/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Www -emoji: 🐨 -colorFrom: blue -colorTo: green -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/throttling.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/throttling.py deleted file mode 100644 index 34ab417299767553718f9070dcf8fecad4dbe551..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/throttling.py +++ /dev/null @@ -1,55 +0,0 @@ -from collections import namedtuple - -CubicParams = namedtuple('CubicParams', ['w_max', 'k', 'last_fail']) - - -class CubicCalculator: - _SCALE_CONSTANT = 0.4 - _BETA = 0.7 - - def __init__( - self, - starting_max_rate, - start_time, - scale_constant=_SCALE_CONSTANT, - beta=_BETA, - ): - self._w_max = starting_max_rate - self._scale_constant = scale_constant - self._beta = beta - self._k = self._calculate_zero_point() - self._last_fail = start_time - - def _calculate_zero_point(self): - scaled_value = (self._w_max * (1 - self._beta)) / self._scale_constant - k = scaled_value ** (1 / 3.0) - return k - - def success_received(self, timestamp): - dt = timestamp - self._last_fail - new_rate = self._scale_constant * (dt - self._k) ** 3 + self._w_max - return new_rate - - def error_received(self, current_rate, timestamp): - # Consider not having this be the current measured rate. - - # We have a new max rate, which is the current rate we were sending - # at when we received an error response. - self._w_max = current_rate - self._k = self._calculate_zero_point() - self._last_fail = timestamp - return current_rate * self._beta - - def get_params_snapshot(self): - """Return a read-only object of the current cubic parameters. - - These parameters are intended to be used for debug/troubleshooting - purposes. These object is a read-only snapshot and cannot be used - to modify the behavior of the CUBIC calculations. - - New parameters may be added to this object in the future. 
- - """ - return CubicParams( - w_max=self._w_max, k=self._k, last_fail=self._last_fail - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_musllinux.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_musllinux.py deleted file mode 100644 index 8ac3059ba3c246b9a5a6fb8d14936bb07777191e..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/_musllinux.py +++ /dev/null @@ -1,136 +0,0 @@ -"""PEP 656 support. - -This module implements logic to detect if the currently running Python is -linked against musl, and what musl version is used. -""" - -import contextlib -import functools -import operator -import os -import re -import struct -import subprocess -import sys -from typing import IO, Iterator, NamedTuple, Optional, Tuple - - -def _read_unpacked(f: IO[bytes], fmt: str) -> Tuple[int, ...]: - return struct.unpack(fmt, f.read(struct.calcsize(fmt))) - - -def _parse_ld_musl_from_elf(f: IO[bytes]) -> Optional[str]: - """Detect musl libc location by parsing the Python executable. - - Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca - ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html - """ - f.seek(0) - try: - ident = _read_unpacked(f, "16B") - except struct.error: - return None - if ident[:4] != tuple(b"\x7fELF"): # Invalid magic, not ELF. - return None - f.seek(struct.calcsize("HHI"), 1) # Skip file type, machine, and version. - - try: - # e_fmt: Format for program header. - # p_fmt: Format for section header. - # p_idx: Indexes to find p_type, p_offset, and p_filesz. - e_fmt, p_fmt, p_idx = { - 1: ("IIIIHHH", "IIIIIIII", (0, 1, 4)), # 32-bit. - 2: ("QQQIHHH", "IIQQQQQQ", (0, 2, 5)), # 64-bit. - }[ident[4]] - except KeyError: - return None - else: - p_get = operator.itemgetter(*p_idx) - - # Find the interpreter section and return its content. - try: - _, e_phoff, _, _, _, e_phentsize, e_phnum = _read_unpacked(f, e_fmt) - except struct.error: - return None - for i in range(e_phnum + 1): - f.seek(e_phoff + e_phentsize * i) - try: - p_type, p_offset, p_filesz = p_get(_read_unpacked(f, p_fmt)) - except struct.error: - return None - if p_type != 3: # Not PT_INTERP. - continue - f.seek(p_offset) - interpreter = os.fsdecode(f.read(p_filesz)).strip("\0") - if "musl" not in interpreter: - return None - return interpreter - return None - - -class _MuslVersion(NamedTuple): - major: int - minor: int - - -def _parse_musl_version(output: str) -> Optional[_MuslVersion]: - lines = [n for n in (n.strip() for n in output.splitlines()) if n] - if len(lines) < 2 or lines[0][:4] != "musl": - return None - m = re.match(r"Version (\d+)\.(\d+)", lines[1]) - if not m: - return None - return _MuslVersion(major=int(m.group(1)), minor=int(m.group(2))) - - -@functools.lru_cache() -def _get_musl_version(executable: str) -> Optional[_MuslVersion]: - """Detect currently-running musl runtime version. - - This is done by checking the specified executable's dynamic linking - information, and invoking the loader to parse its output for a version - string. 
If the loader is musl, the output would be something like:: - - musl libc (x86_64) - Version 1.2.2 - Dynamic Program Loader - """ - with contextlib.ExitStack() as stack: - try: - f = stack.enter_context(open(executable, "rb")) - except OSError: - return None - ld = _parse_ld_musl_from_elf(f) - if not ld: - return None - proc = subprocess.run([ld], stderr=subprocess.PIPE, universal_newlines=True) - return _parse_musl_version(proc.stderr) - - -def platform_tags(arch: str) -> Iterator[str]: - """Generate musllinux tags compatible to the current platform. - - :param arch: Should be the part of platform tag after the ``linux_`` - prefix, e.g. ``x86_64``. The ``linux_`` prefix is assumed as a - prerequisite for the current platform to be musllinux-compatible. - - :returns: An iterator of compatible musllinux tags. - """ - sys_musl = _get_musl_version(sys.executable) - if sys_musl is None: # Python not dynamically linked against musl. - return - for minor in range(sys_musl.minor, -1, -1): - yield f"musllinux_{sys_musl.major}_{minor}_{arch}" - - -if __name__ == "__main__": # pragma: no cover - import sysconfig - - plat = sysconfig.get_platform() - assert plat.startswith("linux-"), "not linux" - - print("plat:", plat) - print("musl:", _get_musl_version(sys.executable)) - print("tags:", end=" ") - for t in platform_tags(re.sub(r"[.-]", "_", plat.split("-", 1)[-1])): - print(t, end="\n ") diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/helpers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/helpers.py deleted file mode 100644 index 9588b3b780159a2a2d23c7f84a4404ec350e2b65..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/helpers.py +++ /dev/null @@ -1,1088 +0,0 @@ -# helpers.py -import html.entities -import re -import typing - -from . import __diag__ -from .core import * -from .util import _bslash, _flatten, _escape_regex_range_chars - - -# -# global helpers -# -def delimited_list( - expr: Union[str, ParserElement], - delim: Union[str, ParserElement] = ",", - combine: bool = False, - min: typing.Optional[int] = None, - max: typing.Optional[int] = None, - *, - allow_trailing_delim: bool = False, -) -> ParserElement: - """Helper to define a delimited list of expressions - the delimiter - defaults to ','. By default, the list elements and delimiters can - have intervening whitespace, and comments, but this can be - overridden by passing ``combine=True`` in the constructor. If - ``combine`` is set to ``True``, the matching tokens are - returned as a single token string, with the delimiters included; - otherwise, the matching tokens are returned as a list of tokens, - with the delimiters suppressed. - - If ``allow_trailing_delim`` is set to True, then the list may end with - a delimiter. 
- - Example:: - - delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc'] - delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - if isinstance(expr, str_type): - expr = ParserElement._literalStringClass(expr) - - dlName = "{expr} [{delim} {expr}]...{end}".format( - expr=str(expr.copy().streamline()), - delim=str(delim), - end=" [{}]".format(str(delim)) if allow_trailing_delim else "", - ) - - if not combine: - delim = Suppress(delim) - - if min is not None: - if min < 1: - raise ValueError("min must be greater than 0") - min -= 1 - if max is not None: - if min is not None and max <= min: - raise ValueError("max must be greater than, or equal to min") - max -= 1 - delimited_list_expr = expr + (delim + expr)[min, max] - - if allow_trailing_delim: - delimited_list_expr += Opt(delim) - - if combine: - return Combine(delimited_list_expr).set_name(dlName) - else: - return delimited_list_expr.set_name(dlName) - - -def counted_array( - expr: ParserElement, - int_expr: typing.Optional[ParserElement] = None, - *, - intExpr: typing.Optional[ParserElement] = None, -) -> ParserElement: - """Helper to define a counted list of expressions. - - This helper defines a pattern of the form:: - - integer expr expr expr... - - where the leading integer tells how many expr expressions follow. - The matched tokens returns the array of expr tokens as a list - the - leading count token is suppressed. - - If ``int_expr`` is specified, it should be a pyparsing expression - that produces an integer value. - - Example:: - - counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2)) - counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd'] - - # if other fields must be parsed after the count but before the - # list items, give the fields results names and they will - # be preserved in the returned ParseResults: - count_with_metadata = integer + Word(alphas)("type") - typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items") - result = typed_array.parse_string("3 bool True True False") - print(result.dump()) - - # prints - # ['True', 'True', 'False'] - # - items: ['True', 'True', 'False'] - # - type: 'bool' - """ - intExpr = intExpr or int_expr - array_expr = Forward() - - def count_field_parse_action(s, l, t): - nonlocal array_expr - n = t[0] - array_expr <<= (expr * n) if n else Empty() - # clear list contents, but keep any named results - del t[:] - - if intExpr is None: - intExpr = Word(nums).set_parse_action(lambda t: int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.set_name("arrayLen") - intExpr.add_parse_action(count_field_parse_action, call_during_try=True) - return (intExpr + array_expr).set_name("(len) " + str(expr) + "...") - - -def match_previous_literal(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_literal(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches a previous literal, will also match the leading - ``"1:1"`` in ``"1:10"``. 
If this is not desired, use - :class:`match_previous_expr`. Do *not* use with packrat parsing - enabled. - """ - rep = Forward() - - def copy_token_to_repeater(s, l, t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.as_list()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def match_previous_expr(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_expr(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches by expressions, will *not* match the leading ``"1:1"`` - in ``"1:10"``; the expressions are evaluated first, and then - compared, so ``"1"`` is compared with ``"10"``. Do *not* use - with packrat parsing enabled. - """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - - def copy_token_to_repeater(s, l, t): - matchTokens = _flatten(t.as_list()) - - def must_match_these_tokens(s, l, t): - theseTokens = _flatten(t.as_list()) - if theseTokens != matchTokens: - raise ParseException( - s, l, "Expected {}, found{}".format(matchTokens, theseTokens) - ) - - rep.set_parse_action(must_match_these_tokens, callDuringTry=True) - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def one_of( - strs: Union[typing.Iterable[str], str], - caseless: bool = False, - use_regex: bool = True, - as_keyword: bool = False, - *, - useRegex: bool = True, - asKeyword: bool = False, -) -> ParserElement: - """Helper to quickly define a set of alternative :class:`Literal` s, - and makes sure to do longest-first testing when there is a conflict, - regardless of the input order, but returns - a :class:`MatchFirst` for best performance. 
- - Parameters: - - - ``strs`` - a string of space-delimited literals, or a collection of - string literals - - ``caseless`` - treat all literals as caseless - (default= ``False``) - - ``use_regex`` - as an optimization, will - generate a :class:`Regex` object; otherwise, will generate - a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if - creating a :class:`Regex` raises an exception) - (default= ``True``) - - ``as_keyword`` - enforce :class:`Keyword`-style matching on the - generated expressions - (default= ``False``) - - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility, - but will be removed in a future release - - Example:: - - comp_oper = one_of("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12")) - - prints:: - - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - asKeyword = asKeyword or as_keyword - useRegex = useRegex and use_regex - - if ( - isinstance(caseless, str_type) - and __diag__.warn_on_multiple_string_args_to_oneof - ): - warnings.warn( - "More than one string argument passed to one_of, pass" - " choices as a list or space-delimited string", - stacklevel=2, - ) - - if caseless: - isequal = lambda a, b: a.upper() == b.upper() - masks = lambda a, b: b.upper().startswith(a.upper()) - parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral - else: - isequal = lambda a, b: a == b - masks = lambda a, b: b.startswith(a) - parseElementClass = Keyword if asKeyword else Literal - - symbols: List[str] = [] - if isinstance(strs, str_type): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - raise TypeError("Invalid argument to one_of, expected string or iterable") - if not symbols: - return NoMatch() - - # reorder given symbols to take care to avoid masking longer choices with shorter ones - # (but only if the given symbols are not just single characters) - if any(len(sym) > 1 for sym in symbols): - i = 0 - while i < len(symbols) - 1: - cur = symbols[i] - for j, other in enumerate(symbols[i + 1 :]): - if isequal(other, cur): - del symbols[i + j + 1] - break - elif masks(cur, other): - del symbols[i + j + 1] - symbols.insert(i, other) - break - else: - i += 1 - - if useRegex: - re_flags: int = re.IGNORECASE if caseless else 0 - - try: - if all(len(sym) == 1 for sym in symbols): - # symbols are just single characters, create range regex pattern - patt = "[{}]".format( - "".join(_escape_regex_range_chars(sym) for sym in symbols) - ) - else: - patt = "|".join(re.escape(sym) for sym in symbols) - - # wrap with \b word break markers if defining as keywords - if asKeyword: - patt = r"\b(?:{})\b".format(patt) - - ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols)) - - if caseless: - # add parse action to return symbols as specified, not in random - # casing as found in input string - symbol_map = {sym.lower(): sym for sym in symbols} - ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()]) - - return ret - - except re.error: - warnings.warn( - "Exception creating Regex for one_of, building MatchFirst", stacklevel=2 - ) - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).set_name( - " | ".join(symbols) - ) - - -def dict_of(key: ParserElement, value: ParserElement) -> ParserElement: - """Helper to easily and clearly define a dictionary by specifying 
- the respective patterns for the key and value. Takes care of - defining the :class:`Dict`, :class:`ZeroOrMore`, and - :class:`Group` tokens in the proper order. The key pattern - can include delimiting markers or punctuation, as long as they are - suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the :class:`Dict` results - can include named token fields. - - Example:: - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - print(attr_expr[1, ...].parse_string(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join) - - # similar to Dict, but simpler call format - result = dict_of(attr_label, attr_value).parse_string(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.as_dict()) - - prints:: - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict(OneOrMore(Group(key + value))) - - -def original_text_for( - expr: ParserElement, as_string: bool = True, *, asString: bool = True -) -> ParserElement: - """Helper to return the original, untokenized text for a given - expression. Useful to restore the parsed fields of an HTML start - tag into the raw tag text itself, or to revert separate tokens with - intervening whitespace back to the original matching input text. By - default, returns astring containing the original parsed text. - - If the optional ``as_string`` argument is passed as - ``False``, then the return value is - a :class:`ParseResults` containing any results names that - were originally matched, and a single token containing the original - matched text from the input string. So if the expression passed to - :class:`original_text_for` contains expressions with defined - results names, you must set ``as_string`` to ``False`` if you - want to preserve those results name values. - - The ``asString`` pre-PEP8 argument is retained for compatibility, - but will be removed in a future release. - - Example:: - - src = "this is test bold text normal text " - for tag in ("b", "i"): - opener, closer = make_html_tags(tag) - patt = original_text_for(opener + SkipTo(closer) + closer) - print(patt.search_string(src)[0]) - - prints:: - - [' bold text '] - ['text'] - """ - asString = asString and as_string - - locMarker = Empty().set_parse_action(lambda s, loc, t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s, l, t: s[t._original_start : t._original_end] - else: - - def extractText(s, l, t): - t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]] - - matchExpr.set_parse_action(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection) - return matchExpr - - -def ungroup(expr: ParserElement) -> ParserElement: - """Helper to undo pyparsing's default grouping of And expressions, - even if all but one are non-empty. 
- """ - return TokenConverter(expr).add_parse_action(lambda t: t[0]) - - -def locatedExpr(expr: ParserElement) -> ParserElement: - """ - (DEPRECATED - future code should use the Located class) - Helper to decorate a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parseWithTabs` - - Example:: - - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().set_parse_action(lambda ss, ll, tt: ll) - return Group( - locator("locn_start") - + expr("value") - + locator.copy().leaveWhitespace()("locn_end") - ) - - -def nested_expr( - opener: Union[str, ParserElement] = "(", - closer: Union[str, ParserElement] = ")", - content: typing.Optional[ParserElement] = None, - ignore_expr: ParserElement = quoted_string(), - *, - ignoreExpr: ParserElement = quoted_string(), -) -> ParserElement: - """Helper method for defining nested lists enclosed in opening and - closing delimiters (``"("`` and ``")"`` are the default). - - Parameters: - - ``opener`` - opening character for a nested list - (default= ``"("``); can also be a pyparsing expression - - ``closer`` - closing character for a nested list - (default= ``")"``); can also be a pyparsing expression - - ``content`` - expression for items within the nested lists - (default= ``None``) - - ``ignore_expr`` - expression for ignoring opening and closing delimiters - (default= :class:`quoted_string`) - - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility - but will be removed in a future release - - If an expression is not provided for the content argument, the - nested expression will capture all whitespace-delimited content - between delimiters as a list of separate values. - - Use the ``ignore_expr`` argument to define expressions that may - contain opening or closing characters that should not be treated as - opening or closing characters for nesting, such as quoted_string or - a comment expression. Specify multiple expressions using an - :class:`Or` or :class:`MatchFirst`. The default is - :class:`quoted_string`, but if no expressions are to be ignored, then - pass ``None`` for this argument. 
- - Example:: - - data_type = one_of("void int short long char float double") - decl_data_type = Combine(data_type + Opt(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR, RPAR = map(Suppress, "()") - - code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Opt(delimited_list(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(c_style_comment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.search_string(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - - prints:: - - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if ignoreExpr != ignore_expr: - ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener, str_type) and isinstance(closer, str_type): - if len(opener) == 1 and len(closer) == 1: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS, - exact=1, - ) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = empty.copy() + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS - ).set_parse_action(lambda t: t[0].strip()) - else: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = Combine( - OneOrMore( - ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - raise ValueError( - "opening and closing arguments must be strings if no content expression is given" - ) - ret = Forward() - if ignoreExpr is not None: - ret <<= Group( - Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer) - ) - else: - ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer)) - ret.set_name("nested %s%s expression" % (opener, closer)) - return ret - - -def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr, str_type): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas, alphanums + "_-:") - if xml: - tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue))) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - else: - tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word( - printables, exclude_chars=">" - ) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict( - ZeroOrMore( - Group( - tagAttrName.set_parse_action(lambda t: t[0].lower()) - + Opt(Suppress("=") + tagAttrValue) - ) - ) - ) - + Opt("/", 
default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False) - - openTag.set_name("<%s>" % resname) - # add start results name in parse action now that ungrouped names are not reported at two levels - openTag.add_parse_action( - lambda t: t.__setitem__( - "start" + "".join(resname.replace(":", " ").title().split()), t.copy() - ) - ) - closeTag = closeTag( - "end" + "".join(resname.replace(":", " ").title().split()) - ).set_name("</%s>" % resname) - openTag.tag = resname - closeTag.tag = resname - openTag.tag_body = SkipTo(closeTag()) - return openTag, closeTag - - -def make_html_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for HTML, - given a tag name. Matches tags in either upper or lower case, - attributes with namespaces and with quoted or unquoted values. - - Example:: - - text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the <a> tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag: ParserElement -any_close_tag: ParserElement -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P<entity>" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - typing.Optional[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance.
- - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses; if passed as a - str, then will be parsed as Suppress(lpar). If lpar is passed as - an expression (such as ``Literal('(')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses; if passed as a - str, then will be parsed as Suppress(rpar). If rpar is passed as - an expression (such as ``Literal(')')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - if isinstance(lpar, str): - lpar = Suppress(lpar) - if isinstance(rpar, str): - rpar = Suppress(rpar) - - # if lpar and rpar are not suppressed, wrap in group - if not (isinstance(rpar, Suppress) and isinstance(rpar, Suppress)): - lastExpr = base_expr | Group(lpar + ret + rpar) - else: - lastExpr = base_expr | (lpar + ret + rpar) - - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise 
ValueError("operator must indicate right or left associativity") - - thisExpr: Forward = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] - ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. - - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) - - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = stmt[1, ...] 
- - parseTree = module_body.parseString(data) - parseTree.pprint() - - prints:: - - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - backup_stacks.append(indentStack[:]) - - def reset_stack(): - indentStack[:] = backup_stacks[-1] - - def checkPeerIndent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseException(s, l, "illegal nesting") - raise ParseException(s, l, "not a peer entry") - - def checkSubIndent(s, l, t): - curCol = col(l, s) - if curCol > indentStack[-1]: - indentStack.append(curCol) - else: - raise ParseException(s, l, "not a subentry") - - def checkUnindent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if not (indentStack and curCol in indentStack): - raise ParseException(s, l, "not an unindent") - if curCol < indentStack[-1]: - indentStack.pop() - - NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress()) - INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT") - PEER = Empty().set_parse_action(checkPeerIndent).set_name("") - UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT") - if indent: - smExpr = Group( - Opt(NL) - + INDENT - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + UNDENT - ) - else: - smExpr = Group( - Opt(NL) - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + Opt(UNDENT) - ) - - # add a parse action to remove backup_stack from list of backups - smExpr.add_parse_action( - lambda: backup_stacks.pop(-1) and None if backup_stacks else None - ) - smExpr.set_fail_action(lambda a, b, c, d: reset_stack()) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.set_name("indented block") - - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name( - "C style comment" -) -"Comment of the form ``/* ... */``" - -html_comment = Regex(r"<!--[\s\S]*?-->").set_name("HTML comment") -"Comment of the form ``<!-- ... -->``" - -rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line") -dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment") -"Comment of the form ``// ... (to end of line)``" - -cpp_style_comment = Combine( - Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment -).set_name("C++ style comment") -"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`" - -java_style_comment = cpp_style_comment -"Same as :class:`cpp_style_comment`" - -python_style_comment = Regex(r"#.*").set_name("Python style comment") -"Comment of the form ``# ...
(to end of line)``" - - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs: List[ParserElement] = [ - v for v in vars().values() if isinstance(v, ParserElement) -] - - -# pre-PEP8 compatible names -delimitedList = delimited_list -countedArray = counted_array -matchPreviousLiteral = match_previous_literal -matchPreviousExpr = match_previous_expr -oneOf = one_of -dictOf = dict_of -originalTextFor = original_text_for -nestedExpr = nested_expr -makeHTMLTags = make_html_tags -makeXMLTags = make_xml_tags -anyOpenTag, anyCloseTag = any_open_tag, any_close_tag -commonHTMLEntity = common_html_entity -replaceHTMLEntity = replace_html_entity -opAssoc = OpAssoc -infixNotation = infix_notation -cStyleComment = c_style_comment -htmlComment = html_comment -restOfLine = rest_of_line -dblSlashComment = dbl_slash_comment -cppStyleComment = cpp_style_comment -javaStyleComment = java_style_comment -pythonStyleComment = python_style_comment diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/appengine.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/appengine.py deleted file mode 100644 index a5a6d91035f0aaaf7b56b4039580629037112b62..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/appengine.py +++ /dev/null @@ -1,314 +0,0 @@ -""" -This module provides a pool manager that uses Google App Engine's -`URLFetch Service `_. - -Example usage:: - - from urllib3 import PoolManager - from urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox - - if is_appengine_sandbox(): - # AppEngineManager uses AppEngine's URLFetch API behind the scenes - http = AppEngineManager() - else: - # PoolManager uses a socket-level API behind the scenes - http = PoolManager() - - r = http.request('GET', 'https://google.com/') - -There are `limitations `_ to the URLFetch service and it may not be -the best choice for your application. There are three options for using -urllib3 on Google App Engine: - -1. You can use :class:`AppEngineManager` with URLFetch. URLFetch is - cost-effective in many circumstances as long as your usage is within the - limitations. -2. You can use a normal :class:`~urllib3.PoolManager` by enabling sockets. - Sockets also have `limitations and restrictions - `_ and have a lower free quota than URLFetch. - To use sockets, be sure to specify the following in your ``app.yaml``:: - - env_variables: - GAE_USE_SOCKETS_HTTPLIB : 'true' - -3. If you are using `App Engine Flexible -`_, you can use the standard -:class:`PoolManager` without any configuration or special environment variables. -""" - -from __future__ import absolute_import - -import io -import logging -import warnings - -from ..exceptions import ( - HTTPError, - HTTPWarning, - MaxRetryError, - ProtocolError, - SSLError, - TimeoutError, -) -from ..packages.six.moves.urllib.parse import urljoin -from ..request import RequestMethods -from ..response import HTTPResponse -from ..util.retry import Retry -from ..util.timeout import Timeout -from . import _appengine_environ - -try: - from google.appengine.api import urlfetch -except ImportError: - urlfetch = None - - -log = logging.getLogger(__name__) - - -class AppEnginePlatformWarning(HTTPWarning): - pass - - -class AppEnginePlatformError(HTTPError): - pass - - -class AppEngineManager(RequestMethods): - """ - Connection manager for Google App Engine sandbox applications. 
- - This manager uses the URLFetch service directly instead of using the - emulated httplib, and is subject to URLFetch limitations as described in - the App Engine documentation `here - `_. - - Notably it will raise an :class:`AppEnginePlatformError` if: - * URLFetch is not available. - * If you attempt to use this on App Engine Flexible, as full socket - support is available. - * If a request size is more than 10 megabytes. - * If a response size is more than 32 megabytes. - * If you use an unsupported request method such as OPTIONS. - - Beyond those cases, it will raise normal urllib3 errors. - """ - - def __init__( - self, - headers=None, - retries=None, - validate_certificate=True, - urlfetch_retries=True, - ): - if not urlfetch: - raise AppEnginePlatformError( - "URLFetch is not available in this environment." - ) - - warnings.warn( - "urllib3 is using URLFetch on Google App Engine sandbox instead " - "of sockets. To use sockets directly instead of URLFetch see " - "https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html.", - AppEnginePlatformWarning, - ) - - RequestMethods.__init__(self, headers) - self.validate_certificate = validate_certificate - self.urlfetch_retries = urlfetch_retries - - self.retries = retries or Retry.DEFAULT - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - # Return False to re-raise any potential exceptions - return False - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - timeout=Timeout.DEFAULT_TIMEOUT, - **response_kw - ): - - retries = self._get_retries(retries, redirect) - - try: - follow_redirects = redirect and retries.redirect != 0 and retries.total - response = urlfetch.fetch( - url, - payload=body, - method=method, - headers=headers or {}, - allow_truncated=False, - follow_redirects=self.urlfetch_retries and follow_redirects, - deadline=self._get_absolute_timeout(timeout), - validate_certificate=self.validate_certificate, - ) - except urlfetch.DeadlineExceededError as e: - raise TimeoutError(self, e) - - except urlfetch.InvalidURLError as e: - if "too large" in str(e): - raise AppEnginePlatformError( - "URLFetch request too large, URLFetch only " - "supports requests up to 10mb in size.", - e, - ) - raise ProtocolError(e) - - except urlfetch.DownloadError as e: - if "Too many redirects" in str(e): - raise MaxRetryError(self, url, reason=e) - raise ProtocolError(e) - - except urlfetch.ResponseTooLargeError as e: - raise AppEnginePlatformError( - "URLFetch response too large, URLFetch only supports" - "responses up to 32mb in size.", - e, - ) - - except urlfetch.SSLCertificateError as e: - raise SSLError(e) - - except urlfetch.InvalidMethodError as e: - raise AppEnginePlatformError( - "URLFetch does not support method: %s" % method, e - ) - - http_response = self._urlfetch_response_to_http_response( - response, retries=retries, **response_kw - ) - - # Handle redirect? 
- redirect_location = redirect and http_response.get_redirect_location() - if redirect_location: - # Check for redirect response - if self.urlfetch_retries and retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - else: - if http_response.status == 303: - method = "GET" - - try: - retries = retries.increment( - method, url, response=http_response, _pool=self - ) - except MaxRetryError: - if retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - return http_response - - retries.sleep_for_retry(http_response) - log.debug("Redirecting %s -> %s", url, redirect_location) - redirect_url = urljoin(url, redirect_location) - return self.urlopen( - method, - redirect_url, - body, - headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - # Check if we should retry the HTTP response. - has_retry_after = bool(http_response.headers.get("Retry-After")) - if retries.is_retry(method, http_response.status, has_retry_after): - retries = retries.increment(method, url, response=http_response, _pool=self) - log.debug("Retry: %s", url) - retries.sleep(http_response) - return self.urlopen( - method, - url, - body=body, - headers=headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - return http_response - - def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw): - - if is_prod_appengine(): - # Production GAE handles deflate encoding automatically, but does - # not remove the encoding header. - content_encoding = urlfetch_resp.headers.get("content-encoding") - - if content_encoding == "deflate": - del urlfetch_resp.headers["content-encoding"] - - transfer_encoding = urlfetch_resp.headers.get("transfer-encoding") - # We have a full response's content, - # so let's make sure we don't report ourselves as chunked data. - if transfer_encoding == "chunked": - encodings = transfer_encoding.split(",") - encodings.remove("chunked") - urlfetch_resp.headers["transfer-encoding"] = ",".join(encodings) - - original_response = HTTPResponse( - # In order for decoding to work, we must present the content as - # a file-like object. - body=io.BytesIO(urlfetch_resp.content), - msg=urlfetch_resp.header_msg, - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - **response_kw - ) - - return HTTPResponse( - body=io.BytesIO(urlfetch_resp.content), - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - original_response=original_response, - **response_kw - ) - - def _get_absolute_timeout(self, timeout): - if timeout is Timeout.DEFAULT_TIMEOUT: - return None # Defer to URLFetch's default. - if isinstance(timeout, Timeout): - if timeout._read is not None or timeout._connect is not None: - warnings.warn( - "URLFetch does not support granular timeout settings, " - "reverting to total or default URLFetch timeout.", - AppEnginePlatformWarning, - ) - return timeout.total - return timeout - - def _get_retries(self, retries, redirect): - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if retries.connect or retries.read or retries.redirect: - warnings.warn( - "URLFetch only supports total retries and does not " - "recognize connect, read, or redirect retry parameters.", - AppEnginePlatformWarning, - ) - - return retries - - -# Alias methods from _appengine_environ to maintain public API interface. 
- -is_appengine = _appengine_environ.is_appengine -is_appengine_sandbox = _appengine_environ.is_appengine_sandbox -is_local_appengine = _appengine_environ.is_local_appengine -is_prod_appengine = _appengine_environ.is_prod_appengine -is_prod_appengine_mvms = _appengine_environ.is_prod_appengine_mvms diff --git a/spaces/BillBojangeles2000/bart-large-cnn-samsum/app.py b/spaces/BillBojangeles2000/bart-large-cnn-samsum/app.py deleted file mode 100644 index d0fdd1cb0671c0e2e3075054b4aef1b1eb517b3a..0000000000000000000000000000000000000000 --- a/spaces/BillBojangeles2000/bart-large-cnn-samsum/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/philschmid/bart-large-cnn-samsum").launch() \ No newline at end of file diff --git a/spaces/Blackroot/Fancy-Audiogen/audio.py b/spaces/Blackroot/Fancy-Audiogen/audio.py deleted file mode 100644 index ed84d3befd9dd076bb22ec3ec7df14e964966249..0000000000000000000000000000000000000000 --- a/spaces/Blackroot/Fancy-Audiogen/audio.py +++ /dev/null @@ -1,59 +0,0 @@ -import numpy as np -import os, re, json, sys -import torch, torchaudio, pathlib -from audiocraft.data.audio_utils import convert_audio - -def load_and_process_audio(model, duration, optional_audio, sample_rate): - if optional_audio is None: - return None - sr, optional_audio = optional_audio[0], torch.from_numpy(optional_audio[1]).to(model.device).float().t() - if optional_audio.dim() == 1: - optional_audio = optional_audio[None] - optional_audio = optional_audio[..., :int(sr * duration)] - optional_audio = convert_audio(optional_audio, sr, sr, 1) - return optional_audio - -#From https://colab.research.google.com/drive/154CqogsdP-D_TfSF9S2z8-BY98GN_na4?usp=sharing#scrollTo=exKxNU_Z4i5I -#Thank you DragonForged for the link -def extend_audio(model, prompt_waveform, prompts, prompt_sr, segments=5, overlap=2): - # Calculate the number of samples corresponding to the overlap - overlap_samples = int(overlap * prompt_sr) - - device = model.device - prompt_waveform = prompt_waveform.to(device) - - for i in range(1, segments): - # Grab the end of the waveform - end_waveform = prompt_waveform[...,-overlap_samples:] - - # Process the trimmed waveform using the model - new_audio = model.generate_continuation(end_waveform, descriptions=[prompts[i]], prompt_sample_rate=prompt_sr, progress=True) - - # Cut the seed audio off the newly generated audio - new_audio = new_audio[...,overlap_samples:] - - prompt_waveform = torch.cat([prompt_waveform, new_audio], dim=2) - - return prompt_waveform - -def predict(model, prompts, duration, melody_parameters, extension_parameters): - melody = load_and_process_audio(model, duration, **melody_parameters) - - if melody is not None: - output = model.generate_with_chroma( - descriptions=[prompts[0]], - melody_wavs=melody, - melody_sample_rate=melody_parameters['sample_rate'], - progress=False - ) - else: - output = model.generate(descriptions=[prompts[0]], progress=True) - - sample_rate = model.sample_rate - - if extension_parameters['segments'] > 1: - output_tensors = extend_audio(model, output, prompts, sample_rate, **extension_parameters).detach().cpu().float() - else: - output_tensors = output.detach().cpu().float() - - return sample_rate, output_tensors \ No newline at end of file diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/file.py b/spaces/Boadiwaa/Recipes/openai/api_resources/file.py deleted file mode 100644 index 83f3a5e602d11e2d289b057e1745ac138ad86607..0000000000000000000000000000000000000000 --- 
a/spaces/Boadiwaa/Recipes/openai/api_resources/file.py +++ /dev/null @@ -1,131 +0,0 @@ -import json -import os -from typing import cast - -import openai -from openai import api_requestor, util, error -from openai.api_resources.abstract import DeletableAPIResource, ListableAPIResource -from openai.util import ApiType - - -class File(ListableAPIResource, DeletableAPIResource): - OBJECT_NAME = "files" - - @classmethod - def create( - cls, - file, - purpose, - model=None, - api_key=None, - api_base=None, - api_type=None, - api_version=None, - organization=None, - user_provided_filename=None, - ): - if purpose != "search" and model is not None: - raise ValueError("'model' is only meaningful if 'purpose' is 'search'") - requestor = api_requestor.APIRequestor( - api_key, - api_base=api_base or openai.api_base, - api_type=api_type, - api_version=api_version, - organization=organization, - ) - typed_api_type, api_version = cls._get_api_type_and_version(api_type, api_version) - - if typed_api_type == ApiType.AZURE: - base = cls.class_url() - url = "/%s%s?api-version=%s" % (cls.azure_api_prefix, base, api_version) - elif typed_api_type == ApiType.OPEN_AI: - url = cls.class_url() - else: - raise error.InvalidAPIType('Unsupported API type %s' % api_type) - - # Set the filename on 'purpose' and 'model' to None so they are - # interpreted as form data. - files = [("purpose", (None, purpose))] - if model is not None: - files.append(("model", (None, model))) - if user_provided_filename is not None: - files.append(("file", (user_provided_filename, file, 'application/octet-stream'))) - else: - files.append(("file", ("file", file, 'application/octet-stream'))) - response, _, api_key = requestor.request("post", url, files=files) - return util.convert_to_openai_object( - response, api_key, api_version, organization - ) - - @classmethod - def download( - cls, - id, - api_key=None, - api_base=None, - api_type=None, - api_version=None, - organization=None - ): - requestor = api_requestor.APIRequestor( - api_key, - api_base=api_base or openai.api_base, - api_type=api_type, - api_version=api_version, - organization=organization, - ) - typed_api_type, api_version = cls._get_api_type_and_version(api_type, api_version) - - if typed_api_type == ApiType.AZURE: - base = cls.class_url() - url = "/%s%s/%s/content?api-version=%s" % (cls.azure_api_prefix, base, id, api_version) - elif typed_api_type == ApiType.OPEN_AI: - url = f"{cls.class_url()}/{id}/content" - else: - raise error.InvalidAPIType('Unsupported API type %s' % api_type) - - result = requestor.request_raw("get", url) - if not 200 <= result.status_code < 300: - raise requestor.handle_error_response( - result.content, - result.status_code, - json.loads(cast(bytes, result.content)), - result.headers, - stream_error=False, - ) - return result.content - - @classmethod - def find_matching_files( - cls, - name, - bytes, - purpose, - api_key=None, - api_base=None, - api_type=None, - api_version=None, - organization=None, - ): - """Find already uploaded files with the same name, size, and purpose.""" - all_files = cls.list( - api_key=api_key, - api_base=api_base or openai.api_base, - api_type=api_type, - api_version=api_version, - organization=organization, - ).get("data", []) - matching_files = [] - basename = os.path.basename(name) - for f in all_files: - if f["purpose"] != purpose: - continue - file_basename = os.path.basename(f["filename"]) - if file_basename != basename: - continue - if "bytes" in f and f["bytes"] != bytes: - continue - if "size" in f and 
int(f["size"]) != bytes: - continue - matching_files.append(f) - return matching_files diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/utils.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/utils.py deleted file mode 100644 index 87bb39a9118028f5633e01d1f2d50b48e3686788..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/utils.py +++ /dev/null @@ -1,100 +0,0 @@ -from __future__ import print_function - -import errno -import os -import numpy as np -from PIL import Image -import torch -import torch.nn as nn - - -EPS = 1e-7 - - -def assert_eq(real, expected): - assert real == expected, '%s (true) vs %s (expected)' % (real, expected) - - -def assert_array_eq(real, expected): - assert (np.abs(real-expected) < EPS).all(), \ - '%s (true) vs %s (expected)' % (real, expected) - - -def load_folder(folder, suffix): - imgs = [] - for f in sorted(os.listdir(folder)): - if f.endswith(suffix): - imgs.append(os.path.join(folder, f)) - return imgs - - -def load_imageid(folder): - images = load_folder(folder, 'jpg') - img_ids = set() - for img in images: - img_id = int(img.split('/')[-1].split('.')[0].split('_')[-1]) - img_ids.add(img_id) - return img_ids - - -def pil_loader(path): - with open(path, 'rb') as f: - with Image.open(f) as img: - return img.convert('RGB') - - -def weights_init(m): - """custom weights initialization.""" - cname = m.__class__ - if cname == nn.Linear or cname == nn.Conv2d or cname == nn.ConvTranspose2d: - m.weight.data.normal_(0.0, 0.02) - elif cname == nn.BatchNorm2d: - m.weight.data.normal_(1.0, 0.02) - m.bias.data.fill_(0) - else: - print('%s is not initialized.' % cname) - - -def init_net(net, net_file): - if net_file: - net.load_state_dict(torch.load(net_file)) - else: - net.apply(weights_init) - - -def create_dir(path): - if not os.path.exists(path): - try: - os.makedirs(path) - except OSError as exc: - if exc.errno != errno.EEXIST: - raise - - -class Logger(object): - def __init__(self, output_name): - dirname = os.path.dirname(output_name) - if not os.path.exists(dirname): - os.mkdir(dirname) - - self.log_file = open(output_name, 'w') - self.infos = {} - - def append(self, key, val): - vals = self.infos.setdefault(key, []) - vals.append(val) - - def log(self, extra_msg=''): - msgs = [extra_msg] - for key, vals in self.infos.iteritems(): - msgs.append('%s %.6f' % (key, np.mean(vals))) - msg = '\n'.join(msgs) - self.log_file.write(msg + '\n') - self.log_file.flush() - self.infos = {} - return msg - - def write(self, msg): - self.log_file.write(msg + '\n') - self.log_file.flush() - print(msg) diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp deleted file mode 100644 index 551243fdadfd1682b5dc6628623b67a79b3f6c74..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,43 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include <vector> - -#include <ATen/ATen.h> -#include <ATen/cuda/CUDAContext.h> - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector<at::Tensor> -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -} // namespace groundingdino diff --git a/spaces/Chitranshu/Dashboard-Dmart/Dockerfile b/spaces/Chitranshu/Dashboard-Dmart/Dockerfile deleted file mode 100644 index c48c4ece862fcc2970b330f60f14ba6c578f67fc..0000000000000000000000000000000000000000 --- a/spaces/Chitranshu/Dashboard-Dmart/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/training/train_pipeline.py b/spaces/ChrisPreston/diff-svc_minato_aqua/training/train_pipeline.py deleted file mode 100644 index 7904a7593e942456f2e98ca06f2796cb78ac4daa..0000000000000000000000000000000000000000 --- a/spaces/ChrisPreston/diff-svc_minato_aqua/training/train_pipeline.py +++ /dev/null @@ -1,227 +0,0 @@ -import torch -from torch.nn import functional as F - -from utils.hparams import hparams -from utils.pitch_utils import f0_to_coarse, denorm_f0 - - -class Batch2Loss: - ''' - pipeline: batch -> insert1 -> module1 -> insert2 -> module2 -> insert3 -> module3 -> insert4 -> module4 -> loss - ''' - - @staticmethod - def insert1(pitch_midi, midi_dur, is_slur, # variables - midi_embed, midi_dur_layer, is_slur_embed): # modules - ''' - add embeddings for midi, midi_dur, slur - ''' - midi_embedding = midi_embed(pitch_midi) - midi_dur_embedding, slur_embedding = 0, 0 - if midi_dur is not None: - midi_dur_embedding = midi_dur_layer(midi_dur[:, :, None]) # [B, T, 1] -> [B, T, H] - if is_slur is not None: - slur_embedding = is_slur_embed(is_slur) - return midi_embedding, midi_dur_embedding, slur_embedding - - @staticmethod - def module1(fs2_encoder, # modules - txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): # variables - ''' - get *encoder_out* == fs2_encoder(*txt_tokens*, some embeddings) - ''' - encoder_out = fs2_encoder(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding) - return encoder_out - - @staticmethod - def insert2(encoder_out, spk_embed_id, spk_embed_dur_id, spk_embed_f0_id, src_nonpadding, # variables - spk_embed_proj): # modules - ''' - 1. add embeddings for pspk, spk_dur, sk_f0 - 2.
get *dur_inp* ~= *encoder_out* + *spk_embed_dur* - ''' - # add ref style embed - # Not implemented - # variance encoder - var_embed = 0 - - # encoder_out_dur denotes encoder outputs for duration predictor - # in speech adaptation, duration predictor use old speaker embedding - if hparams['use_spk_embed']: - spk_embed_dur = spk_embed_f0 = spk_embed = spk_embed_proj(spk_embed_id)[:, None, :] - elif hparams['use_spk_id']: - if spk_embed_dur_id is None: - spk_embed_dur_id = spk_embed_id - if spk_embed_f0_id is None: - spk_embed_f0_id = spk_embed_id - spk_embed = spk_embed_proj(spk_embed_id)[:, None, :] - spk_embed_dur = spk_embed_f0 = spk_embed - if hparams['use_split_spk_id']: - spk_embed_dur = spk_embed_dur(spk_embed_dur_id)[:, None, :] - spk_embed_f0 = spk_embed_f0(spk_embed_f0_id)[:, None, :] - else: - spk_embed_dur = spk_embed_f0 = spk_embed = 0 - - # add dur - dur_inp = (encoder_out + var_embed + spk_embed_dur) * src_nonpadding - return var_embed, spk_embed, spk_embed_dur, spk_embed_f0, dur_inp - - @staticmethod - def module2(dur_predictor, length_regulator, # modules - dur_input, mel2ph, txt_tokens, all_vowel_tokens, ret, midi_dur=None): # variables - ''' - 1. get *dur* ~= dur_predictor(*dur_inp*) - 2. (mel2ph is None): get *mel2ph* ~= length_regulater(*dur*) - ''' - src_padding = (txt_tokens == 0) - dur_input = dur_input.detach() + hparams['predictor_grad'] * (dur_input - dur_input.detach()) - - if mel2ph is None: - dur, xs = dur_predictor.inference(dur_input, src_padding) - ret['dur'] = xs - dur = xs.squeeze(-1).exp() - 1.0 - for i in range(len(dur)): - for j in range(len(dur[i])): - if txt_tokens[i, j] in all_vowel_tokens: - if j < len(dur[i]) - 1 and txt_tokens[i, j + 1] not in all_vowel_tokens: - dur[i, j] = midi_dur[i, j] - dur[i, j + 1] - if dur[i, j] < 0: - dur[i, j] = 0 - dur[i, j + 1] = midi_dur[i, j] - else: - dur[i, j] = midi_dur[i, j] - dur[:, 0] = dur[:, 0] + 0.5 - dur_acc = F.pad(torch.round(torch.cumsum(dur, axis=1)), (1, 0)) - dur = torch.clamp(dur_acc[:, 1:] - dur_acc[:, :-1], min=0).long() - ret['dur_choice'] = dur - mel2ph = length_regulator(dur, src_padding).detach() - else: - ret['dur'] = dur_predictor(dur_input, src_padding) - ret['mel2ph'] = mel2ph - - return mel2ph - - @staticmethod - def insert3(encoder_out, mel2ph, var_embed, spk_embed_f0, src_nonpadding, tgt_nonpadding): # variables - ''' - 1. get *decoder_inp* ~= gather *encoder_out* according to *mel2ph* - 2. get *pitch_inp* ~= *decoder_inp* + *spk_embed_f0* - 3. get *pitch_inp_ph* ~= *encoder_out* + *spk_embed_f0* - ''' - decoder_inp = F.pad(encoder_out, [0, 0, 1, 0]) - mel2ph_ = mel2ph[..., None].repeat([1, 1, encoder_out.shape[-1]]) - decoder_inp = decoder_inp_origin = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H] - - pitch_inp = (decoder_inp_origin + var_embed + spk_embed_f0) * tgt_nonpadding - pitch_inp_ph = (encoder_out + var_embed + spk_embed_f0) * src_nonpadding - return decoder_inp, pitch_inp, pitch_inp_ph - - @staticmethod - def module3(pitch_predictor, pitch_embed, energy_predictor, energy_embed, # modules - pitch_inp, pitch_inp_ph, f0, uv, energy, mel2ph, is_training, ret): # variables - ''' - 1. get *ret['pitch_pred']*, *ret['energy_pred']* ~= pitch_predictor(*pitch_inp*), energy_predictor(*pitch_inp*) - 2. get *pitch_embedding* ~= pitch_embed(f0_to_coarse(denorm_f0(*f0* or *pitch_pred*)) - 3. 
get *energy_embedding* ~= energy_embed(energy_to_coarse(*energy* or *energy_pred*)) - ''' - - def add_pitch(decoder_inp, f0, uv, mel2ph, ret, encoder_out=None): - if hparams['pitch_type'] == 'ph': - pitch_pred_inp = encoder_out.detach() + hparams['predictor_grad'] * (encoder_out - encoder_out.detach()) - pitch_padding = (encoder_out.sum().abs() == 0) - ret['pitch_pred'] = pitch_pred = pitch_predictor(pitch_pred_inp) - if f0 is None: - f0 = pitch_pred[:, :, 0] - ret['f0_denorm'] = f0_denorm = denorm_f0(f0, None, hparams, pitch_padding=pitch_padding) - pitch = f0_to_coarse(f0_denorm) # start from 0 [B, T_txt] - pitch = F.pad(pitch, [1, 0]) - pitch = torch.gather(pitch, 1, mel2ph) # [B, T_mel] - pitch_embedding = pitch_embed(pitch) - return pitch_embedding - - decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach()) - - pitch_padding = (mel2ph == 0) - - if hparams['pitch_ar']: - ret['pitch_pred'] = pitch_pred = pitch_predictor(decoder_inp, f0 if is_training else None) - if f0 is None: - f0 = pitch_pred[:, :, 0] - else: - ret['pitch_pred'] = pitch_pred = pitch_predictor(decoder_inp) - if f0 is None: - f0 = pitch_pred[:, :, 0] - if hparams['use_uv'] and uv is None: - uv = pitch_pred[:, :, 1] > 0 - ret['f0_denorm'] = f0_denorm = denorm_f0(f0, uv, hparams, pitch_padding=pitch_padding) - if pitch_padding is not None: - f0[pitch_padding] = 0 - - pitch = f0_to_coarse(f0_denorm) # start from 0 - pitch_embedding = pitch_embed(pitch) - return pitch_embedding - - def add_energy(decoder_inp, energy, ret): - decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach()) - ret['energy_pred'] = energy_pred = energy_predictor(decoder_inp)[:, :, 0] - if energy is None: - energy = energy_pred - energy = torch.clamp(energy * 256 // 4, max=255).long() # energy_to_coarse - energy_embedding = energy_embed(energy) - return energy_embedding - - # add pitch and energy embed - nframes = mel2ph.size(1) - - pitch_embedding = 0 - if hparams['use_pitch_embed']: - if f0 is not None: - delta_l = nframes - f0.size(1) - if delta_l > 0: - f0 = torch.cat((f0, torch.FloatTensor([[x[-1]] * delta_l for x in f0]).to(f0.device)), 1) - f0 = f0[:, :nframes] - if uv is not None: - delta_l = nframes - uv.size(1) - if delta_l > 0: - uv = torch.cat((uv, torch.FloatTensor([[x[-1]] * delta_l for x in uv]).to(uv.device)), 1) - uv = uv[:, :nframes] - pitch_embedding = add_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out=pitch_inp_ph) - - energy_embedding = 0 - if hparams['use_energy_embed']: - if energy is not None: - delta_l = nframes - energy.size(1) - if delta_l > 0: - energy = torch.cat( - (energy, torch.FloatTensor([[x[-1]] * delta_l for x in energy]).to(energy.device)), 1) - energy = energy[:, :nframes] - energy_embedding = add_energy(pitch_inp, energy, ret) - - return pitch_embedding, energy_embedding - - @staticmethod - def insert4(decoder_inp, pitch_embedding, energy_embedding, spk_embed, ret, tgt_nonpadding): - ''' - *decoder_inp* ~= *decoder_inp* + embeddings for spk, pitch, energy - ''' - ret['decoder_inp'] = decoder_inp = ( - decoder_inp + pitch_embedding + energy_embedding + spk_embed) * tgt_nonpadding - return decoder_inp - - @staticmethod - def module4(diff_main_loss, # modules - norm_spec, decoder_inp_t, ret, K_step, batch_size, device): # variables - ''' - training diffusion using spec as input and decoder_inp as condition. 
- - Args: - norm_spec: (normalized) spec - decoder_inp_t: (transposed) decoder_inp - Returns: - ret['diff_loss'] - ''' - t = torch.randint(0, K_step, (batch_size,), device=device).long() - norm_spec = norm_spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - ret['diff_loss'] = diff_main_loss(norm_spec, t, cond=decoder_inp_t) - # nonpadding = (mel2ph != 0).float() - # ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding) diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/good_news/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/good_news/__init__.py deleted file mode 100644 index 18137eda7e8099b2550557021e75e854f54a4d4f..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/good_news/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def good_news(images, texts: List[str], args): - text = texts[0] - frame = BuildImage.open(img_dir / "0.jpg") - try: - frame.draw_text( - (50, 100, frame.width - 50, frame.height - 100), - text, - allow_wrap=True, - lines_align="center", - max_fontsize=60, - min_fontsize=30, - fill=(238, 0, 0), - stroke_ratio=1 / 15, - stroke_fill=(255, 255, 153), - ) - except ValueError: - raise TextOverLength(text) - return frame.save_png() - - -add_meme( - "good_news", - good_news, - min_texts=1, - max_texts=1, - default_texts=["悲报"], - keywords=["喜报"], -) diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Liaobots.py b/spaces/CofAI/chat/g4f/Provider/Providers/Liaobots.py deleted file mode 100644 index a04b9574f60842d424712efcd8bef5f6e1e97f4f..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/Liaobots.py +++ /dev/null @@ -1,64 +0,0 @@ -import os -import uuid -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://liaobots.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4'] -supports_stream = True -needs_auth = True -working = False - -models = { - 'gpt-4': { - "id": "gpt-4", - "name": "GPT-4", - "maxLength": 24000, - "tokenLimit": 8000 - }, - 'gpt-3.5-turbo': { - "id": "gpt-3.5-turbo", - "name": "GPT-3.5", - "maxLength": 12000, - "tokenLimit": 4000 - }, - 'gpt-3.5-turbo-16k': { - "id": "gpt-3.5-turbo-16k", - "name": "GPT-3.5-16k", - "maxLength": 48000, - "tokenLimit": 16000 - }, -} - - -def _create_completion(model: str, messages: list, stream: bool, chatId: str, **kwargs): - - print(kwargs) - - headers = { - 'authority': 'liaobots.com', - 'content-type': 'application/json', - 'origin': 'https://liaobots.com', - 'referer': 'https://liaobots.com/', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', - 'x-auth-code': 'qlcUMVn1KLMhd' - } - - json_data = { - 'conversationId': chatId, - 'model': models[model], - 'messages': messages, - 'key': '', - 'prompt': "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. 
Respond using markdown.", - } - - response = requests.post('https://liaobots.com/api/chat', - headers=headers, json=json_data, stream=True) - - for token in response.iter_content(chunk_size=2046): - yield (token.decode('utf-8')) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" deleted file mode 100644 index cd162563cb949acae49f20ef2a0949f9b5f154af..0000000000000000000000000000000000000000 --- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" +++ /dev/null @@ -1,316 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import input_clipping - -def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import os, copy - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - msg = '正常' - inputs_array = [] - inputs_show_user_array = [] - history_array = [] - sys_prompt_array = [] - report_part_1 = [] - - assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。" - ############################## <第一步,逐个文件分析,多线程> ################################## - for index, fp in enumerate(file_manifest): - # 读取文件 - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - prefix = "接下来请你逐文件分析下面的工程" if index==0 else "" - i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - # 装载请求内容 - inputs_array.append(i_say) - inputs_show_user_array.append(i_say_show_user) - history_array.append([]) - sys_prompt_array.append("你是一个程序架构分析师,正在分析一个源代码项目。你的回答必须简单明了。") - - # 文件读取完成,对每一个源代码文件,生成一个请求线程,发送到chatgpt进行分析 - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array = inputs_array, - inputs_show_user_array = inputs_show_user_array, - history_array = history_array, - sys_prompt_array = sys_prompt_array, - llm_kwargs = llm_kwargs, - chatbot = chatbot, - show_user_at_complete = True - ) - - # 全部文件解析完成,结果写入文件,准备对工程源代码进行汇总分析 - report_part_1 = copy.deepcopy(gpt_response_collection) - history_to_return = report_part_1 - res = write_results_to_file(report_part_1) - chatbot.append(("完成?", "逐个文件分析已完成。" + res + "\n\n正在开始汇总。")) - yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面 - - ############################## <第二步,综合,单线程,分组+迭代处理> ################################## - batchsize = 16 # 10个文件为一组 - report_part_2 = [] - previous_iteration_files = [] - last_iteration_result = "" - while True: - if len(file_manifest) == 0: break - this_iteration_file_manifest = file_manifest[:batchsize] - this_iteration_gpt_response_collection = gpt_response_collection[:batchsize*2] - file_rel_path = 
[os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)] - # 把“请对下面的程序文件做一个概述” 替换成 精简的 "文件名:{all_file[index]}" - for index, content in enumerate(this_iteration_gpt_response_collection): - if index%2==0: this_iteration_gpt_response_collection[index] = f"{file_rel_path[index//2]}" # 只保留文件名节省token - previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]) - previous_iteration_files_string = ', '.join(previous_iteration_files) - current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]) - i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。' - inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。' - this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection) - this_iteration_history.append(last_iteration_result) - # 裁剪input - inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560) - result = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot, - history=this_iteration_history_feed, # 迭代之前的分析 - sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。") - report_part_2.extend([i_say, result]) - last_iteration_result = result - - file_manifest = file_manifest[batchsize:] - gpt_response_collection = gpt_response_collection[batchsize*2:] - - ############################## ################################## - history_to_return.extend(report_part_2) - res = write_results_to_file(history_to_return) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面 - - -@CatchException -def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]+ \ - [f for f in glob.glob('./request_llm/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - project_folder = './' - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - -@CatchException -def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - 
history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] #+ \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - -@CatchException -def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.jar', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.js', 
recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.vue', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.less', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.sass', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.wxml', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.wxss', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.css', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何前端相关文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.mod', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.sum', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/go.work', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.lua', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.cs', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.csproj', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = 
f"找不到任何CSharp文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - - -@CatchException -def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - txt_pattern = plugin_kwargs.get("advanced_arg") - txt_pattern = txt_pattern.replace(",", ",") - # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml) - pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")] - if not pattern_include: pattern_include = ["*"] # 不输入即全部匹配 - # 将要忽略匹配的文件后缀(例如: ^*.c, ^*.cpp, ^*.py) - pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")] - pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件 - # 将要忽略匹配的文件名(例如: ^README.md) - pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")] - # 生成正则表达式 - pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$' - pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else '' - - history.clear() - import glob, os, re - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - # 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件 - maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)] - if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'): - extract_folder_path = maybe_dir[0] - else: - extract_folder_path = project_folder - # 按输入的匹配模式寻找上传的非压缩文件和已解压的文件 - file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \ - os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/box_head/inference.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/box_head/inference.py deleted file mode 100644 index 595a2e61620fbd345bc36060c43191792fc010ea..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/box_head/inference.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch -import torch.nn.functional as F -from torch import nn - -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms -from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist -from maskrcnn_benchmark.modeling.box_coder import BoxCoder - - -class PostProcessor(nn.Module): - """ - From a set of classification scores, box regression and proposals, - computes the post-processed boxes, and applies NMS to obtain the - final results - """ - - def __init__( - self, - score_thresh=0.05, - nms=0.5, - detections_per_img=100, - box_coder=None, - cls_agnostic_bbox_reg=False - ): - """ - Arguments: - score_thresh (float) - nms (float) - detections_per_img (int) - box_coder (BoxCoder) - """ - super(PostProcessor, self).__init__() - self.score_thresh = score_thresh - self.nms = nms - self.detections_per_img = detections_per_img - if box_coder is None: - box_coder = BoxCoder(weights=(10., 10., 5., 5.)) - self.box_coder = box_coder - self.cls_agnostic_bbox_reg = cls_agnostic_bbox_reg - - def forward(self, x, boxes): - """ - Arguments: - x (tuple[tensor, tensor]): x contains the class logits - and the box_regression from the model. - boxes (list[BoxList]): bounding boxes that are used as - reference, one for ech image - - Returns: - results (list[BoxList]): one BoxList for each image, containing - the extra fields labels and scores - """ - class_logits, box_regression = x - class_prob = F.softmax(class_logits, -1) - - # TODO think about a representation of batch of boxes - image_shapes = [box.size for box in boxes] - boxes_per_image = [len(box) for box in boxes] - concat_boxes = torch.cat([a.bbox for a in boxes], dim=0) - - if self.cls_agnostic_bbox_reg: - box_regression = box_regression[:, -4:] - proposals = self.box_coder.decode( - box_regression.view(sum(boxes_per_image), -1), concat_boxes - ) - if self.cls_agnostic_bbox_reg: - proposals = proposals.repeat(1, class_prob.shape[1]) - - num_classes = class_prob.shape[1] - - proposals = proposals.split(boxes_per_image, dim=0) - class_prob = class_prob.split(boxes_per_image, dim=0) - - results = [] - for prob, boxes_per_img, image_shape in zip( - class_prob, proposals, image_shapes - ): - boxlist = self.prepare_boxlist(boxes_per_img, prob, image_shape) - boxlist = boxlist.clip_to_image(remove_empty=False) - boxlist = self.filter_results(boxlist, num_classes) - results.append(boxlist) - return results - - def prepare_boxlist(self, boxes, scores, image_shape): - """ - Returns BoxList from `boxes` and adds probability scores information - as an extra field - `boxes` has shape (#detections, 4 * #classes), where each row represents - a list of predicted bounding boxes for each of the object classes in the - dataset (including the background class). The detections in each row - originate from the same object proposal. - `scores` has shape (#detection, #classes), where each row represents a list - of object detection confidence scores for each of the object classes in the - dataset (including the background class). `scores[i, j]`` corresponds to the - box at `boxes[i, j * 4:(j + 1) * 4]`. - """ - boxes = boxes.reshape(-1, 4) - scores = scores.reshape(-1) - boxlist = BoxList(boxes, image_shape, mode="xyxy") - boxlist.add_field("scores", scores) - return boxlist - - def filter_results(self, boxlist, num_classes): - """Returns bounding-box detection results by thresholding on scores and - applying non-maximum suppression (NMS). 
- """ - # unwrap the boxlist to avoid additional overhead. - # if we had multi-class NMS, we could perform this directly on the boxlist - boxes = boxlist.bbox.reshape(-1, num_classes * 4) - scores = boxlist.get_field("scores").reshape(-1, num_classes) - - device = scores.device - result = [] - # Apply threshold on detection probabilities and apply NMS - # Skip j = 0, because it's the background class - inds_all = scores > self.score_thresh - for j in range(1, num_classes): - inds = inds_all[:, j].nonzero().squeeze(1) - scores_j = scores[inds, j] - boxes_j = boxes[inds, j * 4 : (j + 1) * 4] - boxlist_for_class = BoxList(boxes_j, boxlist.size, mode="xyxy") - boxlist_for_class.add_field("scores", scores_j) - boxlist_for_class = boxlist_nms( - boxlist_for_class, self.nms - ) - num_labels = len(boxlist_for_class) - boxlist_for_class.add_field( - "labels", torch.full((num_labels,), j, dtype=torch.int64, device=device) - ) - result.append(boxlist_for_class) - - result = cat_boxlist(result) - number_of_detections = len(result) - - # Limit to max_per_image detections **over all classes** - if number_of_detections > self.detections_per_img > 0: - cls_scores = result.get_field("scores") - image_thresh, _ = torch.kthvalue( - cls_scores.cpu(), number_of_detections - self.detections_per_img + 1 - ) - keep = cls_scores >= image_thresh.item() - keep = torch.nonzero(keep).squeeze(1) - result = result[keep] - return result - - -def make_roi_box_post_processor(cfg): - use_fpn = cfg.MODEL.ROI_HEADS.USE_FPN - - bbox_reg_weights = cfg.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS - box_coder = BoxCoder(weights=bbox_reg_weights) - - score_thresh = cfg.MODEL.ROI_HEADS.SCORE_THRESH - nms_thresh = cfg.MODEL.ROI_HEADS.NMS - detections_per_img = cfg.MODEL.ROI_HEADS.DETECTIONS_PER_IMG - cls_agnostic_bbox_reg = cfg.MODEL.CLS_AGNOSTIC_BBOX_REG - - postprocessor = PostProcessor( - score_thresh, - nms_thresh, - detections_per_img, - box_coder, - cls_agnostic_bbox_reg - ) - return postprocessor diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py deleted file mode 100644 index dbcb5ca19a01e3ae000986673d66def23f9c2eac..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py +++ /dev/null @@ -1,613 +0,0 @@ -from __future__ import annotations - -import io -from typing import TYPE_CHECKING, Any, cast - -import matplotlib.collections as mcollections -import matplotlib.pyplot as plt -import numpy as np - -from contourpy import FillType, LineType -from contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths, mpl_codes_to_offsets -from contourpy.util.renderer import Renderer - -if TYPE_CHECKING: - from matplotlib.axes import Axes - from matplotlib.figure import Figure - from numpy.typing import ArrayLike - - import contourpy._contourpy as cpy - - -class MplRenderer(Renderer): - _axes: Axes - _fig: Figure - _want_tight: bool - - """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range. - - Args: - nrows (int, optional): Number of rows of plots, default ``1``. - ncols (int, optional): Number of columns of plots, default ``1``. - figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``. - show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``. - backend (str, optional): Matplotlib backend to use or ``None`` for default backend. 
- Default ``None``. - gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``, - default None. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - backend: str | None = None, - gridspec_kw: dict[str, Any] | None = None, - ) -> None: - if backend is not None: - import matplotlib - matplotlib.use(backend) - - kwargs = dict(figsize=figsize, squeeze=False, sharex=True, sharey=True) - if gridspec_kw is not None: - kwargs["gridspec_kw"] = gridspec_kw - else: - kwargs["subplot_kw"] = dict(aspect="equal") - - self._fig, axes = plt.subplots(nrows, ncols, **kwargs) - self._axes = axes.flatten() - if not show_frame: - for ax in self._axes: - ax.axis("off") - - self._want_tight = True - - def __del__(self) -> None: - if hasattr(self, "_fig"): - plt.close(self._fig) - - def _autoscale(self) -> None: - # Using axes._need_autoscale attribute if need to autoscale before rendering after adding - # lines/filled. Only want to autoscale once per axes regardless of how many lines/filled - # added. - for ax in self._axes: - if getattr(ax, "_need_autoscale", False): - ax.autoscale_view(tight=True) - ax._need_autoscale = False - if self._want_tight and len(self._axes) > 1: - self._fig.tight_layout() - - def _get_ax(self, ax: Axes | int) -> Axes: - if isinstance(ax, int): - ax = self._axes[ax] - return ax - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 0.7, - ) -> None: - """Plot filled contours on a single Axes. - - Args: - filled (sequence of arrays): Filled contour data as returned by - :func:`~contourpy.ContourGenerator.filled`. - fill_type (FillType): Type of ``filled`` data, as returned by - :attr:`~contourpy.ContourGenerator.fill_type`. - ax (int or Maplotlib Axes, optional): Which axes to plot on, default ``0``. - color (str, optional): Color to plot with. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot with, default ``0.7``. - """ - ax = self._get_ax(ax) - paths = filled_to_mpl_paths(filled, fill_type) - collection = mcollections.PathCollection( - paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def grid( - self, - x: ArrayLike, - y: ArrayLike, - ax: Axes | int = 0, - color: str = "black", - alpha: float = 0.1, - point_color: str | None = None, - quad_as_tri_alpha: float = 0, - ) -> None: - """Plot quad grid lines on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot grid lines, default ``"black"``. - alpha (float, optional): Opacity to plot lines with, default ``0.1``. - point_color (str, optional): Color to plot grid points or ``None`` if grid points - should not be plotted, default ``None``. - quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0. - - Colors may be a string color or the letter ``"C"`` followed by an integer in the range - ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap. - - Warning: - ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked. 
- """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - kwargs = dict(color=color, alpha=alpha) - ax.plot(x, y, x.T, y.T, **kwargs) - if quad_as_tri_alpha > 0: - # Assumes no quad mask. - xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:]) - ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:]) - kwargs["alpha"] = quad_as_tri_alpha - ax.plot( - np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)), - np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)), - np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)), - np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)), - **kwargs) - if point_color is not None: - ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0) - ax._need_autoscale = True - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - ) -> None: - """Plot contour lines on a single Axes. - - Args: - lines (sequence of arrays): Contour line data as returned by - :func:`~contourpy.ContourGenerator.lines`. - line_type (LineType): Type of ``lines`` data, as returned by - :attr:`~contourpy.ContourGenerator.line_type`. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot lines. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot lines with, default ``1.0``. - linewidth (float, optional): Width of lines, default ``1``. - """ - ax = self._get_ax(ax) - paths = lines_to_mpl_paths(lines, line_type) - collection = mcollections.PathCollection( - paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def mask( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike | np.ma.MaskedArray[Any, Any], - ax: Axes | int = 0, - color: str = "black", - ) -> None: - """Plot masked out grid points as circles on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (masked array of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Circle color, default ``"black"``. - """ - mask = np.ma.getmask(z) # type: ignore[no-untyped-call] - if mask is np.ma.nomask: - return - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - ax.plot(x[mask], y[mask], "o", c=color) - - def save(self, filename: str, transparent: bool = False) -> None: - """Save plots to SVG or PNG file. - - Args: - filename (str): Filename to save to. - transparent (bool, optional): Whether background should be transparent, default - ``False``. - """ - self._autoscale() - self._fig.savefig(filename, transparent=transparent) - - def save_to_buffer(self) -> io.BytesIO: - """Save plots to an ``io.BytesIO`` buffer. - - Return: - BytesIO: PNG image buffer. - """ - self._autoscale() - buf = io.BytesIO() - self._fig.savefig(buf, format="png") - buf.seek(0) - return buf - - def show(self) -> None: - """Show plots in an interactive window, in the usual Matplotlib manner. - """ - self._autoscale() - plt.show() - - def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None: - """Set the title of a single Axes. - - Args: - title (str): Title text. 
- ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``. - color (str, optional): Color to set title. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color - that depends on the stylesheet in use. - """ - if color: - self._get_ax(ax).set_title(title, color=color) - else: - self._get_ax(ax).set_title(title) - - def z_values( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "green", - fmt: str = ".1f", - quad_as_tri: bool = False, - ) -> None: - """Show ``z`` values on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (array-like of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color of added text. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"green"``. - fmt (str, optional): Format to display z-values, default ``".1f"``. - quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers - of quads. - - Warning: - ``quad_as_tri=True`` shows z-values for all quads, even if masked. - """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center", - color=color, clip_on=True) - if quad_as_tri: - for j in range(ny-1): - for i in range(nx-1): - xx = np.mean(x[j:j+2, i:i+2]) - yy = np.mean(y[j:j+2, i:i+2]) - zz = np.mean(z[j:j+2, i:i+2]) - ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color, - clip_on=True) - - -class MplTestRenderer(MplRenderer): - """Test renderer implemented using Matplotlib. - - No whitespace around plots and no spines/ticks displayed. - Uses Agg backend, so can only save to file/buffer, cannot call ``show()``. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - ) -> None: - gridspec = { - "left": 0.01, - "right": 0.99, - "top": 0.99, - "bottom": 0.01, - "wspace": 0.01, - "hspace": 0.01, - } - super().__init__( - nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec, - ) - - for ax in self._axes: - ax.set_xmargin(0.0) - ax.set_ymargin(0.0) - ax.set_xticks([]) - ax.set_yticks([]) - - self._want_tight = False - - -class MplDebugRenderer(MplRenderer): - """Debug renderer implemented using Matplotlib. - - Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows, - text, etc. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - ) -> None: - super().__init__(nrows, ncols, figsize, show_frame) - - def _arrow( - self, - ax: Axes, - line_start: cpy.CoordinateArray, - line_end: cpy.CoordinateArray, - color: str, - alpha: float, - arrow_size: float, - ) -> None: - mid = 0.5*(line_start + line_end) - along = line_end - line_start - along /= np.sqrt(np.dot(along, along)) # Unit vector. 
- right = np.asarray((along[1], -along[0])) - arrow = np.stack(( - mid - (along*0.5 - right)*arrow_size, - mid + along*0.5*arrow_size, - mid - (along*0.5 + right)*arrow_size, - )) - ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha) - - def _filled_to_lists_of_points_and_offsets( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ) -> tuple[list[cpy.PointArray], list[cpy.OffsetArray]]: - if fill_type == FillType.OuterCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterCode, filled) - all_points = filled[0] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1]] - elif fill_type == FillType.ChunkCombinedCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCode, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1] if codes is not None] - elif fill_type == FillType.OuterOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterOffset, filled) - all_points = filled[0] - all_offsets = filled[1] - elif fill_type == FillType.ChunkCombinedOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [offsets for offsets in filled[1] if offsets is not None] - elif fill_type == FillType.ChunkCombinedCodeOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled) - all_points = [] - all_offsets = [] - for points, codes, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert codes is not None and outer_offsets is not None - all_points += np.split(points, outer_offsets[1:-1]) - all_codes = np.split(codes, outer_offsets[1:-1]) - all_offsets += [mpl_codes_to_offsets(codes) for codes in all_codes] - elif fill_type == FillType.ChunkCombinedOffsetOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled) - all_points = [] - all_offsets = [] - for points, offsets, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert offsets is not None and outer_offsets is not None - for i in range(len(outer_offsets)-1): - offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1] - all_points.append(points[offs[0]:offs[-1]]) - all_offsets.append(offs - offs[0]) - else: - raise RuntimeError(f"Rendering FillType {fill_type} not implemented") - - return all_points, all_offsets - - def _lines_to_list_of_points( - self, lines: cpy.LineReturn, line_type: LineType, - ) -> list[cpy.PointArray]: - if line_type == LineType.Separate: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_Separate, lines) - all_lines = lines - elif line_type == LineType.SeparateCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_SeparateCode, lines) - all_lines = lines[0] - elif line_type == LineType.ChunkCombinedCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedCode, lines) - all_lines = [] - for points, codes in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert codes is not None - offsets = mpl_codes_to_offsets(codes) - for i in range(len(offsets)-1): - all_lines.append(points[offsets[i]:offsets[i+1]]) - elif line_type == LineType.ChunkCombinedOffset: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines) - all_lines = [] - for points, all_offsets in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert all_offsets is not None - for i in range(len(all_offsets)-1): - 
all_lines.append(points[all_offsets[i]:all_offsets[i+1]]) - else: - raise RuntimeError(f"Rendering LineType {line_type} not implemented") - - return all_lines - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C1", - alpha: float = 0.7, - line_color: str = "C0", - line_alpha: float = 0.7, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().filled(filled, fill_type, ax, color, alpha) - - if line_color is None and point_color is None: - return - - ax = self._get_ax(ax) - all_points, all_offsets = self._filled_to_lists_of_points_and_offsets(filled, fill_type) - - # Lines. - if line_color is not None: - for points, offsets in zip(all_points, all_offsets): - for start, end in zip(offsets[:-1], offsets[1:]): - xys = points[start:end] - ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha) - - if arrow_size > 0.0: - n = len(xys) - for i in range(n-1): - self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size) - - # Points. - if point_color is not None: - for points, offsets in zip(all_points, all_offsets): - mask = np.ones(offsets[-1], dtype=bool) - mask[offsets[1:]-1] = False # Exclude end points. - if start_point_color is not None: - start_indices = offsets[:-1] - mask[start_indices] = False # Exclude start points. - ax.plot( - points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha) - - if start_point_color is not None: - ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o", - c=start_point_color, alpha=line_alpha) - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().lines(lines, line_type, ax, color, alpha, linewidth) - - if arrow_size == 0.0 and point_color is None: - return - - ax = self._get_ax(ax) - all_lines = self._lines_to_list_of_points(lines, line_type) - - if arrow_size > 0.0: - for line in all_lines: - for i in range(len(line)-1): - self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size) - - if point_color is not None: - for line in all_lines: - start_index = 0 - end_index = len(line) - if start_point_color is not None: - ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha) - start_index = 1 - if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]: - end_index -= 1 - ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o", - c=color, alpha=alpha) - - def point_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "red", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - quad = i + j*nx - ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color, - clip_on=True) - - def quad_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "blue", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(1, ny): - for i in range(1, nx): - quad = i + j*nx - xmid = x[j-1:j+1, i-1:i+1].mean() - ymid = y[j-1:j+1, i-1:i+1].mean() - ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True) - - def z_levels( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - 
lower_level: float, - upper_level: float | None = None, - ax: Axes | int = 0, - color: str = "green", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - zz = z[j, i] - if upper_level is not None and zz > upper_level: - z_level = 2 - elif zz > lower_level: - z_level = 1 - else: - z_level = 0 - ax.text(x[j, i], y[j, i], z_level, ha="left", va="bottom", color=color, - clip_on=True) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_M_A_P_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_M_A_P_.py deleted file mode 100644 index 39b0050c5f0591a2b36c21242863655ca1f3ef47..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_M_A_P_.py +++ /dev/null @@ -1,142 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import tobytes, tostr, safeEval -from . import DefaultTable - -GMAPFormat = """ - > # big endian - tableVersionMajor: H - tableVersionMinor: H - flags: H - recordsCount: H - recordsOffset: H - fontNameLength: H -""" -# psFontName is a byte string which follows the record above. This is zero padded -# to the beginning of the records array. The recordsOffsst is 32 bit aligned. - -GMAPRecordFormat1 = """ - > # big endian - UV: L - cid: H - gid: H - ggid: H - name: 32s -""" - - -class GMAPRecord(object): - def __init__(self, uv=0, cid=0, gid=0, ggid=0, name=""): - self.UV = uv - self.cid = cid - self.gid = gid - self.ggid = ggid - self.name = name - - def toXML(self, writer, ttFont): - writer.begintag("GMAPRecord") - writer.newline() - writer.simpletag("UV", value=self.UV) - writer.newline() - writer.simpletag("cid", value=self.cid) - writer.newline() - writer.simpletag("gid", value=self.gid) - writer.newline() - writer.simpletag("glyphletGid", value=self.gid) - writer.newline() - writer.simpletag("GlyphletName", value=self.name) - writer.newline() - writer.endtag("GMAPRecord") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - if name == "GlyphletName": - self.name = value - else: - setattr(self, name, safeEval(value)) - - def compile(self, ttFont): - if self.UV is None: - self.UV = 0 - nameLen = len(self.name) - if nameLen < 32: - self.name = self.name + "\0" * (32 - nameLen) - data = sstruct.pack(GMAPRecordFormat1, self) - return data - - def __repr__(self): - return ( - "GMAPRecord[ UV: " - + str(self.UV) - + ", cid: " - + str(self.cid) - + ", gid: " - + str(self.gid) - + ", ggid: " - + str(self.ggid) - + ", Glyphlet Name: " - + str(self.name) - + " ]" - ) - - -class table_G_M_A_P_(DefaultTable.DefaultTable): - - dependencies = [] - - def decompile(self, data, ttFont): - dummy, newData = sstruct.unpack2(GMAPFormat, data, self) - self.psFontName = tostr(newData[: self.fontNameLength]) - assert ( - self.recordsOffset % 4 - ) == 0, "GMAP error: recordsOffset is not 32 bit aligned." 
- newData = data[self.recordsOffset :] - self.gmapRecords = [] - for i in range(self.recordsCount): - gmapRecord, newData = sstruct.unpack2( - GMAPRecordFormat1, newData, GMAPRecord() - ) - gmapRecord.name = gmapRecord.name.strip("\0") - self.gmapRecords.append(gmapRecord) - - def compile(self, ttFont): - self.recordsCount = len(self.gmapRecords) - self.fontNameLength = len(self.psFontName) - self.recordsOffset = 4 * (((self.fontNameLength + 12) + 3) // 4) - data = sstruct.pack(GMAPFormat, self) - data = data + tobytes(self.psFontName) - data = data + b"\0" * (self.recordsOffset - len(data)) - for record in self.gmapRecords: - data = data + record.compile(ttFont) - return data - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(GMAPFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - writer.simpletag("PSFontName", value=self.psFontName) - writer.newline() - for gmapRecord in self.gmapRecords: - gmapRecord.toXML(writer, ttFont) - - def fromXML(self, name, attrs, content, ttFont): - if name == "GMAPRecord": - if not hasattr(self, "gmapRecords"): - self.gmapRecords = [] - gmapRecord = GMAPRecord() - self.gmapRecords.append(gmapRecord) - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - gmapRecord.fromXML(name, attrs, content, ttFont) - else: - value = attrs["value"] - if name == "PSFontName": - self.psFontName = value - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/caching.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/caching.py deleted file mode 100644 index 511de1dee8f3416cf89475c9393275748df00022..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/caching.py +++ /dev/null @@ -1,804 +0,0 @@ -import collections -import functools -import logging -import math -import os -import threading -import warnings -from concurrent.futures import ThreadPoolExecutor - -logger = logging.getLogger("fsspec") - - -class BaseCache(object): - """Pass-though cache: doesn't keep anything, calls every time - - Acts as base class for other cachers - - Parameters - ---------- - blocksize: int - How far to read ahead in numbers of bytes - fetcher: func - Function of the form f(start, end) which gets bytes from remote as - specified - size: int - How big this file is - """ - - name = "none" - - def __init__(self, blocksize, fetcher, size): - self.blocksize = blocksize - self.fetcher = fetcher - self.size = size - - def _fetch(self, start, stop): - if start is None: - start = 0 - if stop is None: - stop = self.size - if start >= self.size or start >= stop: - return b"" - return self.fetcher(start, stop) - - -class MMapCache(BaseCache): - """memory-mapped sparse file cache - - Opens temporary file, which is filled blocks-wise when data is requested. - Ensure there is enough disc space in the temporary location. 
- - This cache method might only work on posix - """ - - name = "mmap" - - def __init__(self, blocksize, fetcher, size, location=None, blocks=None): - super().__init__(blocksize, fetcher, size) - self.blocks = set() if blocks is None else blocks - self.location = location - self.cache = self._makefile() - - def _makefile(self): - import mmap - import tempfile - - if self.size == 0: - return bytearray() - - # posix version - if self.location is None or not os.path.exists(self.location): - if self.location is None: - fd = tempfile.TemporaryFile() - self.blocks = set() - else: - fd = open(self.location, "wb+") - fd.seek(self.size - 1) - fd.write(b"1") - fd.flush() - else: - fd = open(self.location, "rb+") - - return mmap.mmap(fd.fileno(), self.size) - - def _fetch(self, start, end): - logger.debug(f"MMap cache fetching {start}-{end}") - if start is None: - start = 0 - if end is None: - end = self.size - if start >= self.size or start >= end: - return b"" - start_block = start // self.blocksize - end_block = end // self.blocksize - need = [i for i in range(start_block, end_block + 1) if i not in self.blocks] - while need: - # TODO: not a for loop so we can consolidate blocks later to - # make fewer fetch calls; this could be parallel - i = need.pop(0) - sstart = i * self.blocksize - send = min(sstart + self.blocksize, self.size) - logger.debug(f"MMap get block #{i} ({sstart}-{send}") - self.cache[sstart:send] = self.fetcher(sstart, send) - self.blocks.add(i) - - return self.cache[start:end] - - def __getstate__(self): - state = self.__dict__.copy() - # Remove the unpicklable entries. - del state["cache"] - return state - - def __setstate__(self, state): - # Restore instance attributes - self.__dict__.update(state) - self.cache = self._makefile() - - -class ReadAheadCache(BaseCache): - """Cache which reads only when we get beyond a block of data - - This is a much simpler version of BytesCache, and does not attempt to - fill holes in the cache or keep fragments alive. It is best suited to - many small reads in a sequential order (e.g., reading lines from a file). - """ - - name = "readahead" - - def __init__(self, blocksize, fetcher, size): - super().__init__(blocksize, fetcher, size) - self.cache = b"" - self.start = 0 - self.end = 0 - - def _fetch(self, start, end): - if start is None: - start = 0 - if end is None or end > self.size: - end = self.size - if start >= self.size or start >= end: - return b"" - l = end - start - if start >= self.start and end <= self.end: - # cache hit - return self.cache[start - self.start : end - self.start] - elif self.start <= start < self.end: - # partial hit - part = self.cache[start - self.start :] - l -= len(part) - start = self.end - else: - # miss - part = b"" - end = min(self.size, end + self.blocksize) - self.cache = self.fetcher(start, end) # new block replaces old - self.start = start - self.end = self.start + len(self.cache) - return part + self.cache[:l] - - -class FirstChunkCache(BaseCache): - """Caches the first block of a file only - - This may be useful for file types where the metadata is stored in the header, - but is randomly accessed. 
- """ - - name = "first" - - def __init__(self, blocksize, fetcher, size): - super().__init__(blocksize, fetcher, size) - self.cache = None - - def _fetch(self, start, end): - start = start or 0 - end = end or self.size - if start < self.blocksize: - if self.cache is None: - if end > self.blocksize: - data = self.fetcher(0, end) - self.cache = data[: self.blocksize] - return data[start:] - self.cache = self.fetcher(0, self.blocksize) - part = self.cache[start:end] - if end > self.blocksize: - part += self.fetcher(self.blocksize, end) - return part - else: - return self.fetcher(start, end) - - -class BlockCache(BaseCache): - """ - Cache holding memory as a set of blocks. - - Requests are only ever made ``blocksize`` at a time, and are - stored in an LRU cache. The least recently accessed block is - discarded when more than ``maxblocks`` are stored. - - Parameters - ---------- - blocksize : int - The number of bytes to store in each block. - Requests are only ever made for ``blocksize``, so this - should balance the overhead of making a request against - the granularity of the blocks. - fetcher : Callable - size : int - The total size of the file being cached. - maxblocks : int - The maximum number of blocks to cache for. The maximum memory - use for this cache is then ``blocksize * maxblocks``. - """ - - name = "blockcache" - - def __init__(self, blocksize, fetcher, size, maxblocks=32): - super().__init__(blocksize, fetcher, size) - self.nblocks = math.ceil(size / blocksize) - self.maxblocks = maxblocks - self._fetch_block_cached = functools.lru_cache(maxblocks)(self._fetch_block) - - def __repr__(self): - return "".format( - self.blocksize, self.size, self.nblocks - ) - - def cache_info(self): - """ - The statistics on the block cache. - - Returns - ------- - NamedTuple - Returned directly from the LRU Cache used internally. - """ - return self._fetch_block_cached.cache_info() - - def __getstate__(self): - state = self.__dict__ - del state["_fetch_block_cached"] - return state - - def __setstate__(self, state): - self.__dict__.update(state) - self._fetch_block_cached = functools.lru_cache(state["maxblocks"])( - self._fetch_block - ) - - def _fetch(self, start, end): - if start is None: - start = 0 - if end is None: - end = self.size - if start >= self.size or start >= end: - return b"" - - # byte position -> block numbers - start_block_number = start // self.blocksize - end_block_number = end // self.blocksize - - # these are cached, so safe to do multiple calls for the same start and end. - for block_number in range(start_block_number, end_block_number + 1): - self._fetch_block_cached(block_number) - - return self._read_cache( - start, - end, - start_block_number=start_block_number, - end_block_number=end_block_number, - ) - - def _fetch_block(self, block_number): - """ - Fetch the block of data for `block_number`. - """ - if block_number > self.nblocks: - raise ValueError( - "'block_number={}' is greater than the number of blocks ({})".format( - block_number, self.nblocks - ) - ) - - start = block_number * self.blocksize - end = start + self.blocksize - logger.info("BlockCache fetching block %d", block_number) - block_contents = super()._fetch(start, end) - return block_contents - - def _read_cache(self, start, end, start_block_number, end_block_number): - """ - Read from our block cache. - - Parameters - ---------- - start, end : int - The start and end byte positions. - start_block_number, end_block_number : int - The start and end block numbers. 
- """ - start_pos = start % self.blocksize - end_pos = end % self.blocksize - - if start_block_number == end_block_number: - block = self._fetch_block_cached(start_block_number) - return block[start_pos:end_pos] - - else: - # read from the initial - out = [] - out.append(self._fetch_block_cached(start_block_number)[start_pos:]) - - # intermediate blocks - # Note: it'd be nice to combine these into one big request. However - # that doesn't play nicely with our LRU cache. - for block_number in range(start_block_number + 1, end_block_number): - out.append(self._fetch_block_cached(block_number)) - - # final block - out.append(self._fetch_block_cached(end_block_number)[:end_pos]) - - return b"".join(out) - - -class BytesCache(BaseCache): - """Cache which holds data in a in-memory bytes object - - Implements read-ahead by the block size, for semi-random reads progressing - through the file. - - Parameters - ---------- - trim: bool - As we read more data, whether to discard the start of the buffer when - we are more than a blocksize ahead of it. - """ - - name = "bytes" - - def __init__(self, blocksize, fetcher, size, trim=True): - super().__init__(blocksize, fetcher, size) - self.cache = b"" - self.start = None - self.end = None - self.trim = trim - - def _fetch(self, start, end): - # TODO: only set start/end after fetch, in case it fails? - # is this where retry logic might go? - if start is None: - start = 0 - if end is None: - end = self.size - if start >= self.size or start >= end: - return b"" - if ( - self.start is not None - and start >= self.start - and self.end is not None - and end < self.end - ): - # cache hit: we have all the required data - offset = start - self.start - return self.cache[offset : offset + end - start] - - if self.blocksize: - bend = min(self.size, end + self.blocksize) - else: - bend = end - - if bend == start or start > self.size: - return b"" - - if (self.start is None or start < self.start) and ( - self.end is None or end > self.end - ): - # First read, or extending both before and after - self.cache = self.fetcher(start, bend) - self.start = start - elif start < self.start: - if self.end - end > self.blocksize: - self.cache = self.fetcher(start, bend) - self.start = start - else: - new = self.fetcher(start, self.start) - self.start = start - self.cache = new + self.cache - elif bend > self.end: - if self.end > self.size: - pass - elif end - self.end > self.blocksize: - self.cache = self.fetcher(start, bend) - self.start = start - else: - new = self.fetcher(self.end, bend) - self.cache = self.cache + new - - self.end = self.start + len(self.cache) - offset = start - self.start - out = self.cache[offset : offset + end - start] - if self.trim: - num = (self.end - self.start) // (self.blocksize + 1) - if num > 1: - self.start += self.blocksize * num - self.cache = self.cache[self.blocksize * num :] - return out - - def __len__(self): - return len(self.cache) - - -class AllBytes(BaseCache): - """Cache entire contents of the file""" - - name = "all" - - def __init__(self, blocksize=None, fetcher=None, size=None, data=None): - super().__init__(blocksize, fetcher, size) - if data is None: - data = self.fetcher(0, self.size) - self.data = data - - def _fetch(self, start, end): - return self.data[start:end] - - -class KnownPartsOfAFile(BaseCache): - """ - Cache holding known file parts. 
- - Parameters - ---------- - blocksize: int - How far to read ahead in numbers of bytes - fetcher: func - Function of the form f(start, end) which gets bytes from remote as - specified - size: int - How big this file is - data: dict - A dictionary mapping explicit `(start, stop)` file-offset tuples - with known bytes. - strict: bool, default True - Whether to fetch reads that go beyond a known byte-range boundary. - If `False`, any read that ends outside a known part will be zero - padded. Note that zero padding will not be used for reads that - begin outside a known byte-range. - """ - - name = "parts" - - def __init__(self, blocksize, fetcher, size, data={}, strict=True, **_): - super(KnownPartsOfAFile, self).__init__(blocksize, fetcher, size) - self.strict = strict - - # simple consolidation of contiguous blocks - if data: - old_offsets = sorted(list(data.keys())) - offsets = [old_offsets[0]] - blocks = [data.pop(old_offsets[0])] - for start, stop in old_offsets[1:]: - start0, stop0 = offsets[-1] - if start == stop0: - offsets[-1] = (start0, stop) - blocks[-1] += data.pop((start, stop)) - else: - offsets.append((start, stop)) - blocks.append(data.pop((start, stop))) - - self.data = dict(zip(offsets, blocks)) - else: - self.data = data - - def _fetch(self, start, stop): - out = b"" - for (loc0, loc1), data in self.data.items(): - # If self.strict=False, use zero-padded data - # for reads beyond the end of a "known" buffer - if loc0 <= start < loc1: - off = start - loc0 - out = data[off : off + stop - start] - if not self.strict or loc0 <= stop <= loc1: - # The request is within a known range, or - # it begins within a known range, and we - # are allowed to pad reads beyond the - # buffer with zero - out += b"\x00" * (stop - start - len(out)) - return out - else: - # The request ends outside a known range, - # and we are being "strict" about reads - # beyond the buffer - start = loc1 - break - - # We only get here if there is a request outside the - # known parts of the file. In an ideal world, this - # should never happen - if self.fetcher is None: - # We cannot fetch the data, so raise an error - raise ValueError(f"Read is outside the known file parts: {(start, stop)}. ") - # We can fetch the data, but should warn the user - # that this may be slow - warnings.warn( - f"Read is outside the known file parts: {(start, stop)}. " - f"IO/caching performance may be poor!" 
- ) - logger.debug(f"KnownPartsOfAFile cache fetching {start}-{stop}") - return out + super()._fetch(start, stop) - - -class UpdatableLRU: - """ - Custom implementation of LRU cache that allows updating keys - - Used by BackgroudBlockCache - """ - - CacheInfo = collections.namedtuple( - "CacheInfo", ["hits", "misses", "maxsize", "currsize"] - ) - - def __init__(self, func, max_size=128): - self._cache = collections.OrderedDict() - self._func = func - self._max_size = max_size - self._hits = 0 - self._misses = 0 - self._lock = threading.Lock() - - def __call__(self, *args): - with self._lock: - if args in self._cache: - self._cache.move_to_end(args) - self._hits += 1 - return self._cache[args] - - result = self._func(*args) - - with self._lock: - self._cache[args] = result - self._misses += 1 - if len(self._cache) > self._max_size: - self._cache.popitem(last=False) - - return result - - def is_key_cached(self, *args): - with self._lock: - return args in self._cache - - def add_key(self, result, *args): - with self._lock: - self._cache[args] = result - if len(self._cache) > self._max_size: - self._cache.popitem(last=False) - - def cache_info(self): - with self._lock: - return self.CacheInfo( - maxsize=self._max_size, - currsize=len(self._cache), - hits=self._hits, - misses=self._misses, - ) - - -class BackgroundBlockCache(BaseCache): - """ - Cache holding memory as a set of blocks with pre-loading of - the next block in the background. - - Requests are only ever made ``blocksize`` at a time, and are - stored in an LRU cache. The least recently accessed block is - discarded when more than ``maxblocks`` are stored. If the - next block is not in cache, it is loaded in a separate thread - in non-blocking way. - - Parameters - ---------- - blocksize : int - The number of bytes to store in each block. - Requests are only ever made for ``blocksize``, so this - should balance the overhead of making a request against - the granularity of the blocks. - fetcher : Callable - size : int - The total size of the file being cached. - maxblocks : int - The maximum number of blocks to cache for. The maximum memory - use for this cache is then ``blocksize * maxblocks``. - """ - - name = "background" - - def __init__(self, blocksize, fetcher, size, maxblocks=32): - super().__init__(blocksize, fetcher, size) - self.nblocks = math.ceil(size / blocksize) - self.maxblocks = maxblocks - self._fetch_block_cached = UpdatableLRU(self._fetch_block, maxblocks) - - self._thread_executor = ThreadPoolExecutor(max_workers=1) - self._fetch_future_block_number = None - self._fetch_future = None - self._fetch_future_lock = threading.Lock() - - def __repr__(self): - return "".format( - self.blocksize, self.size, self.nblocks - ) - - def cache_info(self): - """ - The statistics on the block cache. - - Returns - ------- - NamedTuple - Returned directly from the LRU Cache used internally. 
- """ - return self._fetch_block_cached.cache_info() - - def __getstate__(self): - state = self.__dict__ - del state["_fetch_block_cached"] - del state["_thread_executor"] - del state["_fetch_future_block_number"] - del state["_fetch_future"] - del state["_fetch_future_lock"] - return state - - def __setstate__(self, state): - self.__dict__.update(state) - self._fetch_block_cached = UpdatableLRU(self._fetch_block, state["maxblocks"]) - self._thread_executor = ThreadPoolExecutor(max_workers=1) - self._fetch_future_block_number = None - self._fetch_future = None - self._fetch_future_lock = threading.Lock() - - def _fetch(self, start, end): - if start is None: - start = 0 - if end is None: - end = self.size - if start >= self.size or start >= end: - return b"" - - # byte position -> block numbers - start_block_number = start // self.blocksize - end_block_number = end // self.blocksize - - fetch_future_block_number = None - fetch_future = None - with self._fetch_future_lock: - # Background thread is running. Check we we can or must join it. - if self._fetch_future is not None: - if self._fetch_future.done(): - logger.info("BlockCache joined background fetch without waiting.") - self._fetch_block_cached.add_key( - self._fetch_future.result(), self._fetch_future_block_number - ) - # Cleanup the fetch variables. Done with fetching the block. - self._fetch_future_block_number = None - self._fetch_future = None - else: - # Must join if we need the block for the current fetch - must_join = bool( - start_block_number - <= self._fetch_future_block_number - <= end_block_number - ) - if must_join: - # Copy to the local variables to release lock - # before waiting for result - fetch_future_block_number = self._fetch_future_block_number - fetch_future = self._fetch_future - - # Cleanup the fetch variables. Have a local copy. - self._fetch_future_block_number = None - self._fetch_future = None - - # Need to wait for the future for the current read - if fetch_future is not None: - logger.info("BlockCache waiting for background fetch.") - # Wait until result and put it in cache - self._fetch_block_cached.add_key( - fetch_future.result(), fetch_future_block_number - ) - - # these are cached, so safe to do multiple calls for the same start and end. - for block_number in range(start_block_number, end_block_number + 1): - self._fetch_block_cached(block_number) - - # fetch next block in the background if nothing is running in the background, - # the block is within file and it is not already cached - end_block_plus_1 = end_block_number + 1 - with self._fetch_future_lock: - if ( - self._fetch_future is None - and end_block_plus_1 <= self.nblocks - and not self._fetch_block_cached.is_key_cached(end_block_plus_1) - ): - self._fetch_future_block_number = end_block_plus_1 - self._fetch_future = self._thread_executor.submit( - self._fetch_block, end_block_plus_1, "async" - ) - - return self._read_cache( - start, - end, - start_block_number=start_block_number, - end_block_number=end_block_number, - ) - - def _fetch_block(self, block_number, log_info="sync"): - """ - Fetch the block of data for `block_number`. 
- """ - if block_number > self.nblocks: - raise ValueError( - "'block_number={}' is greater than the number of blocks ({})".format( - block_number, self.nblocks - ) - ) - - start = block_number * self.blocksize - end = start + self.blocksize - logger.info("BlockCache fetching block (%s) %d", log_info, block_number) - block_contents = super()._fetch(start, end) - return block_contents - - def _read_cache(self, start, end, start_block_number, end_block_number): - """ - Read from our block cache. - - Parameters - ---------- - start, end : int - The start and end byte positions. - start_block_number, end_block_number : int - The start and end block numbers. - """ - start_pos = start % self.blocksize - end_pos = end % self.blocksize - - if start_block_number == end_block_number: - block = self._fetch_block_cached(start_block_number) - return block[start_pos:end_pos] - - else: - # read from the initial - out = [] - out.append(self._fetch_block_cached(start_block_number)[start_pos:]) - - # intermediate blocks - # Note: it'd be nice to combine these into one big request. However - # that doesn't play nicely with our LRU cache. - for block_number in range(start_block_number + 1, end_block_number): - out.append(self._fetch_block_cached(block_number)) - - # final block - out.append(self._fetch_block_cached(end_block_number)[:end_pos]) - - return b"".join(out) - - -caches = { - # one custom case - None: BaseCache, -} - - -def register_cache(cls, clobber=False): - """'Register' cache implementation. - - Parameters - ---------- - clobber: bool, optional - If set to True (default is False) - allow to overwrite existing - entry. - - Raises - ------ - ValueError - """ - name = cls.name - if not clobber and name in caches: - raise ValueError(f"Cache with name {name!r} is already known: {caches[name]}") - caches[name] = cls - - -for c in ( - BaseCache, - MMapCache, - BytesCache, - ReadAheadCache, - BlockCache, - FirstChunkCache, - AllBytes, - KnownPartsOfAFile, - BackgroundBlockCache, -): - register_cache(c) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/github.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/github.py deleted file mode 100644 index b148124d7481bb867cb100ad1ab2213e6acadf56..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/github.py +++ /dev/null @@ -1,219 +0,0 @@ -import requests - -from ..spec import AbstractFileSystem -from ..utils import infer_storage_options -from .memory import MemoryFile - -# TODO: add GIST backend, would be very similar - - -class GithubFileSystem(AbstractFileSystem): - """Interface to files in github - - An instance of this class provides the files residing within a remote github - repository. You may specify a point in the repos history, by SHA, branch - or tag (default is current master). - - Given that code files tend to be small, and that github does not support - retrieving partial content, we always fetch whole files. 
- - When using fsspec.open, allows URIs of the form: - - - "github://path/file", in which case you must specify org, repo and - may specify sha in the extra args - - 'github://org:repo@/precip/catalog.yml', where the org and repo are - part of the URI - - 'github://org:repo@sha/precip/catalog.yml', where the sha is also included - - ``sha`` can be the full or abbreviated hex of the commit you want to fetch - from, or a branch or tag name (so long as it doesn't contain special characters - like "/", "?", which would have to be HTTP-encoded). - - For authorised access, you must provide username and token, which can be made - at https://github.com/settings/tokens - """ - - url = "https://api.github.com/repos/{org}/{repo}/git/trees/{sha}" - rurl = "https://raw.githubusercontent.com/{org}/{repo}/{sha}/{path}" - protocol = "github" - - def __init__(self, org, repo, sha=None, username=None, token=None, **kwargs): - super().__init__(**kwargs) - self.org = org - self.repo = repo - if (username is None) ^ (token is None): - raise ValueError("Auth required both username and token") - self.username = username - self.token = token - if sha is None: - # look up default branch (not necessarily "master") - u = "https://api.github.com/repos/{org}/{repo}" - r = requests.get(u.format(org=org, repo=repo), **self.kw) - r.raise_for_status() - sha = r.json()["default_branch"] - - self.root = sha - self.ls("") - - @property - def kw(self): - if self.username: - return {"auth": (self.username, self.token)} - return {} - - @classmethod - def repos(cls, org_or_user, is_org=True): - """List repo names for given org or user - - This may become the top level of the FS - - Parameters - ---------- - org_or_user: str - Name of the github org or user to query - is_org: bool (default True) - Whether the name is an organisation (True) or user (False) - - Returns - ------- - List of string - """ - r = requests.get( - "https://api.github.com/{part}/{org}/repos".format( - part=["users", "orgs"][is_org], org=org_or_user - ) - ) - r.raise_for_status() - return [repo["name"] for repo in r.json()] - - @property - def tags(self): - """Names of tags in the repo""" - r = requests.get( - "https://api.github.com/repos/{org}/{repo}/tags" - "".format(org=self.org, repo=self.repo), - **self.kw, - ) - r.raise_for_status() - return [t["name"] for t in r.json()] - - @property - def branches(self): - """Names of branches in the repo""" - r = requests.get( - "https://api.github.com/repos/{org}/{repo}/branches" - "".format(org=self.org, repo=self.repo), - **self.kw, - ) - r.raise_for_status() - return [t["name"] for t in r.json()] - - @property - def refs(self): - """Named references, tags and branches""" - return {"tags": self.tags, "branches": self.branches} - - def ls(self, path, detail=False, sha=None, _sha=None, **kwargs): - """List files at given path - - Parameters - ---------- - path: str - Location to list, relative to repo root - detail: bool - If True, returns list of dicts, one per file; if False, returns - list of full filenames only - sha: str (optional) - List at the given point in the repo history, branch or tag name or commit - SHA - _sha: str (optional) - List this specific tree object (used internally to descend into trees) - """ - path = self._strip_protocol(path) - if path == "": - _sha = sha or self.root - if _sha is None: - parts = path.rstrip("/").split("/") - so_far = "" - _sha = sha or self.root - for part in parts: - out = self.ls(so_far, True, sha=sha, _sha=_sha) - so_far += "/" + part if so_far else part - out = [o 
for o in out if o["name"] == so_far] - if not out: - raise FileNotFoundError(path) - out = out[0] - if out["type"] == "file": - if detail: - return [out] - else: - return path - _sha = out["sha"] - if path not in self.dircache or sha not in [self.root, None]: - r = requests.get( - self.url.format(org=self.org, repo=self.repo, sha=_sha), **self.kw - ) - if r.status_code == 404: - raise FileNotFoundError(path) - r.raise_for_status() - types = {"blob": "file", "tree": "directory"} - out = [ - { - "name": path + "/" + f["path"] if path else f["path"], - "mode": f["mode"], - "type": types[f["type"]], - "size": f.get("size", 0), - "sha": f["sha"], - } - for f in r.json()["tree"] - if f["type"] in types - ] - if sha in [self.root, None]: - self.dircache[path] = out - else: - out = self.dircache[path] - if detail: - return out - else: - return sorted([f["name"] for f in out]) - - def invalidate_cache(self, path=None): - self.dircache.clear() - - @classmethod - def _strip_protocol(cls, path): - opts = infer_storage_options(path) - if "username" not in opts: - return super()._strip_protocol(path) - return opts["path"].lstrip("/") - - @staticmethod - def _get_kwargs_from_urls(path): - opts = infer_storage_options(path) - if "username" not in opts: - return {} - out = {"org": opts["username"], "repo": opts["password"]} - if opts["host"]: - out["sha"] = opts["host"] - return out - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - sha=None, - **kwargs, - ): - if mode != "rb": - raise NotImplementedError - url = self.rurl.format( - org=self.org, repo=self.repo, path=path, sha=sha or self.root - ) - r = requests.get(url, **self.kw) - if r.status_code == 404: - raise FileNotFoundError(path) - r.raise_for_status() - return MemoryFile(None, None, r.content) diff --git a/spaces/DemoLou/moe-tts/text/cantonese.py b/spaces/DemoLou/moe-tts/text/cantonese.py deleted file mode 100644 index 32eae72ef7eb43d493da6d6f75dd46176d0e8808..0000000000000000000000000000000000000000 --- a/spaces/DemoLou/moe-tts/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/DragGan/DragGan-Inversion/PTI/criteria/l2_loss.py b/spaces/DragGan/DragGan-Inversion/PTI/criteria/l2_loss.py deleted file mode 100644 index c7ac2753b02dfa9d21ccf03fa3b87b9d6fc3f01d..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/criteria/l2_loss.py +++ /dev/null @@ -1,8 +0,0 @@ -import torch - -l2_criterion = torch.nn.MSELoss(reduction='mean') - - -def l2_loss(real_images, generated_images): - loss = l2_criterion(real_images, generated_images) - return loss diff --git a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/deploy/infer.py b/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/deploy/infer.py deleted file mode 100644 index df4dd04762fec4a78c423075fc5f45d892171af9..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/deploy/infer.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - - -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import codecs -import os -import time - -import yaml -import numpy as np -import cv2 -import paddle -import paddleseg.transforms as T -from paddle.inference import create_predictor, PrecisionType -from paddle.inference import Config as PredictConfig -from paddleseg.core.infer import reverse_transform -from paddleseg.cvlibs import manager -from paddleseg.utils import TimeAverager - -from ..scripts.optic_flow_process import optic_flow_process - - -class DeployConfig: - def __init__(self, path): - with codecs.open(path, 'r', 'utf-8') as file: - self.dic = yaml.load(file, Loader=yaml.FullLoader) - - self._transforms = self._load_transforms(self.dic['Deploy'][ - 'transforms']) - self._dir = os.path.dirname(path) - - @property - def transforms(self): - return self._transforms - - @property - def model(self): - return os.path.join(self._dir, self.dic['Deploy']['model']) - - @property - def params(self): - return os.path.join(self._dir, self.dic['Deploy']['params']) - - def _load_transforms(self, t_list): - com = manager.TRANSFORMS - transforms = [] - for t in t_list: - ctype = t.pop('type') - transforms.append(com[ctype](**t)) - - return transforms - - -class Predictor: - def __init__(self, args): - self.cfg = DeployConfig(args.cfg) - self.args = args - self.compose = T.Compose(self.cfg.transforms) - resize_h, resize_w = args.input_shape - - self.disflow = cv2.DISOpticalFlow_create( - cv2.DISOPTICAL_FLOW_PRESET_ULTRAFAST) - self.prev_gray = np.zeros((resize_h, resize_w), np.uint8) - self.prev_cfd = np.zeros((resize_h, resize_w), np.float32) - self.is_init = True - - pred_cfg = PredictConfig(self.cfg.model, self.cfg.params) - pred_cfg.disable_glog_info() - if self.args.use_gpu: - pred_cfg.enable_use_gpu(100, 0) - - self.predictor = create_predictor(pred_cfg) - if self.args.test_speed: - self.cost_averager = TimeAverager() - - def preprocess(self, img): - ori_shapes = [] - processed_imgs = [] - 
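        # self.compose is the paddleseg T.Compose built from the deploy-config
        # transforms; calling it returns a tuple whose first element is the
        # transformed image, hence the [0] below.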
processed_img = self.compose(img)[0] - processed_imgs.append(processed_img) - ori_shapes.append(img.shape) - return processed_imgs, ori_shapes - - def run(self, img, bg): - input_names = self.predictor.get_input_names() - input_handle = self.predictor.get_input_handle(input_names[0]) - processed_imgs, ori_shapes = self.preprocess(img) - data = np.array(processed_imgs) - input_handle.reshape(data.shape) - input_handle.copy_from_cpu(data) - if self.args.test_speed: - start = time.time() - - self.predictor.run() - - if self.args.test_speed: - self.cost_averager.record(time.time() - start) - output_names = self.predictor.get_output_names() - output_handle = self.predictor.get_output_handle(output_names[0]) - output = output_handle.copy_to_cpu() - return self.postprocess(output, img, ori_shapes[0], bg) - - - def postprocess(self, pred, img, ori_shape, bg): - if not os.path.exists(self.args.save_dir): - os.makedirs(self.args.save_dir) - resize_w = pred.shape[-1] - resize_h = pred.shape[-2] - if self.args.soft_predict: - if self.args.use_optic_flow: - score_map = pred[:, 1, :, :].squeeze(0) - score_map = 255 * score_map - cur_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - cur_gray = cv2.resize(cur_gray, (resize_w, resize_h)) - optflow_map = optic_flow_process(cur_gray, score_map, self.prev_gray, self.prev_cfd, \ - self.disflow, self.is_init) - self.prev_gray = cur_gray.copy() - self.prev_cfd = optflow_map.copy() - self.is_init = False - - score_map = np.repeat(optflow_map[:, :, np.newaxis], 3, axis=2) - score_map = np.transpose(score_map, [2, 0, 1])[np.newaxis, ...] - score_map = reverse_transform( - paddle.to_tensor(score_map), - ori_shape, - self.cfg.transforms, - mode='bilinear') - alpha = np.transpose(score_map.numpy().squeeze(0), - [1, 2, 0]) / 255 - else: - score_map = pred[:, 1, :, :] - score_map = score_map[np.newaxis, ...] - score_map = reverse_transform( - paddle.to_tensor(score_map), - ori_shape, - self.cfg.transforms, - mode='bilinear') - alpha = np.transpose(score_map.numpy().squeeze(0), [1, 2, 0]) - - else: - if pred.ndim == 3: - pred = pred[:, np.newaxis, ...] 
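            # reverse_transform undoes the deploy-config resizing (bilinear) so the
            # prediction is mapped back to the original image shape before the
            # optional argmax below.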
- result = reverse_transform( - paddle.to_tensor( - pred, dtype='float32'), - ori_shape, - self.cfg.transforms, - mode='bilinear') - - result = np.array(result) - if self.args.add_argmax: - result = np.argmax(result, axis=1) - else: - result = result.squeeze(1) - alpha = np.transpose(result, [1, 2, 0]) - - # background replace - h, w, _ = img.shape - if bg is None: - bg = np.ones_like(img)*255 - else: - bg = cv2.resize(bg, (w, h)) - if bg.ndim == 2: - bg = bg[..., np.newaxis] - - comb = (alpha * img + (1 - alpha) * bg).astype(np.uint8) - return comb, alpha, bg, img diff --git a/spaces/Dragonnext/scylla-proxy/Dockerfile b/spaces/Dragonnext/scylla-proxy/Dockerfile deleted file mode 100644 index ba66edd3728c568e3b754d972b7b567679344e6f..0000000000000000000000000000000000000000 --- a/spaces/Dragonnext/scylla-proxy/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/Drago/oai-reverse-proxy.git /app -WORKDIR /app -RUN git checkout main -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/ECCV2022/bytetrack/exps/default/yolox_x.py b/spaces/ECCV2022/bytetrack/exps/default/yolox_x.py deleted file mode 100644 index ac498a1fb91f597e9362c2b73a9a002cf31445fc..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/exps/default/yolox_x.py +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 1.33 - self.width = 1.25 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] diff --git a/spaces/ELITE-library/ELITE/app.py b/spaces/ELITE-library/ELITE/app.py deleted file mode 100644 index 580e7ccec93ff173c6a6adc3aee70af06f48fd39..0000000000000000000000000000000000000000 --- a/spaces/ELITE-library/ELITE/app.py +++ /dev/null @@ -1,100 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import pathlib - -import gradio as gr - -from model import Model - -repo_dir = pathlib.Path(__file__).parent - - -def create_demo(): - - - TITLE = '# [ELITE Demo](https://github.com/csyxwei/ELITE)' - - USAGE='''To run the demo, you should: - 1. Upload your image. - 2. **Draw a mask on the object part.** - 3. Input proper text prompts, such as "A photo of S" or "A S wearing sunglasses", where "S" denotes your customized concept. - 4. Click the Run button. You can also adjust the hyperparameters to improve the results. - ''' - - model = Model() - - with gr.Blocks(css=repo_dir / 'style.css') as demo: - gr.Markdown(TITLE) - gr.Markdown(USAGE) - with gr.Row(): - with gr.Column(): - with gr.Box(): - image = gr.Image(label='Input', tool='sketch', type='pil') - # gr.Markdown('Draw a mask on your object.') - gr.Markdown('Upload your image and **draw a mask on the object part.** Like [this](https://user-images.githubusercontent.com/23421814/224873479-c4cf44d6-8c99-4ef9-b972-87c25fe923ee.png).') - prompt = gr.Text( - label='Prompt', - placeholder='e.g. "A photo of S", "A S wearing sunglasses"', - info='Use "S" for your concept.') - lambda_ = gr.Slider( - label='Lambda', - minimum=0, - maximum=1.5, - step=0.1, - value=0.6, - info= - 'The larger the lambda, the more consistency between the generated image and the input image, but less editability.' 
- ) - run_button = gr.Button('Run') - with gr.Accordion(label='Advanced options', open=False): - seed = gr.Slider( - label='Seed', - minimum=-1, - maximum=1000000, - step=1, - value=-1, - info= - 'If set to -1, a different seed will be used each time.' - ) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0, - maximum=50, - step=0.1, - value=5.0) - num_steps = gr.Slider( - label='Steps', - minimum=1, - maximum=100, - step=1, - value=300, - info= - 'In the paper, the number of steps is set to 100, but in this demo the default value is 20 to reduce inference time.' - ) - with gr.Column(): - result = gr.Image(label='Result') - - paths = sorted([ - path.as_posix() - for path in (repo_dir / 'ELITE/test_datasets').glob('*') - if 'bg' not in path.stem - ]) - gr.Examples(examples=paths, inputs=image, examples_per_page=20) - - inputs = [ - image, - prompt, - seed, - guidance_scale, - lambda_, - num_steps, - ] - prompt.submit(fn=model.run, inputs=inputs, outputs=result) - run_button.click(fn=model.run, inputs=inputs, outputs=result) - return demo - - -if __name__ == '__main__': - demo = create_demo() - demo.queue(api_open=False).launch() diff --git a/spaces/Eddycrack864/Applio-Inference/tools/rvc_for_realtime.py b/spaces/Eddycrack864/Applio-Inference/tools/rvc_for_realtime.py deleted file mode 100644 index f746cde4dfd9c3b87fe844304aa3a975d68b3433..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/tools/rvc_for_realtime.py +++ /dev/null @@ -1,381 +0,0 @@ -import os -import sys -import traceback -import logging - -logger = logging.getLogger(__name__) - -from time import time as ttime - -import fairseq -import faiss -import numpy as np -import parselmouth -import pyworld -import scipy.signal as signal -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchcrepe - -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) - -now_dir = os.getcwd() -sys.path.append(now_dir) -from multiprocessing import Manager as M - -from configs.config import Config - -config = Config() - -mm = M() -if config.dml == True: - - def forward_dml(ctx, x, scale): - ctx.scale = scale - res = x.clone().detach() - return res - - fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml - - -# config.device=torch.device("cpu")########强制cpu测试 -# config.is_half=False########强制cpu测试 -class RVC: - def __init__( - self, - key, - pth_path, - index_path, - index_rate, - n_cpu, - inp_q, - opt_q, - device, - last_rvc=None, - ) -> None: - """ - 初始化 - """ - try: - global config - self.inp_q = inp_q - self.opt_q = opt_q - # device="cpu"########强制cpu测试 - self.device = device - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.sr = 16000 - self.window = 160 - self.n_cpu = n_cpu - if index_rate != 0: - self.index = faiss.read_index(index_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - logger.info("Index search enabled") - self.pth_path = pth_path - self.index_path = index_path - self.index_rate = index_rate - - if last_rvc is None: - models, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task( - ["assets/hubert/hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if config.is_half: - hubert_model = 
hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - self.model = hubert_model - else: - self.model = last_rvc.model - - if last_rvc is None or last_rvc.pth_path != self.pth_path: - cpt = torch.load(self.pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - logger.debug(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - # print(2333333333,device,config.device,self.device)#net_g是device,hubert是config.device - if config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - self.is_half = config.is_half - else: - self.tgt_sr = last_rvc.tgt_sr - self.if_f0 = last_rvc.if_f0 - self.version = last_rvc.version - self.net_g = last_rvc.net_g - self.is_half = last_rvc.is_half - - if last_rvc is not None and hasattr(last_rvc, "model_rmvpe"): - self.model_rmvpe = last_rvc.model_rmvpe - except: - logger.warn(traceback.format_exc()) - - def change_key(self, new_key): - self.f0_up_key = new_key - - def change_index_rate(self, new_index_rate): - if new_index_rate != 0 and self.index_rate == 0: - self.index = faiss.read_index(self.index_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - logger.info("Index search enabled") - self.index_rate = new_index_rate - - def get_f0_post(self, f0): - f0_min = self.f0_min - f0_max = self.f0_max - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int32) - return f0_coarse, f0bak - - def get_f0(self, x, f0_up_key, n_cpu, method="harvest"): - n_cpu = int(n_cpu) - if method == "crepe": - return self.get_f0_crepe(x, f0_up_key) - if method == "rmvpe": - return self.get_f0_rmvpe(x, f0_up_key) - if method == "pm": - p_len = x.shape[0] // 160 + 1 - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=0.01, - voicing_threshold=0.6, - pitch_floor=50, - pitch_ceiling=1100, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - # print(pad_size, p_len - len(f0) - pad_size) - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - if n_cpu == 1: - f0, t = pyworld.harvest( - x.astype(np.double), - fs=16000, - f0_ceil=1100, - f0_floor=50, - frame_period=10, - ) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - f0bak = np.zeros(x.shape[0] // 160 + 1, dtype=np.float64) - length = len(x) - part_length = 160 * ((length // 160 - 1) // n_cpu + 1) - n_cpu = (length // 160 - 1) // (part_length // 160) + 1 - ts = ttime() - res_f0 = mm.dict() - for idx in range(n_cpu): - tail = 
part_length * (idx + 1) + 320 - if idx == 0: - self.inp_q.put((idx, x[:tail], res_f0, n_cpu, ts)) - else: - self.inp_q.put( - (idx, x[part_length * idx - 320 : tail], res_f0, n_cpu, ts) - ) - while 1: - res_ts = self.opt_q.get() - if res_ts == ts: - break - f0s = [i[1] for i in sorted(res_f0.items(), key=lambda x: x[0])] - for idx, f0 in enumerate(f0s): - if idx == 0: - f0 = f0[:-3] - elif idx != n_cpu - 1: - f0 = f0[2:-3] - else: - f0 = f0[2:] - f0bak[ - part_length * idx // 160 : part_length * idx // 160 + f0.shape[0] - ] = f0 - f0bak = signal.medfilt(f0bak, 3) - f0bak *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0bak) - - def get_f0_crepe(self, x, f0_up_key): - if "privateuseone" in str(self.device): ###不支持dml,cpu又太慢用不成,拿pm顶替 - return self.get_f0(x, f0_up_key, 1, "pm") - audio = torch.tensor(np.copy(x))[None].float() - # print("using crepe,device:%s"%self.device) - f0, pd = torchcrepe.predict( - audio, - self.sr, - 160, - self.f0_min, - self.f0_max, - "full", - batch_size=512, - # device=self.device if self.device.type!="privateuseone" else "cpu",###crepe不用半精度全部是全精度所以不愁###cpu延迟高到没法用 - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - - def get_f0_rmvpe(self, x, f0_up_key): - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - logger.info("Loading rmvpe model") - self.model_rmvpe = RMVPE( - # "rmvpe.pt", is_half=self.is_half if self.device.type!="privateuseone" else False, device=self.device if self.device.type!="privateuseone"else "cpu"####dml时强制对rmvpe用cpu跑 - # "rmvpe.pt", is_half=False, device=self.device####dml配置 - # "rmvpe.pt", is_half=False, device="cpu"####锁定cpu配置 - "assets/rmvpe/rmvpe.pt", - is_half=self.is_half, - device=self.device, ####正常逻辑 - ) - # self.model_rmvpe = RMVPE("aug2_58000_half.pt", is_half=self.is_half, device=self.device) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - - def infer( - self, - feats: torch.Tensor, - indata: np.ndarray, - block_frame_16k, - rate, - cache_pitch, - cache_pitchf, - f0method, - ) -> np.ndarray: - feats = feats.view(1, -1) - if config.is_half: - feats = feats.half() - else: - feats = feats.float() - feats = feats.to(self.device) - t1 = ttime() - with torch.no_grad(): - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - inputs = { - "source": feats, - "padding_mask": padding_mask, - "output_layer": 9 if self.version == "v1" else 12, - } - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - feats = F.pad(feats, (0, 0, 1, 0)) - t2 = ttime() - try: - if hasattr(self, "index") and self.index_rate != 0: - leng_replace_head = int(rate * feats[0].shape[0]) - npy = feats[0][-leng_replace_head:].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if config.is_half: - npy = npy.astype("float16") - feats[0][-leng_replace_head:] = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * self.index_rate - + (1 - self.index_rate) * feats[0][-leng_replace_head:] - ) - else: - logger.warn("Index search FAILED or disabled") - except: - traceback.print_exc() - logger.warn("Index search 
FAILED") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t3 = ttime() - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(indata, self.f0_up_key, self.n_cpu, f0method) - start_frame = block_frame_16k // 160 - end_frame = len(cache_pitch) - (pitch.shape[0] - 4) + start_frame - cache_pitch[:] = np.append(cache_pitch[start_frame:end_frame], pitch[3:-1]) - cache_pitchf[:] = np.append( - cache_pitchf[start_frame:end_frame], pitchf[3:-1] - ) - p_len = min(feats.shape[1], 13000, cache_pitch.shape[0]) - else: - cache_pitch, cache_pitchf = None, None - p_len = min(feats.shape[1], 13000) - t4 = ttime() - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - cache_pitch = cache_pitch[:p_len] - cache_pitchf = cache_pitchf[:p_len] - cache_pitch = torch.LongTensor(cache_pitch).unsqueeze(0).to(self.device) - cache_pitchf = torch.FloatTensor(cache_pitchf).unsqueeze(0).to(self.device) - p_len = torch.LongTensor([p_len]).to(self.device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(self.device) - with torch.no_grad(): - if self.if_f0 == 1: - # print(12222222222,feats.device,p_len.device,cache_pitch.device,cache_pitchf.device,sid.device,rate2) - infered_audio = ( - self.net_g.infer( - feats, p_len, cache_pitch, cache_pitchf, sid, rate - )[0][0, 0] - .data - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid, rate)[0][0, 0] - .data - .float() - ) - t5 = ttime() - logger.info( - "Spent time: fea = %.2fs, index = %.2fs, f0 = %.2fs, model = %.2fs", - t2 - t1, - t3 - t2, - t4 - t3, - t5 - t4, - ) - return infered_audio \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/diffq/base.py b/spaces/EronSamez/RVC_HFmeu/diffq/base.py deleted file mode 100644 index 9bd5276b51fbed3d4b898a45b93479ff19e62a7b..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/diffq/base.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
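# Usage sketch: BaseQuantizer below is an abstract helper, so a concrete
# subclass (e.g. diffq's DiffQuantizer or UniformQuantizer -- names assumed
# here, check the package for the exact classes) is what callers normally
# instantiate. Only methods defined in this file are referenced:
#
#     quantizer = SomeQuantizer(model)          # attaches forward hooks to `model`
#     # ... train or evaluate the model as usual ...
#     state = quantizer.get_quantized_state()   # compact, picklable weights
#     size_mb = quantizer.true_model_size()     # estimated quantized size in MB
#     quantizer.restore_quantized_state(state)  # write quantized weights back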
- -from dataclasses import dataclass -from concurrent import futures -from fnmatch import fnmatch -from functools import partial -import io -import math -from multiprocessing import cpu_count -import typing as tp -import zlib - -import torch - - -class BaseQuantizer: - @dataclass - class _QuantizedParam: - name: str - param: torch.nn.Parameter - module: torch.nn.Module - # If a Parameter is used multiple times, `other` can be used - # to share state between the different Quantizers - other: tp.Optional[tp.Any] - - def __init__(self, model: torch.nn.Module, min_size: float = 0.01, float16: bool = False, - exclude: tp.Optional[tp.List[str]] = [], detect_bound: bool = True): - self.model = model - self.min_size = min_size - self.float16 = float16 - self.exclude = exclude - self.detect_bound = detect_bound - self._quantized = False - self._pre_handle = self.model.register_forward_pre_hook(self._forward_pre_hook) - self._post_handle = self.model.register_forward_hook(self._forward_hook) - - self._quantized_state = None - self._qparams = [] - self._float16 = [] - self._others = [] - self._rnns = [] - - self._saved = [] - - self._find_params() - - def _find_params(self): - min_params = self.min_size * 2**20 // 4 - previous = {} - for module_name, module in self.model.named_modules(): - if isinstance(module, torch.nn.RNNBase): - self._rnns.append(module) - for name, param in list(module.named_parameters(recurse=False)): - full_name = f"{module_name}.{name}" - matched = False - for pattern in self.exclude: - if fnmatch(full_name, pattern) or fnmatch(name, pattern): - matched = True - break - - if param.numel() <= min_params or matched: - if id(param) in previous: - continue - if self.detect_bound: - previous[id(param)] = None - if self.float16: - self._float16.append(param) - else: - self._others.append(param) - else: - qparam = self._register_param(name, param, module, previous.get(id(param))) - if self.detect_bound: - previous[id(param)] = qparam - self._qparams.append(qparam) - - def _register_param(self, name, param, module, other): - return self.__class__._QuantizedParam(name, param, module, other) - - def _forward_pre_hook(self, module, input): - if self.model.training: - self._quantized_state = None - if self._quantized: - self.unquantize() - if self._pre_forward_train(): - self._fix_rnns() - else: - self.quantize() - - def _forward_hook(self, module, input, output): - if self.model.training: - if self._post_forward_train(): - self._fix_rnns(flatten=False) # Hacky, next forward will flatten - - def quantize(self, save=True): - """ - Immediately apply quantization to the model parameters. - If `save` is True, save a copy of the unquantized parameters, that can be - restored with `unquantize()`. - """ - if self._quantized: - return - if save: - self._saved = [qp.param.data.to('cpu', copy=True) - for qp in self._qparams if qp.other is None] - self.restore_quantized_state(self.get_quantized_state()) - self._quantized = True - self._fix_rnns() - - def unquantize(self): - """ - Revert a previous call to `quantize()`. - """ - if not self._quantized: - raise RuntimeError("Can only be called on a quantized model.") - if not self._saved: - raise RuntimeError("Nothing to restore.") - for qparam in self._qparams: - if qparam.other is None: - qparam.param.data[:] = self._saved.pop(0) - assert len(self._saved) == 0 - self._quantized = False - self._fix_rnns() - - def _pre_forward_train(self) -> bool: - """ - Called once before each forward for continuous quantization. 
- Should return True if parameters were changed. - """ - return False - - def _post_forward_train(self) -> bool: - """ - Called once after each forward (to restore state for instance). - Should return True if parameters were changed. - """ - return False - - def _fix_rnns(self, flatten=True): - """ - To be called after quantization happened to fix RNNs. - """ - for rnn in self._rnns: - rnn._flat_weights = [ - (lambda wn: getattr(rnn, wn) if hasattr(rnn, wn) else None)(wn) - for wn in rnn._flat_weights_names] - if flatten: - rnn.flatten_parameters() - - def get_quantized_state(self): - """ - Returns sufficient quantized information to rebuild the model state. - - ..Note:: - To achieve maximum compression, you should compress this with - gzip or other, as quantized weights are not optimally coded! - """ - if self._quantized_state is None: - self._quantized_state = self._get_quantized_state() - return self._quantized_state - - def _get_quantized_state(self): - """ - Actual implementation for `get_quantized_state`. - """ - float16_params = [] - for p in self._float16: - q = p.data.half() - float16_params.append(q) - - return { - "quantized": [self._quantize_param(qparam) for qparam in self._qparams - if qparam.other is None], - "float16": float16_params, - "others": [p.data.clone() for p in self._others], - } - - def _quantize_param(self, qparam: _QuantizedParam) -> tp.Any: - """ - To be overriden. - """ - raise NotImplementedError() - - def _unquantize_param(self, qparam: _QuantizedParam, quantized: tp.Any) -> torch.Tensor: - """ - To be overriden. - """ - raise NotImplementedError() - - def restore_quantized_state(self, state) -> None: - """ - Restore the state of the model from the quantized state. - """ - for p, q in zip(self._float16, state["float16"]): - p.data[:] = q.to(p) - - for p, q in zip(self._others, state["others"]): - p.data[:] = q - - remaining = list(state["quantized"]) - for qparam in self._qparams: - if qparam.other is not None: - # Only unquantize first appearance of nn.Parameter. - continue - quantized = remaining.pop(0) - qparam.param.data[:] = self._unquantize_param(qparam, quantized) - self._fix_rnns() - - def detach(self) -> None: - """ - Detach from the model, removes hooks and anything else. - """ - self._pre_handle.remove() - self._post_handle.remove() - - def model_size(self) -> torch.Tensor: - """ - Returns an estimate of the quantized model size. - """ - total = torch.tensor(0.) - for p in self._float16: - total += 16 * p.numel() - for p in self._others: - total += 32 * p.numel() - return total / 2**20 / 8 # bits to MegaBytes - - def true_model_size(self) -> float: - """ - Return the true quantized model size, in MB, without extra - compression. - """ - return self.model_size().item() - - def compressed_model_size(self, compress_level=-1, num_workers=8) -> float: - """ - Return the compressed quantized model size, in MB. - - Args: - compress_level (int): compression level used with zlib, - see `zlib.compress` for details. - num_workers (int): will split the final big byte representation in that - many chunks processed in parallels. 
- """ - out = io.BytesIO() - torch.save(self.get_quantized_state(), out) - ms = _parallel_compress_len(out.getvalue(), compress_level, num_workers) - return ms / 2 ** 20 - - -def _compress_len(data, compress_level): - return len(zlib.compress(data, level=compress_level)) - - -def _parallel_compress_len(data, compress_level, num_workers): - num_workers = min(cpu_count(), num_workers) - chunk_size = int(math.ceil(len(data) / num_workers)) - chunks = [data[offset:offset + chunk_size] for offset in range(0, len(data), chunk_size)] - with futures.ProcessPoolExecutor(num_workers) as pool: - return sum(pool.map(partial(_compress_len, compress_level=compress_level), chunks)) diff --git a/spaces/FSDL-Fashion/fashion_img_search/fis/similarity_search/milvus/collection.py b/spaces/FSDL-Fashion/fashion_img_search/fis/similarity_search/milvus/collection.py deleted file mode 100644 index 438cbbbe7cf83f7fb07d6570837768838893f204..0000000000000000000000000000000000000000 --- a/spaces/FSDL-Fashion/fashion_img_search/fis/similarity_search/milvus/collection.py +++ /dev/null @@ -1,58 +0,0 @@ -from pymilvus import ( - Collection, - CollectionSchema, - DataType, - FieldSchema, - connections, - utility, -) - -connections.connect(host="127.0.0.1", port="19530") - - -def create_milvus_collection(collection_name: str, dim: int) -> Collection: - """Create a Milvus collection. - - Inspired by https://github.com/milvus-io/bootcamp/blob/master/solutions/reverse_image_search/1_build_image_search_engine.ipynb - - Args: - collection_name: name of the Milvus collection - dim: number of dimentions - - Returns: - Milvus collection - """ - if utility.has_collection(collection_name): - utility.drop_collection(collection_name) - - fields = [ - FieldSchema( - name="id", - dtype=DataType.INT64, - descrition="ids", - is_primary=True, - auto_id=False, - ), - FieldSchema( - name="path", - dtype=DataType.VARCHAR, - descrition="path to image", - max_length=500, - # is_primary=True, - # auto_id=False, - ), - FieldSchema( - name="embedding", - dtype=DataType.FLOAT_VECTOR, - descrition="image embedding vectors", - dim=dim, - ), - ] - - schema = CollectionSchema(fields=fields, description="reverse image search") - collection = Collection(name=collection_name, schema=schema) - - index_params = {"metric_type": "L2", "index_type": "IVF_FLAT", "params": {"nlist": 2048}} - collection.create_index(field_name="embedding", index_params=index_params) - - return collection diff --git a/spaces/FaceOnLive/Face-Recognition-SDK/gradio/demo.py b/spaces/FaceOnLive/Face-Recognition-SDK/gradio/demo.py deleted file mode 100644 index cf9a69c66c814da496a404d7f7e1519d425a15f4..0000000000000000000000000000000000000000 --- a/spaces/FaceOnLive/Face-Recognition-SDK/gradio/demo.py +++ /dev/null @@ -1,114 +0,0 @@ -import gradio as gr -import requests -import json -from PIL import Image - -def compare_face(frame1, frame2): - url = "http://127.0.0.1:8000/api/compare_face" - files = {'image1': open(frame1, 'rb'), 'image2': open(frame2, 'rb')} - - r = requests.post(url=url, files=files) - faces = None - - try: - image1 = Image.open(frame1) - image2 = Image.open(frame2) - - face1 = None - face2 = None - data = r.json().get('data') - if data.get('face1') is not None: - face = data.get('face1') - x1 = face.get('x1') - y1 = face.get('y1') - x2 = face.get('x2') - y2 = face.get('y2') - if x1 < 0: - x1 = 0 - if y1 < 0: - y1 = 0 - if x2 >= image1.width: - x2 = image1.width - 1 - if y2 >= image1.height: - y2 = image1.height - 1 - - face1 = image1.crop((x1, y1, x2, y2)) - 
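            # scale the cropped face to a fixed 150 px height while preserving its
            # aspect ratio, so both faces fit side by side in the preview strip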
face_image_ratio = face1.width / float(face1.height) - resized_w = int(face_image_ratio * 150) - resized_h = 150 - - face1 = face1.resize((int(resized_w), int(resized_h))) - - if data.get('face2') is not None: - face = data.get('face2') - x1 = face.get('x1') - y1 = face.get('y1') - x2 = face.get('x2') - y2 = face.get('y2') - - if x1 < 0: - x1 = 0 - if y1 < 0: - y1 = 0 - if x2 >= image2.width: - x2 = image2.width - 1 - if y2 >= image2.height: - y2 = image2.height - 1 - - face2 = image2.crop((x1, y1, x2, y2)) - face_image_ratio = face2.width / float(face2.height) - resized_w = int(face_image_ratio * 150) - resized_h = 150 - - face2 = face2.resize((int(resized_w), int(resized_h))) - - if face1 is not None and face2 is not None: - new_image = Image.new('RGB',(face1.width + face2.width + 10, 150), (80,80,80)) - - new_image.paste(face1,(0,0)) - new_image.paste(face2,(face1.width + 10, 0)) - faces = new_image.copy() - elif face1 is not None and face2 is None: - new_image = Image.new('RGB',(face1.width + face1.width + 10, 150), (80,80,80)) - - new_image.paste(face1,(0,0)) - faces = new_image.copy() - elif face1 is None and face2 is not None: - new_image = Image.new('RGB',(face2.width + face2.width + 10, 150), (80,80,80)) - - new_image.paste(face2,(face2.width + 10, 0)) - faces = new_image.copy() - except: - pass - - return [r.json(), faces] - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Face Recognition - Get your own Face Recognition Server by duplicating this space.
    - Or run on your own machine using docker.
    - ```docker run -it -p 7860:7860 --platform=linux/amd64 \ - -e LICENSE_KEY="YOUR_VALUE_HERE" \ - registry.hf.space/faceonlive-face-recognition-sdk:latest ```
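    - The demo below posts two images to the SDK's `/api/compare_face` endpoint; as a minimal sketch (not an official client — the host/port are the in-container defaults taken from this demo's code, and the image file names are placeholders), the same call can be made directly with `requests`:<br/>
    - ```python
    - import requests
    -
    - url = "http://127.0.0.1:8000/api/compare_face"  # endpoint this demo calls internally
    - with open("face_a.jpg", "rb") as f1, open("face_b.jpg", "rb") as f2:  # placeholder image paths
    -     result = requests.post(url, files={"image1": f1, "image2": f2}).json()
    - print(result)  # this demo reads bounding boxes from result["data"]["face1"] / ["face2"]
    - ```<br/>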

    - Contact us at https://faceonlive.com for issues and support.
    - """ - ) - with gr.Row(): - with gr.Column(): - compare_face_input1 = gr.Image(type='filepath', height=480) - gr.Examples(['gradio/examples/1.jpg', 'gradio/examples/2.jpg'], - inputs=compare_face_input1) - compare_face_button = gr.Button("Compare Face") - with gr.Column(): - compare_face_input2 = gr.Image(type='filepath', height=480) - gr.Examples(['gradio/examples/3.jpg', 'gradio/examples/4.jpg'], - inputs=compare_face_input2) - with gr.Column(): - compare_face_output = gr.Image(type="pil", height=150) - compare_result_output = gr.JSON(label='Result') - - compare_face_button.click(compare_face, inputs=[compare_face_input1, compare_face_input2], outputs=[compare_result_output, compare_face_output]) - -demo.launch(server_name="0.0.0.0", server_port=7860) \ No newline at end of file diff --git a/spaces/FantasticGNU/AnomalyGPT/utils/loss.py b/spaces/FantasticGNU/AnomalyGPT/utils/loss.py deleted file mode 100644 index 104c80995dc5216ce5cfa1aa44fe570551555c2a..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/utils/loss.py +++ /dev/null @@ -1,117 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from math import exp - -class FocalLoss(nn.Module): - """ - copy from: https://github.com/Hsuxu/Loss_ToolBox-PyTorch/blob/master/FocalLoss/FocalLoss.py - This is a implementation of Focal Loss with smooth label cross entropy supported which is proposed in - 'Focal Loss for Dense Object Detection. (https://arxiv.org/abs/1708.02002)' - Focal_Loss= -1*alpha*(1-pt)*log(pt) - :param alpha: (tensor) 3D or 4D the scalar factor for this criterion - :param gamma: (float,double) gamma > 0 reduces the relative loss for well-classified examples (p>0.5) putting more - focus on hard misclassified example - :param smooth: (float,double) smooth value when cross entropy - :param balance_index: (int) balance class index, should be specific when alpha is float - :param size_average: (bool, optional) By default, the losses are averaged over each loss element in the batch. - """ - - def __init__(self, apply_nonlin=None, alpha=None, gamma=2, balance_index=0, smooth=1e-5, size_average=True): - super(FocalLoss, self).__init__() - self.apply_nonlin = apply_nonlin - self.alpha = alpha - self.gamma = gamma - self.balance_index = balance_index - self.smooth = smooth - self.size_average = size_average - - if self.smooth is not None: - if self.smooth < 0 or self.smooth > 1.0: - raise ValueError('smooth value should be in [0,1]') - - def forward(self, logit, target): - # logit: [B, 2, 224, 224] - # target:[B, 1, 224, 224] - if self.apply_nonlin is not None: - logit = self.apply_nonlin(logit) - # 2 - num_class = logit.shape[1] - - if logit.dim() > 2: - # N,C,d1,d2 -> N,C,m (m=d1*d2*...) 
- # [B, 2, 224*224] - logit = logit.view(logit.size(0), logit.size(1), -1) - # [B, 224*224, 2] - logit = logit.permute(0, 2, 1).contiguous() - # [B*224*224, 2] - logit = logit.view(-1, logit.size(-1)) - target = torch.squeeze(target, 1) - # [B*224*224, 1] - target = target.view(-1, 1) - alpha = self.alpha - - if alpha is None: - alpha = torch.ones(num_class, 1) - elif isinstance(alpha, (list, np.ndarray)): - assert len(alpha) == num_class - alpha = torch.FloatTensor(alpha).view(num_class, 1) - alpha = alpha / alpha.sum() - elif isinstance(alpha, float): - alpha = torch.ones(num_class, 1) - alpha = alpha * (1 - self.alpha) - alpha[self.balance_index] = self.alpha - - else: - raise TypeError('Unsupported alpha type') - - if alpha.device != logit.device: - alpha = alpha.to(logit.device) - - # [B*224*224, 1] - idx = target.cpu().long() - - # [B*224*224, 2] - one_hot_key = torch.FloatTensor(target.size(0), num_class).zero_() - - one_hot_key = one_hot_key.scatter_(1, idx, 1) - if one_hot_key.device != logit.device: - one_hot_key = one_hot_key.to(logit.device) - - if self.smooth: - one_hot_key = torch.clamp( - one_hot_key, self.smooth / (num_class - 1), 1.0 - self.smooth) - pt = (one_hot_key * logit).sum(1) + self.smooth - logpt = pt.log() - - gamma = self.gamma - - alpha = alpha[idx] - alpha = torch.squeeze(alpha) - loss = -1 * alpha * torch.pow((1 - pt), gamma) * logpt - - if self.size_average: - loss = loss.mean() - return loss - - -class BinaryDiceLoss(nn.Module): - def __init__(self): - super(BinaryDiceLoss, self).__init__() - - def forward(self, input, targets): - # Get the batch size N - N = targets.size()[0] - # Smoothing variable - smooth = 1 - # Reshape height and width into a single dimension - input_flat = input.view(N, -1) - targets_flat = targets.view(N, -1) - - # Compute the intersection - intersection = input_flat * targets_flat - N_dice_eff = (2 * intersection.sum(1) + smooth) / (input_flat.sum(1) + targets_flat.sum(1) + smooth) - # Compute the average per-image loss over the batch - loss = 1 - N_dice_eff.sum() / N - return loss \ No newline at end of file diff --git a/spaces/Frorozcol/financIA/src/tokenizer.py b/spaces/Frorozcol/financIA/src/tokenizer.py deleted file mode 100644 index 4e3375c4c130143db678a05de43cba8cedd920f3..0000000000000000000000000000000000000000 --- a/spaces/Frorozcol/financIA/src/tokenizer.py +++ /dev/null @@ -1,14 +0,0 @@ -from transformers import ( - AutoTokenizer, -) - - -def load_tokenizer(model_tokenizer): - """Load the tokenizer""" - return AutoTokenizer.from_pretrained(model_tokenizer) - - -def preprocessing_text(text, tokenizer): - """Tokenize the text""" - return tokenizer.encode_plus(text, max_length=130, pad_to_max_length=True, padding='max_length', truncation=True, return_tensors='pt') - diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/__init__.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/__init__.py deleted file mode 100644 index bffa0271b0451ca134f7de08953bf68a6d91f831..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/__init__.py +++ /dev/null @@ -1,80 +0,0 @@ -"""Ravens tasks.""" - -from cliport.tasks.align_box_corner import AlignBoxCorner -from cliport.tasks.assembling_kits import AssemblingKits -from cliport.tasks.assembling_kits_seq import AssemblingKitsSeq -from cliport.tasks.block_insertion import BlockInsertion -from cliport.tasks.manipulating_rope import ManipulatingRope -from cliport.tasks.align_rope import AlignRope -from cliport.tasks.packing_boxes import PackingBoxes -from cliport.tasks.packing_shapes import PackingShapes -from cliport.tasks.packing_boxes_pairs import PackingBoxesPairs -from 
cliport.tasks.packing_google_objects import PackingSeenGoogleObjectsSeq -from cliport.tasks.palletizing_boxes import PalletizingBoxes -from cliport.tasks.place_red_in_green import PlaceRedInGreen -from cliport.tasks.put_block_in_bowl import PutBlockInBowl -from cliport.tasks.stack_block_pyramid import StackBlockPyramid -from cliport.tasks.stack_block_pyramid_seq import StackBlockPyramidSeq -from cliport.tasks.sweeping_piles import SweepingPiles -from cliport.tasks.separating_piles import SeparatingPiles -from cliport.tasks.task import Task -from cliport.tasks.towers_of_hanoi import TowersOfHanoi -from cliport.tasks.towers_of_hanoi_seq import TowersOfHanoiSeq -from cliport.tasks.generated_task import GeneratedTask -from cliport.tasks.extended_tasks import * - -names = { - # demo conditioned - 'align-box-corner': AlignBoxCorner, - 'assembling-kits': AssemblingKits, - 'assembling-kits-easy': AssemblingKitsEasy, - 'block-insertion': BlockInsertion, - 'block-insertion-easy': BlockInsertionEasy, - 'block-insertion-nofixture': BlockInsertionNoFixture, - 'block-insertion-sixdof': BlockInsertionSixDof, - 'block-insertion-translation': BlockInsertionTranslation, - 'manipulating-rope': ManipulatingRope, - 'packing-boxes': PackingBoxes, - 'palletizing-boxes': PalletizingBoxes, - 'place-red-in-green': PlaceRedInGreen, - 'stack-block-pyramid': StackBlockPyramid, - 'sweeping-piles': SweepingPiles, - 'towers-of-hanoi': TowersOfHanoi, - 'gen-task': GeneratedTask, - - # goal conditioned - 'align-rope': AlignRope, - 'assembling-kits-seq': AssemblingKitsSeq, - 'assembling-kits-seq-seen-colors': AssemblingKitsSeqSeenColors, - 'assembling-kits-seq-unseen-colors': AssemblingKitsSeqUnseenColors, - 'assembling-kits-seq-full': AssemblingKitsSeqFull, - 'packing-shapes': PackingShapes, - 'packing-boxes-pairs': PackingBoxesPairsSeenColors, - 'packing-boxes-pairs-seen-colors': PackingBoxesPairsSeenColors, - 'packing-boxes-pairs-unseen-colors': PackingBoxesPairsUnseenColors, - 'packing-boxes-pairs-full': PackingBoxesPairsFull, - 'packing-seen-google-objects-seq': PackingSeenGoogleObjectsSeq, - 'packing-unseen-google-objects-seq': PackingUnseenGoogleObjectsSeq, - 'packing-seen-google-objects-group': PackingSeenGoogleObjectsGroup, - 'packing-unseen-google-objects-group': PackingUnseenGoogleObjectsGroup, - 'put-block-in-bowl': PutBlockInBowlSeenColors, - 'put-block-in-bowl-seen-colors': PutBlockInBowlSeenColors, - 'put-block-in-bowl-unseen-colors': PutBlockInBowlUnseenColors, - 'put-block-in-bowl-full': PutBlockInBowlFull, - 'stack-block-pyramid-seq': StackBlockPyramidSeqSeenColors, - 'stack-block-pyramid-seq-seen-colors': StackBlockPyramidSeqSeenColors, - 'stack-block-pyramid-seq-unseen-colors': StackBlockPyramidSeqUnseenColors, - 'stack-block-pyramid-seq-full': StackBlockPyramidSeqFull, - 'separating-piles': SeparatingPilesSeenColors, - 'separating-piles-seen-colors': SeparatingPilesSeenColors, - 'separating-piles-unseen-colors': SeparatingPilesUnseenColors, - 'separating-piles-full': SeparatingPilesFull, - 'towers-of-hanoi-seq': TowersOfHanoiSeqSeenColors, - 'towers-of-hanoi-seq-seen-colors': TowersOfHanoiSeqSeenColors, - 'towers-of-hanoi-seq-unseen-colors': TowersOfHanoiSeqUnseenColors, - 'towers-of-hanoi-seq-full': TowersOfHanoiSeqFull, -} - - -from cliport.generated_tasks import new_names -names.update(new_names) \ No newline at end of file diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py deleted file 
mode 100644 index fd4d01d476d77391322aef9d9d5a005adb1f5c15..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py +++ /dev/null @@ -1,59 +0,0 @@ -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your LibriSpeech/TTS datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. Defaults to /SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=None, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted.") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("--datasets_name", type=str, default="LibriSpeech", help=\ - "Name of the dataset directory to process.") - parser.add_argument("--subfolders", type=str, default="train-clean-100, train-clean-360", help=\ - "Comma-separated list of subfolders to process inside your dataset directory") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. 
If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - # Preprocess the dataset - print_args(args, parser) - args.hparams = hparams.parse(args.hparams) - preprocess_dataset(**vars(args)) diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. 
I thought that might be important to get across early.} - -% Just a standard paragraph with citations, rewrite. -%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do. \ No newline at end of file diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/amber_minimize.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/amber_minimize.py deleted file mode 100644 index d3ff9f74218bdcabe0b57d8e0e749814b583edcd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/amber_minimize.py +++ /dev/null @@ -1,543 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -"""Restrained Amber Minimization of a structure.""" - -import io -import time -from typing import Collection, Optional, Sequence - -from absl import logging -from alphafold.common import protein -from alphafold.common import residue_constants -from alphafold.model import folding -from alphafold.relax import cleanup -from alphafold.relax import utils -import ml_collections -import numpy as np -from simtk import openmm -from simtk import unit -from simtk.openmm import app as openmm_app -from simtk.openmm.app.internal.pdbstructure import PdbStructure - - -ENERGY = unit.kilocalories_per_mole -LENGTH = unit.angstroms - - -def will_restrain(atom: openmm_app.Atom, rset: str) -> bool: - """Returns True if the atom will be restrained by the given restraint set.""" - - if rset == "non_hydrogen": - return atom.element.name != "hydrogen" - elif rset == "c_alpha": - return atom.name == "CA" - - -def _add_restraints( - system: openmm.System, - reference_pdb: openmm_app.PDBFile, - stiffness: unit.Unit, - rset: str, - exclude_residues: Sequence[int]): - """Adds a harmonic potential that restrains the system to a structure.""" - assert rset in ["non_hydrogen", "c_alpha"] - - force = openmm.CustomExternalForce( - "0.5 * k * ((x-x0)^2 + (y-y0)^2 + (z-z0)^2)") - force.addGlobalParameter("k", stiffness) - for p in ["x0", "y0", "z0"]: - force.addPerParticleParameter(p) - - for i, atom in enumerate(reference_pdb.topology.atoms()): - if atom.residue.index in exclude_residues: - continue - if will_restrain(atom, rset): - force.addParticle(i, reference_pdb.positions[i]) - logging.info("Restraining %d / %d particles.", - force.getNumParticles(), system.getNumParticles()) - system.addForce(force) - - -def _openmm_minimize( - pdb_str: str, - max_iterations: int, - tolerance: unit.Unit, - stiffness: unit.Unit, - restraint_set: str, - exclude_residues: Sequence[int]): - """Minimize energy via openmm.""" - - pdb_file = io.StringIO(pdb_str) - pdb = openmm_app.PDBFile(pdb_file) - - force_field = openmm_app.ForceField("amber99sb.xml") - constraints = openmm_app.HBonds - system = force_field.createSystem( - pdb.topology, constraints=constraints) - if stiffness > 0 * ENERGY / (LENGTH**2): - _add_restraints(system, pdb, stiffness, restraint_set, exclude_residues) - - integrator = openmm.LangevinIntegrator(0, 0.01, 0.0) - platform = openmm.Platform.getPlatformByName("CPU") - simulation = openmm_app.Simulation( - pdb.topology, system, integrator, platform) - simulation.context.setPositions(pdb.positions) - - ret = {} - state = simulation.context.getState(getEnergy=True, getPositions=True) - ret["einit"] = state.getPotentialEnergy().value_in_unit(ENERGY) - ret["posinit"] = state.getPositions(asNumpy=True).value_in_unit(LENGTH) - simulation.minimizeEnergy(maxIterations=max_iterations, - tolerance=tolerance) - state = simulation.context.getState(getEnergy=True, getPositions=True) - ret["efinal"] = state.getPotentialEnergy().value_in_unit(ENERGY) - ret["pos"] = state.getPositions(asNumpy=True).value_in_unit(LENGTH) - ret["min_pdb"] = _get_pdb_string(simulation.topology, state.getPositions()) - return ret - - -def _get_pdb_string(topology: openmm_app.Topology, positions: unit.Quantity): - """Returns a pdb string provided OpenMM topology and positions.""" - with io.StringIO() as f: - openmm_app.PDBFile.writeFile(topology, positions, f) - return f.getvalue() - - -def _check_cleaned_atoms(pdb_cleaned_string: str, pdb_ref_string: 
str): - """Checks that no atom positions have been altered by cleaning.""" - cleaned = openmm_app.PDBFile(io.StringIO(pdb_cleaned_string)) - reference = openmm_app.PDBFile(io.StringIO(pdb_ref_string)) - - cl_xyz = np.array(cleaned.getPositions().value_in_unit(LENGTH)) - ref_xyz = np.array(reference.getPositions().value_in_unit(LENGTH)) - - for ref_res, cl_res in zip(reference.topology.residues(), - cleaned.topology.residues()): - assert ref_res.name == cl_res.name - for rat in ref_res.atoms(): - for cat in cl_res.atoms(): - if cat.name == rat.name: - if not np.array_equal(cl_xyz[cat.index], ref_xyz[rat.index]): - raise ValueError(f"Coordinates of cleaned atom {cat} do not match " - f"coordinates of reference atom {rat}.") - - -def _check_residues_are_well_defined(prot: protein.Protein): - """Checks that all residues contain non-empty atom sets.""" - if (prot.atom_mask.sum(axis=-1) == 0).any(): - raise ValueError("Amber minimization can only be performed on proteins with" - " well-defined residues. This protein contains at least" - " one residue with no atoms.") - - -def _check_atom_mask_is_ideal(prot): - """Sanity-check the atom mask is ideal, up to a possible OXT.""" - atom_mask = prot.atom_mask - ideal_atom_mask = protein.ideal_atom_mask(prot) - utils.assert_equal_nonterminal_atom_types(atom_mask, ideal_atom_mask) - - -def clean_protein( - prot: protein.Protein, - checks: bool = True): - """Adds missing atoms to Protein instance. - - Args: - prot: A `protein.Protein` instance. - checks: A `bool` specifying whether to add additional checks to the cleaning - process. - - Returns: - pdb_string: A string of the cleaned protein. - """ - _check_atom_mask_is_ideal(prot) - - # Clean pdb. - prot_pdb_string = protein.to_pdb(prot) - pdb_file = io.StringIO(prot_pdb_string) - alterations_info = {} - fixed_pdb = cleanup.fix_pdb(pdb_file, alterations_info) - fixed_pdb_file = io.StringIO(fixed_pdb) - pdb_structure = PdbStructure(fixed_pdb_file) - cleanup.clean_structure(pdb_structure, alterations_info) - - logging.info("alterations info: %s", alterations_info) - - # Write pdb file of cleaned structure. - as_file = openmm_app.PDBFile(pdb_structure) - pdb_string = _get_pdb_string(as_file.getTopology(), as_file.getPositions()) - if checks: - _check_cleaned_atoms(pdb_string, prot_pdb_string) - return pdb_string - - -def make_atom14_positions(prot): - """Constructs denser atom positions (14 dimensions instead of 37).""" - restype_atom14_to_atom37 = [] # mapping (restype, atom14) --> atom37 - restype_atom37_to_atom14 = [] # mapping (restype, atom37) --> atom14 - restype_atom14_mask = [] - - for rt in residue_constants.restypes: - atom_names = residue_constants.restype_name_to_atom14_names[ - residue_constants.restype_1to3[rt]] - - restype_atom14_to_atom37.append([ - (residue_constants.atom_order[name] if name else 0) - for name in atom_names - ]) - - atom_name_to_idx14 = {name: i for i, name in enumerate(atom_names)} - restype_atom37_to_atom14.append([ - (atom_name_to_idx14[name] if name in atom_name_to_idx14 else 0) - for name in residue_constants.atom_types - ]) - - restype_atom14_mask.append([(1. if name else 0.) for name in atom_names]) - - # Add dummy mapping for restype 'UNK'. - restype_atom14_to_atom37.append([0] * 14) - restype_atom37_to_atom14.append([0] * 37) - restype_atom14_mask.append([0.] 
* 14) - - restype_atom14_to_atom37 = np.array(restype_atom14_to_atom37, dtype=np.int32) - restype_atom37_to_atom14 = np.array(restype_atom37_to_atom14, dtype=np.int32) - restype_atom14_mask = np.array(restype_atom14_mask, dtype=np.float32) - - # Create the mapping for (residx, atom14) --> atom37, i.e. an array - # with shape (num_res, 14) containing the atom37 indices for this protein. - residx_atom14_to_atom37 = restype_atom14_to_atom37[prot["aatype"]] - residx_atom14_mask = restype_atom14_mask[prot["aatype"]] - - # Create a mask for known ground truth positions. - residx_atom14_gt_mask = residx_atom14_mask * np.take_along_axis( - prot["all_atom_mask"], residx_atom14_to_atom37, axis=1).astype(np.float32) - - # Gather the ground truth positions. - residx_atom14_gt_positions = residx_atom14_gt_mask[:, :, None] * ( - np.take_along_axis(prot["all_atom_positions"], - residx_atom14_to_atom37[..., None], - axis=1)) - - prot["atom14_atom_exists"] = residx_atom14_mask - prot["atom14_gt_exists"] = residx_atom14_gt_mask - prot["atom14_gt_positions"] = residx_atom14_gt_positions - - prot["residx_atom14_to_atom37"] = residx_atom14_to_atom37 - - # Create the gather indices for mapping back. - residx_atom37_to_atom14 = restype_atom37_to_atom14[prot["aatype"]] - prot["residx_atom37_to_atom14"] = residx_atom37_to_atom14 - - # Create the corresponding mask. - restype_atom37_mask = np.zeros([21, 37], dtype=np.float32) - for restype, restype_letter in enumerate(residue_constants.restypes): - restype_name = residue_constants.restype_1to3[restype_letter] - atom_names = residue_constants.residue_atoms[restype_name] - for atom_name in atom_names: - atom_type = residue_constants.atom_order[atom_name] - restype_atom37_mask[restype, atom_type] = 1 - - residx_atom37_mask = restype_atom37_mask[prot["aatype"]] - prot["atom37_atom_exists"] = residx_atom37_mask - - # As the atom naming is ambiguous for 7 of the 20 amino acids, provide - # alternative ground truth coordinates where the naming is swapped - restype_3 = [ - residue_constants.restype_1to3[res] for res in residue_constants.restypes - ] - restype_3 += ["UNK"] - - # Matrices for renaming ambiguous atoms. - all_matrices = {res: np.eye(14, dtype=np.float32) for res in restype_3} - for resname, swap in residue_constants.residue_atom_renaming_swaps.items(): - correspondences = np.arange(14) - for source_atom_swap, target_atom_swap in swap.items(): - source_index = residue_constants.restype_name_to_atom14_names[ - resname].index(source_atom_swap) - target_index = residue_constants.restype_name_to_atom14_names[ - resname].index(target_atom_swap) - correspondences[source_index] = target_index - correspondences[target_index] = source_index - renaming_matrix = np.zeros((14, 14), dtype=np.float32) - for index, correspondence in enumerate(correspondences): - renaming_matrix[index, correspondence] = 1. - all_matrices[resname] = renaming_matrix.astype(np.float32) - renaming_matrices = np.stack([all_matrices[restype] for restype in restype_3]) - - # Pick the transformation matrices for the given residue sequence - # shape (num_res, 14, 14). - renaming_transform = renaming_matrices[prot["aatype"]] - - # Apply it to the ground truth positions. shape (num_res, 14, 3). 
- alternative_gt_positions = np.einsum("rac,rab->rbc", - residx_atom14_gt_positions, - renaming_transform) - prot["atom14_alt_gt_positions"] = alternative_gt_positions - - # Create the mask for the alternative ground truth (differs from the - # ground truth mask, if only one of the atoms in an ambiguous pair has a - # ground truth position). - alternative_gt_mask = np.einsum("ra,rab->rb", - residx_atom14_gt_mask, - renaming_transform) - - prot["atom14_alt_gt_exists"] = alternative_gt_mask - - # Create an ambiguous atoms mask. shape: (21, 14). - restype_atom14_is_ambiguous = np.zeros((21, 14), dtype=np.float32) - for resname, swap in residue_constants.residue_atom_renaming_swaps.items(): - for atom_name1, atom_name2 in swap.items(): - restype = residue_constants.restype_order[ - residue_constants.restype_3to1[resname]] - atom_idx1 = residue_constants.restype_name_to_atom14_names[resname].index( - atom_name1) - atom_idx2 = residue_constants.restype_name_to_atom14_names[resname].index( - atom_name2) - restype_atom14_is_ambiguous[restype, atom_idx1] = 1 - restype_atom14_is_ambiguous[restype, atom_idx2] = 1 - - # From this create an ambiguous_mask for the given sequence. - prot["atom14_atom_is_ambiguous"] = ( - restype_atom14_is_ambiguous[prot["aatype"]]) - - return prot - - -def find_violations(prot_np: protein.Protein): - """Analyzes a protein and returns structural violation information. - - Args: - prot_np: A protein. - - Returns: - violations: A `dict` of structure components with structural violations. - violation_metrics: A `dict` of violation metrics. - """ - batch = { - "aatype": prot_np.aatype, - "all_atom_positions": prot_np.atom_positions.astype(np.float32), - "all_atom_mask": prot_np.atom_mask.astype(np.float32), - "residue_index": prot_np.residue_index, - } - - batch["seq_mask"] = np.ones_like(batch["aatype"], np.float32) - batch = make_atom14_positions(batch) - - violations = folding.find_structural_violations( - batch=batch, - atom14_pred_positions=batch["atom14_gt_positions"], - config=ml_collections.ConfigDict( - {"violation_tolerance_factor": 12, # Taken from model config. - "clash_overlap_tolerance": 1.5, # Taken from model config. - })) - violation_metrics = folding.compute_violation_metrics( - batch=batch, - atom14_pred_positions=batch["atom14_gt_positions"], - violations=violations, - ) - - return violations, violation_metrics - - -def get_violation_metrics(prot: protein.Protein): - """Computes violation and alignment metrics.""" - structural_violations, struct_metrics = find_violations(prot) - violation_idx = np.flatnonzero( - structural_violations["total_per_residue_violations_mask"]) - - struct_metrics["residue_violations"] = violation_idx - struct_metrics["num_residue_violations"] = len(violation_idx) - struct_metrics["structural_violations"] = structural_violations - return struct_metrics - - -def _run_one_iteration( - *, - pdb_string: str, - max_iterations: int, - tolerance: float, - stiffness: float, - restraint_set: str, - max_attempts: int, - exclude_residues: Optional[Collection[int]] = None): - """Runs the minimization pipeline. - - Args: - pdb_string: A pdb string. - max_iterations: An `int` specifying the maximum number of L-BFGS iterations. - A value of 0 specifies no limit. - tolerance: kcal/mol, the energy tolerance of L-BFGS. - stiffness: kcal/mol A**2, spring constant of heavy atom restraining - potential. - restraint_set: The set of atoms to restrain. - max_attempts: The maximum number of minimization attempts. 
- exclude_residues: An optional list of zero-indexed residues to exclude from - restraints. - - Returns: - A `dict` of minimization info. - """ - exclude_residues = exclude_residues or [] - - # Assign physical dimensions. - tolerance = tolerance * ENERGY - stiffness = stiffness * ENERGY / (LENGTH**2) - - start = time.time() - minimized = False - attempts = 0 - while not minimized and attempts < max_attempts: - attempts += 1 - try: - logging.info("Minimizing protein, attempt %d of %d.", - attempts, max_attempts) - ret = _openmm_minimize( - pdb_string, max_iterations=max_iterations, - tolerance=tolerance, stiffness=stiffness, - restraint_set=restraint_set, - exclude_residues=exclude_residues) - minimized = True - except Exception as e: # pylint: disable=broad-except - logging.info(e) - if not minimized: - raise ValueError(f"Minimization failed after {max_attempts} attempts.") - ret["opt_time"] = time.time() - start - ret["min_attempts"] = attempts - return ret - - -def run_pipeline( - prot: protein.Protein, - stiffness: float, - max_outer_iterations: int = 1, - place_hydrogens_every_iteration: bool = True, - max_iterations: int = 0, - tolerance: float = 2.39, - restraint_set: str = "non_hydrogen", - max_attempts: int = 100, - checks: bool = True, - exclude_residues: Optional[Sequence[int]] = None): - """Run iterative amber relax. - - Successive relax iterations are performed until all violations have been - resolved. Each iteration involves a restrained Amber minimization, with - restraint exclusions determined by violation-participating residues. - - Args: - prot: A protein to be relaxed. - stiffness: kcal/mol A**2, the restraint stiffness. - max_outer_iterations: The maximum number of iterative minimization. - place_hydrogens_every_iteration: Whether hydrogens are re-initialized - prior to every minimization. - max_iterations: An `int` specifying the maximum number of L-BFGS steps - per relax iteration. A value of 0 specifies no limit. - tolerance: kcal/mol, the energy tolerance of L-BFGS. - The default value is the OpenMM default. - restraint_set: The set of atoms to restrain. - max_attempts: The maximum number of minimization attempts per iteration. - checks: Whether to perform cleaning checks. - exclude_residues: An optional list of zero-indexed residues to exclude from - restraints. - - Returns: - out: A dictionary of output values. - """ - - # `protein.to_pdb` will strip any poorly-defined residues so we need to - # perform this check before `clean_protein`. 
- _check_residues_are_well_defined(prot) - pdb_string = clean_protein(prot, checks=checks) - - exclude_residues = exclude_residues or [] - exclude_residues = set(exclude_residues) - violations = np.inf - iteration = 0 - - while violations > 0 and iteration < max_outer_iterations: - ret = _run_one_iteration( - pdb_string=pdb_string, - exclude_residues=exclude_residues, - max_iterations=max_iterations, - tolerance=tolerance, - stiffness=stiffness, - restraint_set=restraint_set, - max_attempts=max_attempts) - prot = protein.from_pdb_string(ret["min_pdb"]) - if place_hydrogens_every_iteration: - pdb_string = clean_protein(prot, checks=True) - else: - pdb_string = ret["min_pdb"] - ret.update(get_violation_metrics(prot)) - ret.update({ - "num_exclusions": len(exclude_residues), - "iteration": iteration, - }) - violations = ret["violations_per_residue"] - exclude_residues = exclude_residues.union(ret["residue_violations"]) - - logging.info("Iteration completed: Einit %.2f Efinal %.2f Time %.2f s " - "num residue violations %d num residue exclusions %d ", - ret["einit"], ret["efinal"], ret["opt_time"], - ret["num_residue_violations"], ret["num_exclusions"]) - iteration += 1 - return ret - - -def get_initial_energies(pdb_strs: Sequence[str], - stiffness: float = 0.0, - restraint_set: str = "non_hydrogen", - exclude_residues: Optional[Sequence[int]] = None): - """Returns initial potential energies for a sequence of PDBs. - - Assumes the input PDBs are ready for minimization, and all have the same - topology. - Allows time to be saved by not pdbfixing / rebuilding the system. - - Args: - pdb_strs: List of PDB strings. - stiffness: kcal/mol A**2, spring constant of heavy atom restraining - potential. - restraint_set: Which atom types to restrain. - exclude_residues: An optional list of zero-indexed residues to exclude from - restraints. - - Returns: - A list of initial energies in the same order as pdb_strs. 
- """ - exclude_residues = exclude_residues or [] - - openmm_pdbs = [openmm_app.PDBFile(PdbStructure(io.StringIO(p))) - for p in pdb_strs] - force_field = openmm_app.ForceField("amber99sb.xml") - system = force_field.createSystem(openmm_pdbs[0].topology, - constraints=openmm_app.HBonds) - stiffness = stiffness * ENERGY / (LENGTH**2) - if stiffness > 0 * ENERGY / (LENGTH**2): - _add_restraints(system, openmm_pdbs[0], stiffness, restraint_set, - exclude_residues) - simulation = openmm_app.Simulation(openmm_pdbs[0].topology, - system, - openmm.LangevinIntegrator(0, 0.01, 0.0), - openmm.Platform.getPlatformByName("CPU")) - energies = [] - for pdb in openmm_pdbs: - try: - simulation.context.setPositions(pdb.positions) - state = simulation.context.getState(getEnergy=True) - energies.append(state.getPotentialEnergy().value_in_unit(ENERGY)) - except Exception as e: # pylint: disable=broad-except - logging.error("Error getting initial energy, returning large value %s", e) - energies.append(unit.Quantity(1e20, ENERGY)) - return energies diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco.py deleted file mode 100644 index 104d6d43bd958d49f75d54965b326ebac29ae330..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://regnetx_12gf', - backbone=dict( - type='RegNet', - arch='regnetx_12gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[224, 448, 896, 2240], - out_channels=256, - num_outs=5)) diff --git a/spaces/Grezz/generate_human_motion/pyrender/pyrender/platforms/osmesa.py b/spaces/Grezz/generate_human_motion/pyrender/pyrender/platforms/osmesa.py deleted file mode 100644 index deaa5ff44031a107883913ae9a18fc425d650f3d..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/pyrender/platforms/osmesa.py +++ /dev/null @@ -1,59 +0,0 @@ -from .base import Platform - - -__all__ = ['OSMesaPlatform'] - - -class OSMesaPlatform(Platform): - """Renders into a software buffer using OSMesa. Requires special versions - of OSMesa to be installed, plus PyOpenGL upgrade. 
- """ - - def __init__(self, viewport_width, viewport_height): - super(OSMesaPlatform, self).__init__(viewport_width, viewport_height) - self._context = None - self._buffer = None - - def init_context(self): - from OpenGL import arrays - from OpenGL.osmesa import ( - OSMesaCreateContextAttribs, OSMESA_FORMAT, - OSMESA_RGBA, OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, OSMESA_CONTEXT_MINOR_VERSION, - OSMESA_DEPTH_BITS - ) - - attrs = arrays.GLintArray.asArray([ - OSMESA_FORMAT, OSMESA_RGBA, - OSMESA_DEPTH_BITS, 24, - OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, 3, - OSMESA_CONTEXT_MINOR_VERSION, 3, - 0 - ]) - self._context = OSMesaCreateContextAttribs(attrs, None) - self._buffer = arrays.GLubyteArray.zeros( - (self.viewport_height, self.viewport_width, 4) - ) - - def make_current(self): - from OpenGL import GL as gl - from OpenGL.osmesa import OSMesaMakeCurrent - assert(OSMesaMakeCurrent( - self._context, self._buffer, gl.GL_UNSIGNED_BYTE, - self.viewport_width, self.viewport_height - )) - - def make_uncurrent(self): - """Make the OpenGL context uncurrent. - """ - pass - - def delete_context(self): - from OpenGL.osmesa import OSMesaDestroyContext - OSMesaDestroyContext(self._context) - self._context = None - self._buffer = None - - def supports_framebuffers(self): - return False diff --git a/spaces/HarlanHong/DaGAN/modules/AdaIN.py b/spaces/HarlanHong/DaGAN/modules/AdaIN.py deleted file mode 100644 index dac8be74031b6543e3fa5da4e5ff48ffdc83a1fe..0000000000000000000000000000000000000000 --- a/spaces/HarlanHong/DaGAN/modules/AdaIN.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch - -def calc_mean_std(feat, eps=1e-5): - # eps is a small value added to the variance to avoid divide-by-zero. - size = feat.size() - assert (len(size) == 4) - N, C = size[:2] - feat_var = feat.view(N, C, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(N, C, 1, 1) - feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1) - return feat_mean, feat_std - -def adaptive_instance_normalization(content_feat, style_feat): - assert (content_feat.size()[:2] == style_feat.size()[:2]) - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - normalized_feat = (content_feat - content_mean.expand( - size)) / content_std.expand(size) - - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - -def _calc_feat_flatten_mean_std(feat): - # takes 3D feat (C, H, W), return mean and std of array within channels - assert (feat.size()[0] == 3) - assert (isinstance(feat, torch.FloatTensor)) - feat_flatten = feat.view(3, -1) - mean = feat_flatten.mean(dim=-1, keepdim=True) - std = feat_flatten.std(dim=-1, keepdim=True) - return feat_flatten, mean, std - -def _mat_sqrt(x): - U, D, V = torch.svd(x) - return torch.mm(torch.mm(U, D.pow(0.5).diag()), V.t()) - -def coral(source, target): - # assume both source and target are 3D array (C, H, W) - # Note: flatten -> f - source_f, source_f_mean, source_f_std = _calc_feat_flatten_mean_std(source) - source_f_norm = (source_f - source_f_mean.expand_as( - source_f)) / source_f_std.expand_as(source_f) - source_f_cov_eye = \ - torch.mm(source_f_norm, source_f_norm.t()) + torch.eye(3) - - target_f, target_f_mean, target_f_std = _calc_feat_flatten_mean_std(target) - target_f_norm = (target_f - target_f_mean.expand_as( - target_f)) / target_f_std.expand_as(target_f) - target_f_cov_eye = \ - torch.mm(target_f_norm, target_f_norm.t()) + torch.eye(3) - - 
source_f_norm_transfer = torch.mm( - _mat_sqrt(target_f_cov_eye), - torch.mm(torch.inverse(_mat_sqrt(source_f_cov_eye)), - source_f_norm) - ) - - source_f_transfer = source_f_norm_transfer * \ - target_f_std.expand_as(source_f_norm) + \ - target_f_mean.expand_as(source_f_norm) - - return source_f_transfer.view(source.size()) \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/README.md deleted file mode 100644 index 62a005e0ec6f15af9015d335e34b45df6ed89b6c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Simultaneous Translation -Examples of simultaneous translation in fairseq -- [English-to-Japanese text-to-text wait-k model](docs/enja-waitk.md) -- [English-to-Germen text-to-text monotonic multihead attention model](docs/ende-mma.md) -- [English-to-Germen speech-to-text simultaneous translation model](../speech_to_text/docs/simulst_mustc_example.md) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/tests/test_text_models.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/tests/test_text_models.py deleted file mode 100644 index 127adfa6337333ba5ae598fcd158956def0d520f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/tests/test_text_models.py +++ /dev/null @@ -1,407 +0,0 @@ -import argparse -import unittest -from typing import Any, Dict - -import torch -from examples.simultaneous_translation.models import ( - transformer_monotonic_attention -) - - -from tests.test_roberta import FakeTask - - -DEFAULT_CONFIG = { - "attention_eps": 1e-6, - "mass_preservation": True, - "noise_type": "flat", - "noise_mean": 0.0, - "noise_var": 1.0, - "energy_bias_init": -2, - "energy_bias": True -} - - -PAD_INDEX = 1 - - -def generate_config(overrides_kv): - new_dict = {key: value for key, value in DEFAULT_CONFIG.items()} - for key, value in overrides_kv.items(): - new_dict[key] = value - return new_dict - - -def make_sample_with_padding(longer_src=False) -> Dict[str, Any]: - tokens_1 = torch.LongTensor( - [ - [2, 10, 11, 12, 13, 14, 15, 10, 11, 12, 13, 14, 15, 2], - [ - 2, 11, 12, 14, 15, 10, 11, 12, 13, 14, 15, 2, - PAD_INDEX, PAD_INDEX - ], - ] - ) - tokens_2 = torch.LongTensor( - [ - [2, 11, 12, 13, 14, 2, PAD_INDEX, PAD_INDEX], - [2, 11, 22, 33, 2, PAD_INDEX, PAD_INDEX, PAD_INDEX] - ] - ) - if longer_src: - src_tokens = tokens_1[:, 1:] - prev_output_tokens = tokens_2 - else: - src_tokens = tokens_2[:, 1:8] - prev_output_tokens = tokens_1 - - src_lengths = src_tokens.ne(PAD_INDEX).sum(dim=1).long() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "prev_output_tokens": prev_output_tokens, - "src_lengths": src_lengths, - }, - "target": prev_output_tokens[:, 1:], - } - return sample - - -def build_transformer_monotonic_attention(**extra_args: Any): - overrides = { - # Use characteristics dimensions - "encoder_embed_dim": 12, - "encoder_ffn_embed_dim": 14, - "decoder_embed_dim": 12, - "decoder_ffn_embed_dim": 14, - # Disable dropout so we have comparable tests. 
- "dropout": 0, - "attention_dropout": 0, - "activation_dropout": 0, - "encoder_layerdrop": 0, - } - overrides.update(extra_args) - # Overrides the defaults from the parser - args = argparse.Namespace(**overrides) - transformer_monotonic_attention.monotonic_tiny_architecture(args) - - torch.manual_seed(0) - task = FakeTask(args) - return ( - transformer_monotonic_attention - .TransformerModelSimulTrans - .build_model(args, task) - ) - - -def expected_alignment_formula( - p_choose, - mass_perservation=True, - padding_mask=None -): - # Online and Linear-Time Attention by Enforcing Monotonic Alignments - # https://arxiv.org/pdf/1704.00784.pdf - # Eq 18, 19 - bsz, tgt_len, src_len = p_choose.size() - alpha = torch.zeros_like(p_choose) - - if padding_mask is not None: - bsz_pad = padding_mask.size(0) - num_heads = int(bsz / bsz_pad) - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz_pad, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - - p_choose = p_choose.masked_fill(padding_mask.unsqueeze(1), 0) - - for bsz_i in range(bsz): - for i in range(tgt_len): - for j in range(src_len): - if i == 0: - if j == 0: - # First source token - alpha[bsz_i, i, j] = p_choose[bsz_i, i, j] - else: - # First target token - alpha[bsz_i, i, j] = ( - p_choose[bsz_i, i, j] - * torch.prod( - 1 - p_choose[bsz_i, i, :j] - ) - ) - else: - alpha[bsz_i, i, j] = alpha[bsz_i, i - 1, j] - for k in range(j): - alpha[bsz_i, i, j] += ( - alpha[bsz_i, i - 1, k] - * torch.prod( - 1 - p_choose[bsz_i, i, k:j] - ) - ) - alpha[bsz_i, i, j] *= p_choose[bsz_i, i, j] - - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0) - - if mass_perservation: - alpha = mass_perservation_formula(alpha, False, padding_mask) - - return alpha - - -def mass_perservation_formula(alpha, left_padding=False, padding_mask=None): - if padding_mask is None or alpha.size(-1) == 1: - if alpha.size(-1) > 1: - alpha[:, :, -1] = 1 - alpha[:, :, :-1].sum(dim=-1) - return alpha - - src_lens = (padding_mask.logical_not()).sum(dim=1).long() - - bsz, tgt_len, src_len = alpha.size() - - assert ( - not left_padding - or (left_padding and (not padding_mask[:, 0].any())) - ) - - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0) - - for bsz_i in range(bsz): - if left_padding: - alpha[bsz_i, :, -1] = ( - 1 - alpha[bsz_i, :, :-1].sum(dim=-1) - ) - else: - alpha[bsz_i, :, src_lens[bsz_i] - 1] = ( - 1 - alpha[bsz_i, :, :src_lens[bsz_i] - 1].sum(dim=-1) - ) - - return alpha - - -def expected_soft_attention_formula( - alpha, - soft_energy, - padding_mask=None, - chunksize=1e10, -): - # Monotonic Infinite Lookback Attention for Simultaneous Machine Translation - # https://arxiv.org/pdf/1906.05218.pdf - # Eq 14 - - # Monotonic Chunkwise Attention - # https://arxiv.org/abs/1712.05382 - # Eq 17 - bsz, tgt_len, src_len = alpha.size() - beta = torch.zeros_like(alpha) - - if padding_mask is not None: - bsz_pad = padding_mask.size(0) - num_heads = int(bsz / bsz_pad) - # Expanding for potential head dimension - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz_pad, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - soft_energy = soft_energy.masked_fill(padding_mask.unsqueeze(1), float('-inf')) - - for bsz_i in range(bsz): - for i in range(tgt_len): - for j in range(src_len): - for k in range(j, min([src_len, j + chunksize])): - if not padding_mask[bsz_i, j]: - beta[bsz_i, i, j] += ( - alpha[bsz_i, i, k] * torch.exp(soft_energy[bsz_i, i, j]) - / torch.sum(torch.exp(soft_energy[bsz_i, i, max([0, k - chunksize + 1]):k + 1])) - ) 
- return beta - - -class MonotonicAttentionTestAbstractClass(object): - def test_forward(self): - sample = make_sample_with_padding() - out, _ = self.model.forward(**sample["net_input"]) - loss = out.sum() - loss.backward() - - def test_p_choose(self): - sample = make_sample_with_padding() - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - self.assertTrue(p_choose.le(1.0).all()) - self.assertTrue(p_choose.ge(0.0).all()) - - def test_expected_alignment(self): - for longer_src in [True, False]: - sample = make_sample_with_padding(longer_src) - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - alpha_system = item["alpha"] - self.assertTrue(p_choose.size() == alpha_system.size()) - bsz, num_head, tgt_len, src_len = alpha_system.size() - alpha_system = alpha_system.view(-1, tgt_len, src_len) - p_choose = p_choose.view(-1, tgt_len, src_len) - - alpha_real = expected_alignment_formula( - p_choose, - self.model.decoder.layers[0].encoder_attn.mass_preservation, - sample["net_input"]["src_tokens"].eq(PAD_INDEX) - ) - - self.assertTrue( - torch.abs(alpha_system - alpha_real).le(5e-5).all(), - ) - - -class HardMonotonicAttentionTestCase( - unittest.TestCase, - MonotonicAttentionTestAbstractClass -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config({"simul_type": "hard_aligned"}) - ) - - -class InfiniteLookbackTestCase( - unittest.TestCase, - MonotonicAttentionTestAbstractClass -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "infinite_lookback" - } - ) - ) - self.model.train() - - def test_fp16_for_long_input(self): - sample = { - "net_input": { - "src_tokens": torch.LongTensor([7] * 1000 + [2]).cuda().unsqueeze(0), - "prev_output_tokens": torch.LongTensor([7] * 1000 + [2]).cuda().unsqueeze(0), - "src_lengths": torch.LongTensor([1000]).cuda(), - }, - "target": torch.LongTensor([2] + [7] * 1000).unsqueeze(0).cuda() - } - self.model.cuda().half() - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - for key in ["p_choose", "alpha", "beta", "soft_energy"]: - self.assertFalse(torch.isnan(item[key]).any()) - - def test_expected_attention(self): - for longer_src in [True, False]: - sample = make_sample_with_padding(longer_src) - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - alpha_system = item["alpha"] - beta_system = item["beta"] - soft_energy_system = item["soft_energy"] - self.assertTrue(beta_system.size() == alpha_system.size()) - self.assertTrue(p_choose.size() == alpha_system.size()) - - bsz, num_head, tgt_len, src_len = alpha_system.size() - - alpha_system = alpha_system.view(-1, tgt_len, src_len) - beta_system = beta_system.view(-1, tgt_len, src_len) - p_choose = p_choose.view(-1, tgt_len, src_len) - soft_energy_system = soft_energy_system.view(-1, tgt_len, src_len) - - alpha_real = expected_alignment_formula( - p_choose, - self.model.decoder.layers[0].encoder_attn.mass_preservation, - sample["net_input"]["src_tokens"].eq(PAD_INDEX) - ) - - beta_real = expected_soft_attention_formula( - alpha_real, - soft_energy_system, - sample["net_input"]["src_tokens"].eq(PAD_INDEX), - chunksize=getattr( - self.model.decoder.layers[0].encoder_attn, - "chunk_size", - int(1e10) - ) - ) - - self.assertTrue( - torch.abs(beta_system - 
beta_real).le(1e-5).all(), - ) - - -class ChunkwiswTestCase( - InfiniteLookbackTestCase -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "chunkwise", - "mocha_chunk_size": 3 - } - ) - ) - - -class WaitkTestCase(InfiniteLookbackTestCase): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "waitk", - "waitk_lagging": 3, - } - ) - ) - - def check_waitk(self, p_choose, lagging, padding_mask): - bsz, tgt_len, src_len = p_choose.size() - for bsz_i in range(bsz): - for i in range(tgt_len): - for j in range(src_len): - if not padding_mask[bsz_i, j]: - if j - i == lagging - 1: - self.assertTrue(p_choose[bsz_i, i, j] == 1) - else: - self.assertTrue(p_choose[bsz_i, i, j] == 0) - - def test_waitk_p_choose(self): - for longer_src in [True, False]: - for k in [1, 3, 10, 20, 100]: - sample = make_sample_with_padding(longer_src) - model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "waitk", - "waitk_lagging": k, - } - ) - ) - model.train() - _, extra_out = model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - bsz, num_heads, tgt_len, src_len = p_choose.size() - padding_mask = sample["net_input"]["src_tokens"].eq(PAD_INDEX) - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - p_choose = p_choose.view(bsz * num_heads, tgt_len, src_len) - self.check_waitk(p_choose, k, padding_mask) diff --git a/spaces/Highway/infrastructure-cost-data-classifier/app.py b/spaces/Highway/infrastructure-cost-data-classifier/app.py deleted file mode 100644 index f83aaf3f5c859ca25e21a3d02527becfb71d794e..0000000000000000000000000000000000000000 --- a/spaces/Highway/infrastructure-cost-data-classifier/app.py +++ /dev/null @@ -1,334 +0,0 @@ -import streamlit as st -# import inflect -from transformers import AutoTokenizer, AutoModelForSequenceClassification -import torch -import string -import plotly.express as px -import pandas as pd -import nltk -from nltk.tokenize import sent_tokenize -nltk.download('punkt') - -# Note - USE "VBA_venv" environment in the local github folder - -punctuations = string.punctuation - -def prep_text(text): - # function for preprocessing text - - # remove trailing characters (\s\n) and convert to lowercase - clean_sents = [] # append clean con sentences - sent_tokens = sent_tokenize(str(text)) - for sent_token in sent_tokens: - word_tokens = [str(word_token).strip().lower() for word_token in sent_token.split()] - word_tokens = [word_token for word_token in word_tokens if word_token not in punctuations] - clean_sents.append(' '.join((word_tokens))) - joined = ' '.join(clean_sents).strip(' ') - return joined - - -# model name or path to model -checkpoint_1 = "Highway/SubCat" - -checkpoint_2 = "Highway/ExtraOver" - -checkpoint_3 = "Highway/Conversion" - - -@st.cache(allow_output_mutation=True) -def load_model_1(): - return AutoModelForSequenceClassification.from_pretrained(checkpoint_1) - - -@st.cache(allow_output_mutation=True) -def load_tokenizer_1(): - return AutoTokenizer.from_pretrained(checkpoint_1) - - -@st.cache(allow_output_mutation=True) -def load_model_2(): - return AutoModelForSequenceClassification.from_pretrained(checkpoint_2) - - -@st.cache(allow_output_mutation=True) -def load_tokenizer_2(): - return AutoTokenizer.from_pretrained(checkpoint_2) - - -@st.cache(allow_output_mutation=True) -def 
load_model_3(): - return AutoModelForSequenceClassification.from_pretrained(checkpoint_3) - - -@st.cache(allow_output_mutation=True) -def load_tokenizer_3(): - return AutoTokenizer.from_pretrained(checkpoint_3) - - -st.set_page_config( - page_title="Cost Data Classifier", layout= "wide", initial_sidebar_state="auto", page_icon="💷" -) - -st.title("🚦 AI Infrastructure Cost Data Classifier") -# st.header("") - -with st.expander("About this app", expanded=False): - st.write( - """ - - Artificial Intelligence (AI) and Machine learning (ML) tool for automatic classification of infrastructure cost data for benchmarking - - Classifies cost descriptions from documents such as Bills of Quantities (BOQs) and Schedule of Rates - - Can be trained to classify granular and itemised cost descriptions into any predefined categories for benchmarking - - Contact research team to discuss your data structures and suitability for the app - - It is best to use this app on a laptop or desktop computer - """ - ) - - -st.markdown("##### Description") -with st.form(key="my_form"): - Text_entry = st.text_area( - "Paste or type infrastructure cost description in the text box below (i.e., input)" - ) - submitted = st.form_submit_button(label="👉 Get SubCat and ExtraOver!") - -if submitted: - - # First prediction - - label_list_1 = [ - 'Arrow, Triangle, Circle, Letter, Numeral, Symbol and Sundries', - 'Binder', - 'Cable', - 'Catman Other Adjustment', - 'Cold Milling', - 'Disposal of Acceptable/Unacceptable Material', - 'Drain/Service Duct In Trench', - 'Erection & Dismantling of Temporary Accommodation/Facilities (All Types)', - 'Excavate And Replace Filter Material/Recycle Filter Material', - 'Excavation', - 'General TM Item', - 'Information boards', - 'Joint/Termination', - 'Line, Ancillary Line, Solid Area', - 'Loop Detector Installation', - 'Minimum Lining Visit Charge', - 'Node Marker', - 'PCC Kerb', - 'Provision of Mobile Welfare Facilities', - 'Removal of Deformable Safety Fence', - 'Removal of Line, Ancillary Line, Solid Area', - 'Removal of Traffic Sign and post(s)', - 'Road Stud', - 'Safety Barrier Or Bifurcation (Non-Concrete)', - 'Servicing of Temporary Accommodation/Facilities (All Types) (day)', - 'Tack Coat', - 'Temporary Road Markings', - 'Thin Surface Course', - 'Traffic Sign - Unknown specification', - 'Vegetation Clearance/Weed Control (m2)', - 'Others' - ] - - if Text_entry == "": - st.warning( - """This app needs text input to generate predictions. 
Kindly type or paste text into - the above **"Text Input"** box""", - icon="⚠️" - ) - - elif Text_entry != "": - - joined_clean_sents = prep_text(Text_entry) - - # tokenize - tokenizer_1 = load_tokenizer_1() - tokenized_text_1 = tokenizer_1(joined_clean_sents, return_tensors="pt") - - # predict - model_1 = load_model_1() - text_logits_1 = model_1(**tokenized_text_1).logits - predictions_1 = torch.softmax(text_logits_1, dim=1).tolist()[0] - predictions_1 = [round(a, 3) for a in predictions_1] - - # dictionary with label as key and percentage as value - pred_dict_1 = (dict(zip(label_list_1, predictions_1))) - - # sort 'pred_dict' by value and index the highest at [0] - sorted_preds_1 = sorted(pred_dict_1.items(), key=lambda x: x[1], reverse=True) - - # Make dataframe for plotly bar chart - u_1, v_1 = zip(*sorted_preds_1) - x_1 = list(u_1) - y_1 = list(v_1) - df2 = pd.DataFrame() - df2['SubCatName'] = x_1 - df2['Likelihood'] = y_1 - - - # Second prediction - - label_list_2 = ["False", "True"] - - joined_clean_sents = prep_text(Text_entry) - - # tokenize - tokenizer_2 = load_tokenizer_2() - tokenized_text_2 = tokenizer_2(joined_clean_sents, return_tensors="pt") - - # predict - model_2 = load_model_2() - text_logits_2 = model_2(**tokenized_text_2).logits - predictions_2 = torch.softmax(text_logits_2, dim=1).tolist()[0] - predictions_2 = [round(a_, 3) for a_ in predictions_2] - - # dictionary with label as key and percentage as value - pred_dict_2 = (dict(zip(label_list_2, predictions_2))) - - # sort 'pred_dict' by value and index the highest at [0] - sorted_preds_2 = sorted(pred_dict_2.items(), key=lambda x: x[1], reverse=True) - - # Make dataframe for plotly bar chart - u_2, v_2 = zip(*sorted_preds_2) - x_2 = list(u_2) - y_2 = list(v_2) - df3 = pd.DataFrame() - df3['ExtraOver'] = x_2 - df3['Likelihood'] = y_2 - - - # Third prediction - - label_list_3 = ['0.04', '0.045', '0.05', '0.1', '0.15', '0.2', '1.0', '7.0', '166.67', 'Others'] - - joined_clean_sents = prep_text(Text_entry) - - # tokenize - tokenizer_3 = load_tokenizer_3() - tokenized_text_3 = tokenizer_3(joined_clean_sents, return_tensors="pt") - - # predict - model_3 = load_model_3() - text_logits_3 = model_3(**tokenized_text_3).logits - predictions_3 = torch.softmax(text_logits_3, dim=1).tolist()[0] - predictions_3 = [round(a_, 3) for a_ in predictions_3] - - # dictionary with label as key and percentage as value - pred_dict_3 = (dict(zip(label_list_3, predictions_3))) - - # sort 'pred_dict' by value and index the highest at [0] - sorted_preds_3 = sorted(pred_dict_3.items(), key=lambda x: x[1], reverse=True) - - # Make dataframe for plotly bar chart - u_3, v_3 = zip(*sorted_preds_3) - x_3 = list(u_3) - y_3 = list(v_3) - df4 = pd.DataFrame() - df4['Conversion_factor'] = x_3 - df4['Likelihood'] = y_3 - - - st.empty() - - tab1, tab2, tab3, tab4 = st.tabs(["Subcategory", "Extra Over", "Conversion Factor", "Summary"]) - - with tab1: - st.header("SubCatName") - # plot graph of predictions - fig = px.bar(df2, x="Likelihood", y="SubCatName", orientation="h") - - fig.update_layout( - # barmode='stack', - template='ggplot2', - font=dict( - family="Arial", - size=14, - color="black" - ), - autosize=False, - width=900, - height=1000, - xaxis_title="Likelihood of SubCatName", - yaxis_title="SubCatNames", - # legend_title="Topics" - ) - - fig.update_xaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_yaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_annotations(font_size=14) # 
this changes y_axis, x_axis and subplot title font sizes - - # Plot - st.plotly_chart(fig, use_container_width=False) - - with tab2: - st.header("ExtraOver") - # plot graph of predictions - fig = px.bar(df3, x="Likelihood", y="ExtraOver", orientation="h") - - fig.update_layout( - # barmode='stack', - template='ggplot2', - font=dict( - family="Arial", - size=14, - color="black" - ), - autosize=False, - width=500, - height=200, - xaxis_title="Likelihood of ExtraOver", - yaxis_title="ExtraOver", - # legend_title="Topics" - ) - - fig.update_xaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_yaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_annotations(font_size=14) # this changes y_axis, x_axis and subplot title font sizes - - # Plot - st.plotly_chart(fig, use_container_width=False) - - with tab3: - st.header("Conversion_factor") - # plot graph of predictions - fig = px.bar(df4, x="Likelihood", y="Conversion_factor", orientation="h") - - fig.update_layout( - # barmode='stack', - template='ggplot2', - font=dict( - family="Arial", - size=14, - color="black" - ), - autosize=False, - width=500, - height=500, - xaxis_title="Likelihood of Conversion_factor", - yaxis_title="Conversion_factor", - # legend_title="Topics" - ) - - fig.update_xaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_yaxes(tickangle=0, tickfont=dict(family='Arial', color='black', size=14)) - fig.update_annotations(font_size=14) # this changes y_axis, x_axis and subplot title font sizes - - # Plot - st.plotly_chart(fig, use_container_width=False) - - with tab4: - # subcatNames - st.header("") - predicted_1 = st.metric("Predicted SubCatName", sorted_preds_1[0][0]) - Prediction_confidence_1 = st.metric("Prediction confidence", (str(round(sorted_preds_1[0][1] * 100, 1)) + "%")) - - #ExtraOver - st.header("") - predicted_2 = st.metric("Predicted ExtraOver", sorted_preds_2[0][0]) - Prediction_confidence_2 = st.metric("Prediction confidence", (str(round(sorted_preds_2[0][1] * 100, 1)) + "%")) - - # Conversion_factor - st.header("") - predicted_3 = st.metric("Predicted Conversion_factor", sorted_preds_3[0][0]) - Prediction_confidence_3 = st.metric("Prediction confidence", (str(round(sorted_preds_3[0][1] * 100, 1)) + "%")) - - st.success("Great! Predictions successfully completed. 
", icon="✅") \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/my3d.py b/spaces/Hoodady/3DFuse/my3d.py deleted file mode 100644 index eafc5e7e104c7ccb70c38fbb3df7cba8dcf64105..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/my3d.py +++ /dev/null @@ -1,161 +0,0 @@ -# some tools developed for the vision class -import numpy as np -from numpy import cross, tan -from numpy.linalg import norm, inv - - - -def normalize(v): - return v / norm(v) - - -def camera_pose(eye, front, up): - z = normalize(-1 * front) - x = normalize(cross(up, z)) - y = normalize(cross(z, x)) - - # convert to col vector - x = x.reshape(-1, 1) - y = y.reshape(-1, 1) - z = z.reshape(-1, 1) - eye = eye.reshape(-1, 1) - - pose = np.block([ - [x, y, z, eye], - [0, 0, 0, 1] - ]) - return pose - - -def compute_extrinsics(eye, front, up): - pose = camera_pose(eye, front, up) - world_2_cam = inv(pose) - return world_2_cam - - -def compute_intrinsics(aspect_ratio, fov, img_height_in_pix): - # aspect ratio is w / h - ndc = compute_proj_to_normalized(aspect_ratio, fov) - - # anything beyond [-1, 1] should be discarded - # this did not mention how to do z-clipping; - - ndc_to_img = compute_normalized_to_img_trans(aspect_ratio, img_height_in_pix) - intrinsic = ndc_to_img @ ndc - return intrinsic - - -def compute_proj_to_normalized(aspect, fov): - # compared to standard OpenGL NDC intrinsic, - # this skips the 3rd row treatment on z. hence the name partial_ndc - fov_in_rad = fov / 180 * np.pi - t = tan(fov_in_rad / 2) # tan half fov - partial_ndc_intrinsic = np.array([ - [1 / (t * aspect), 0, 0, 0], - [0, 1 / t, 0, 0], - [0, 0, -1, 0] # copy the negative distance for division - ]) - return partial_ndc_intrinsic - - -def compute_normalized_to_img_trans(aspect, img_height_in_pix): - img_h = img_height_in_pix - img_w = img_height_in_pix * aspect - - # note the OpenGL convention that (0, 0) sits at the center of the pixel; - # hence the extra -0.5 translation - # this is useful when you shoot rays through a pixel to the scene - ndc_to_img = np.array([ - [img_w / 2, 0, img_w / 2 - 0.5], - [0, img_h / 2, img_h / 2 - 0.5], - [0, 0, 1] - ]) - - img_y_coord_flip = np.array([ - [1, 0, 0], - [0, -1, img_h - 1], # note the -1 - [0, 0, 1] - ]) - - # the product of the above 2 matrices is equivalent to adding - # - sign to the (1, 1) entry - # you could have simply written - # ndc_to_img = np.array([ - # [img_w / 2, 0, img_w / 2 - 0.5], - # [0, -img_h / 2, img_h / 2 - 0.5], - # [0, 0, 1] - # ]) - - ndc_to_img = img_y_coord_flip @ ndc_to_img - return ndc_to_img - - -def unproject(K, pixel_coords, depth=1.0): - """sometimes also referred to as backproject - pixel_coords: [n, 2] pixel locations - depth: [n,] or [,] depth value. 
of a shape that is broadcastable with pix coords - """ - K = K[0:3, 0:3] - - pixel_coords = as_homogeneous(pixel_coords) - pixel_coords = pixel_coords.T # [2+1, n], so that mat mult is on the left - - # this will give points with z = -1, which is exactly what you want since - # your camera is facing the -ve z axis - pts = inv(K) @ pixel_coords - - pts = pts * depth # [3, n] * [n,] broadcast - pts = pts.T - pts = as_homogeneous(pts) - return pts - - -""" -these two functions are changed so that they can handle arbitrary number of -dimensions >=1 -""" - - -def homogenize(pts): - # pts: [..., d], where last dim of the d is the diviser - *front, d = pts.shape - pts = pts / pts[..., -1].reshape(*front, 1) - return pts - - -def as_homogeneous(pts, lib=np): - # pts: [..., d] - *front, d = pts.shape - points = lib.ones((*front, d + 1)) - points[..., :d] = pts - return points - - -def simple_point_render(pts, img_w, img_h, fov, eye, front, up): - """ - pts: [N, 3] - """ - canvas = np.ones((img_h, img_w, 3)) - - pts = as_homogeneous(pts) - - E = compute_extrinsics(eye, front, up) - world_2_ndc = compute_proj_to_normalized(img_w / img_h, fov) - ndc_to_img = compute_normalized_to_img_trans(img_w / img_h, img_h) - - pts = pts @ E.T - pts = pts @ world_2_ndc.T - pts = homogenize(pts) - - # now filter out outliers beyond [-1, 1] - outlier_mask = (np.abs(pts) > 1.0).any(axis=1) - pts = pts[~outlier_mask] - - pts = pts @ ndc_to_img.T - - # now draw each point - pts = np.rint(pts).astype(np.int32) - xs, ys, _ = pts.T - canvas[ys, xs] = (1, 0, 0) - - return canvas diff --git a/spaces/Hoodady/3DFuse/voxnerf/pipelines.py b/spaces/Hoodady/3DFuse/voxnerf/pipelines.py deleted file mode 100644 index 9c2d7e5f311fa53146a6a34c2766d649775dc82c..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/voxnerf/pipelines.py +++ /dev/null @@ -1,223 +0,0 @@ -import numpy as np -import torch -import imageio - -from my.utils.tqdm import tqdm -from my.utils.event import EventStorage, read_stats, get_event_storage -from my.utils.heartbeat import HeartBeat, get_heartbeat -from my.utils.debug import EarlyLoopBreak - -from .utils import PSNR, Scrambler, every, at -from .data import load_blender -from .render import ( - as_torch_tsrs, scene_box_filter, render_ray_bundle, render_one_view, rays_from_img -) -from .vis import vis, stitch_vis - - -device_glb = torch.device("cuda") - - -def all_train_rays(scene): - imgs, K, poses = load_blender("train", scene) - num_imgs = len(imgs) - ro, rd, rgbs = [], [], [] - for i in tqdm(range(num_imgs)): - img, pose = imgs[i], poses[i] - H, W = img.shape[:2] - _ro, _rd = rays_from_img(H, W, K, pose) - ro.append(_ro) - rd.append(_rd) - rgbs.append(img.reshape(-1, 3)) - - ro, rd, rgbs = [ - np.concatenate(xs, axis=0) for xs in (ro, rd, rgbs) - ] - return ro, rd, rgbs - - -class OneTestView(): - def __init__(self, scene): - imgs, K, poses = load_blender("test", scene) - self.imgs, self.K, self.poses = imgs, K, poses - self.i = 0 - - def render(self, model): - i = self.i - img, K, pose = self.imgs[i], self.K, self.poses[i] - with torch.no_grad(): - aabb = model.aabb.T.cpu().numpy() - H, W = img.shape[:2] - rgbs, depth = render_one_view(model, aabb, H, W, K, pose) - psnr = PSNR.psnr(img, rgbs) - - self.i = (self.i + 1) % len(self.imgs) - - return img, rgbs, depth, psnr - - -def train( - model, n_epoch=2, bs=4096, lr=0.02, scene="lego" -): - fuse = EarlyLoopBreak(500) - - aabb = model.aabb.T.numpy() - model = model.to(device_glb) - optim = torch.optim.Adam(model.parameters(), lr=lr) - - 
test_view = OneTestView(scene) - all_ro, all_rd, all_rgbs = all_train_rays(scene) - print(n_epoch, len(all_ro), bs) - with tqdm(total=(n_epoch * len(all_ro) // bs)) as pbar, \ - HeartBeat(pbar) as hbeat, EventStorage() as metric: - - ro, rd, t_min, t_max, intsct_inds = scene_box_filter(all_ro, all_rd, aabb) - rgbs = all_rgbs[intsct_inds] - print(len(ro)) - for epc in range(n_epoch): - n = len(ro) - scrambler = Scrambler(n) - ro, rd, t_min, t_max, rgbs = scrambler.apply(ro, rd, t_min, t_max, rgbs) - - num_batch = int(np.ceil(n / bs)) - for i in range(num_batch): - if fuse.on_break(): - break - s = i * bs - e = min(n, s + bs) - - optim.zero_grad() - _ro, _rd, _t_min, _t_max, _rgbs = as_torch_tsrs( - model.device, ro[s:e], rd[s:e], t_min[s:e], t_max[s:e], rgbs[s:e] - ) - pred, _, _ = render_ray_bundle(model, _ro, _rd, _t_min, _t_max) - loss = ((pred - _rgbs) ** 2).mean() - loss.backward() - optim.step() - - pbar.update() - - psnr = PSNR.psnr_from_mse(loss.item()) - metric.put_scalars(psnr=psnr, d_scale=model.d_scale.item()) - - if every(pbar, step=50): - pbar.set_description(f"TRAIN: psnr {psnr:.2f}") - - if every(pbar, percent=1): - gimg, rimg, depth, psnr = test_view.render(model) - pane = vis( - gimg, rimg, depth, - msg=f"psnr: {psnr:.2f}", return_buffer=True - ) - metric.put_artifact( - "vis", ".png", lambda fn: imageio.imwrite(fn, pane) - ) - - if at(pbar, percent=30): - model.make_alpha_mask() - - if every(pbar, percent=35): - target_xyz = (model.grid_size * 1.328).int().tolist() - model.resample(target_xyz) - optim = torch.optim.Adam(model.parameters(), lr=lr) - print(f"resamp the voxel to {model.grid_size}") - - curr_lr = update_lr(pbar, optim, lr) - metric.put_scalars(lr=curr_lr) - - metric.step() - hbeat.beat() - - metric.put_artifact( - "ckpt", ".pt", lambda fn: torch.save(model.state_dict(), fn) - ) - # metric.step(flush=True) # no need to flush since the test routine directly takes the model - - metric.put_artifact( - "train_seq", ".mp4", - lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "vis")[1]) - ) - - with EventStorage("test"): - final_psnr = test(model, scene) - metric.put("test_psnr", final_psnr) - - metric.step() - - hbeat.done() - - -def update_lr(pbar, optimizer, init_lr): - i, N = pbar.n, pbar.total - factor = 0.1 ** (1 / N) - lr = init_lr * (factor ** i) - for param_group in optimizer.param_groups: - param_group['lr'] = lr - return lr - - -def last_ckpt(): - ts, ckpts = read_stats("./", "ckpt") - if len(ckpts) > 0: - fname = ckpts[-1] - last = torch.load(fname, map_location="cpu") - print(f"loaded ckpt from iter {ts[-1]}") - return last - - -def __evaluate_ckpt(model, scene): - # this is for external script that needs to evaluate an checkpoint - # currently not used - metric = get_event_storage() - - state = last_ckpt() - if state is not None: - model.load_state_dict(state) - model.to(device_glb) - - with EventStorage("test"): - final_psnr = test(model, scene) - metric.put("test_psnr", final_psnr) - - -def test(model, scene): - fuse = EarlyLoopBreak(5) - metric = get_event_storage() - hbeat = get_heartbeat() - - aabb = model.aabb.T.cpu().numpy() - model = model.to(device_glb) - - imgs, K, poses = load_blender("test", scene) - num_imgs = len(imgs) - - stats = [] - - for i in (pbar := tqdm(range(num_imgs))): - if fuse.on_break(): - break - - img, pose = imgs[i], poses[i] - H, W = img.shape[:2] - rgbs, depth = render_one_view(model, aabb, H, W, K, pose) - psnr = PSNR.psnr(img, rgbs) - - stats.append(psnr) - metric.put_scalars(psnr=psnr) - 
pbar.set_description(f"TEST: mean psnr {np.mean(stats):.2f}") - - plot = vis(img, rgbs, depth, msg=f"PSNR: {psnr:.2f}", return_buffer=True) - metric.put_artifact("test_vis", ".png", lambda fn: imageio.imwrite(fn, plot)) - metric.step() - hbeat.beat() - - metric.put_artifact( - "test_seq", ".mp4", - lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "test_vis")[1]) - ) - - final_psnr = np.mean(stats) - metric.put("final_psnr", final_psnr) - metric.step() - - return final_psnr diff --git a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/assets/0.cdd10e73.css b/spaces/HugoDzz/spaceship_drift/build/_app/immutable/assets/0.cdd10e73.css deleted file mode 100644 index 28e52e5a1df71796e8b140cc7578b7c51ca696f9..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/assets/0.cdd10e73.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}@font-face{font-family:Hellovetica;font-weight:300;src:local("Hellovetica"),url(../../../fonts/hellovetica.ttf);font-display:swap}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: 
;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.-bottom-\[3px\]{bottom:-3px}.-left-\[3px\]{left:-3px}.-right-\[3px\]{right:-3px}.-top-\[3px\]{top:-3px}.bottom-6{bottom:1.5rem}.z-10{z-index:10}.mt-1{margin-top:.25rem}.mt-10{margin-top:2.5rem}.mt-12{margin-top:3rem}.mt-2{margin-top:.5rem}.mt-20{margin-top:5rem}.mt-4{margin-top:1rem}.mt-6{margin-top:1.5rem}.flex{display:flex}.contents{display:contents}.h-\[3px\]{height:3px}.w-60{width:15rem}.w-\[3px\]{width:3px}.w-full{width:100%}.flex-row{flex-direction:row}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.space-y-4>:not([hidden])~:not([hidden]){--tw-space-y-reverse: 0;margin-top:calc(1rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(1rem * var(--tw-space-y-reverse))}.overflow-x-hidden{overflow-x:hidden}.border-\[2px\]{border-width:2px}.border-\[3px\]{border-width:3px}.border-slate-800{--tw-border-opacity: 1;border-color:rgb(30 41 59 / var(--tw-border-opacity))}.bg-\[\#0C0F19\]{--tw-bg-opacity: 1;background-color:rgb(12 15 25 / var(--tw-bg-opacity))}.bg-slate-800{--tw-bg-opacity: 1;background-color:rgb(30 41 59 / var(--tw-bg-opacity))}.p-4{padding:1rem}.px-3{padding-left:.75rem;padding-right:.75rem}.py-5{padding-top:1.25rem;padding-bottom:1.25rem}.text-center{text-align:center}.font-Hellovetica{font-family:Hellovetica}.text-\[9px\]{font-size:9px}.text-xl{font-size:1.25rem;line-height:1.75rem}.text-xs{font-size:.75rem;line-height:1rem}.capitalize{text-transform:capitalize}.text-slate-100{--tw-text-opacity: 1;color:rgb(241 245 249 / var(--tw-text-opacity))}.text-slate-500{--tw-text-opacity: 1;color:rgb(100 116 139 
/ var(--tw-text-opacity))}.underline{text-decoration-line:underline}@media (min-width: 640px){.sm\:mt-20{margin-top:5rem}} diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/infer.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/infer.py deleted file mode 100644 index 3fb67151e0dc425e02d090a62b1d83e6039e6ccb..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/infer.py +++ /dev/null @@ -1,471 +0,0 @@ -#!/usr/bin/env python -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import hashlib -import logging -import os -import shutil -import sys -from dataclasses import dataclass, field, is_dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Tuple, Union - -import editdistance -import torch -import torch.distributed as dist -from examples.speech_recognition.new.decoders.decoder_config import ( - DecoderConfig, - FlashlightDecoderConfig, -) -from examples.speech_recognition.new.decoders.decoder import Decoder -from fairseq import checkpoint_utils, distributed_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import ( - CheckpointConfig, - CommonConfig, - CommonEvalConfig, - DatasetConfig, - DistributedTrainingConfig, - FairseqDataclass, -) -from fairseq.logging.meters import StopwatchMeter, TimeMeter -from fairseq.logging.progress_bar import BaseProgressBar -from fairseq.models.fairseq_model import FairseqModel -from omegaconf import OmegaConf - -import hydra -from hydra.core.config_store import ConfigStore - -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -config_path = Path(__file__).resolve().parent / "conf" - - -@dataclass -class DecodingConfig(DecoderConfig, FlashlightDecoderConfig): - unique_wer_file: bool = field( - default=False, - metadata={"help": "If set, use a unique file for storing WER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={ - "help": "If set, write hypothesis and reference sentences into this directory" - }, - ) - - -@dataclass -class InferConfig(FairseqDataclass): - task: Any = None - decoding: DecodingConfig = DecodingConfig() - common: CommonConfig = CommonConfig() - common_eval: CommonEvalConfig = CommonEvalConfig() - checkpoint: CheckpointConfig = CheckpointConfig() - distributed_training: DistributedTrainingConfig = DistributedTrainingConfig() - dataset: DatasetConfig = DatasetConfig() - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def reset_logging(): - root = logging.getLogger() - for handler in root.handlers: - root.removeHandler(handler) - root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper()) - handler = logging.StreamHandler(sys.stdout) - handler.setFormatter( - logging.Formatter( - fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - ) - ) - root.addHandler(handler) - - -class InferenceProcessor: - cfg: InferConfig - - def __init__(self, cfg: InferConfig) -> None: - self.cfg = cfg - self.task = tasks.setup_task(cfg.task) - - models, saved_cfg = self.load_model_ensemble() - self.models = models - self.saved_cfg = saved_cfg - self.tgt_dict = self.task.target_dictionary - - 
self.task.load_dataset( - self.cfg.dataset.gen_subset, - task_cfg=saved_cfg.task, - ) - self.generator = Decoder(cfg.decoding, self.tgt_dict) - self.gen_timer = StopwatchMeter() - self.wps_meter = TimeMeter() - self.num_sentences = 0 - self.total_errors = 0 - self.total_length = 0 - - self.hypo_words_file = None - self.hypo_units_file = None - self.ref_words_file = None - self.ref_units_file = None - - self.progress_bar = self.build_progress_bar() - - def __enter__(self) -> "InferenceProcessor": - if self.cfg.decoding.results_path is not None: - self.hypo_words_file = self.get_res_file("hypo.word") - self.hypo_units_file = self.get_res_file("hypo.units") - self.ref_words_file = self.get_res_file("ref.word") - self.ref_units_file = self.get_res_file("ref.units") - return self - - def __exit__(self, *exc) -> bool: - if self.cfg.decoding.results_path is not None: - self.hypo_words_file.close() - self.hypo_units_file.close() - self.ref_words_file.close() - self.ref_units_file.close() - return False - - def __iter__(self) -> Any: - for sample in self.progress_bar: - if not self.cfg.common.cpu: - sample = utils.move_to_cuda(sample) - - # Happens on the last batch. - if "net_input" not in sample: - continue - yield sample - - def log(self, *args, **kwargs): - self.progress_bar.log(*args, **kwargs) - - def print(self, *args, **kwargs): - self.progress_bar.print(*args, **kwargs) - - def get_res_file(self, fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - if self.data_parallel_world_size > 1: - fname = f"{fname}.{self.data_parallel_rank}" - return open(fname, "w", buffering=1) - - def merge_shards(self) -> None: - """Merges all shard files into shard 0, then removes shard suffix.""" - - shard_id = self.data_parallel_rank - num_shards = self.data_parallel_world_size - - if self.data_parallel_world_size > 1: - - def merge_shards_with_root(fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - logger.info("Merging %s on shard %d", fname, shard_id) - base_fpath = Path(f"{fname}.0") - with open(base_fpath, "a") as out_file: - for s in range(1, num_shards): - shard_fpath = Path(f"{fname}.{s}") - with open(shard_fpath, "r") as in_file: - for line in in_file: - out_file.write(line) - shard_fpath.unlink() - shutil.move(f"{fname}.0", fname) - - dist.barrier() # ensure all shards finished writing - if shard_id == (0 % num_shards): - merge_shards_with_root("hypo.word") - if shard_id == (1 % num_shards): - merge_shards_with_root("hypo.units") - if shard_id == (2 % num_shards): - merge_shards_with_root("ref.word") - if shard_id == (3 % num_shards): - merge_shards_with_root("ref.units") - dist.barrier() - - def optimize_model(self, model: FairseqModel) -> None: - model.make_generation_fast_() - if self.cfg.common.fp16: - model.half() - if not self.cfg.common.cpu: - model.cuda() - - def load_model_ensemble(self) -> Tuple[List[FairseqModel], FairseqDataclass]: - arg_overrides = ast.literal_eval(self.cfg.common_eval.model_overrides) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - utils.split_paths(self.cfg.common_eval.path, separator="\\"), - arg_overrides=arg_overrides, - task=self.task, - suffix=self.cfg.checkpoint.checkpoint_suffix, - strict=(self.cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=self.cfg.checkpoint.checkpoint_shard_count, - ) - for model in models: - self.optimize_model(model) - return models, saved_cfg - - def get_dataset_itr(self, disable_iterator_cache: bool = False) -> None: - return 
self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.gen_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ).next_epoch_itr(shuffle=False) - - def build_progress_bar( - self, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - default_log_format: str = "tqdm", - ) -> BaseProgressBar: - return progress_bar.progress_bar( - iterator=self.get_dataset_itr(), - log_format=self.cfg.common.log_format, - log_interval=self.cfg.common.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=self.cfg.common.tensorboard_logdir, - default_log_format=default_log_format, - ) - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - def process_sentence( - self, - sample: Dict[str, Any], - hypo: Dict[str, Any], - sid: int, - batch_id: int, - ) -> Tuple[int, int]: - speaker = None # Speaker can't be parsed from dataset. - - if "target_label" in sample: - toks = sample["target_label"] - else: - toks = sample["target"] - toks = toks[batch_id, :] - - # Processes hypothesis. - hyp_pieces = self.tgt_dict.string(hypo["tokens"].int().cpu()) - if "words" in hypo: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, self.cfg.common_eval.post_process) - - # Processes target. 
- target_tokens = utils.strip_pad(toks, self.tgt_dict.pad()) - tgt_pieces = self.tgt_dict.string(target_tokens.int().cpu()) - tgt_words = post_process(tgt_pieces, self.cfg.common_eval.post_process) - - if self.cfg.decoding.results_path is not None: - print(f"{hyp_pieces} ({speaker}-{sid})", file=self.hypo_units_file) - print(f"{hyp_words} ({speaker}-{sid})", file=self.hypo_words_file) - print(f"{tgt_pieces} ({speaker}-{sid})", file=self.ref_units_file) - print(f"{tgt_words} ({speaker}-{sid})", file=self.ref_words_file) - - if not self.cfg.common_eval.quiet: - logger.info(f"HYPO: {hyp_words}") - logger.info(f"REF: {tgt_words}") - logger.info("---------------------") - - hyp_words, tgt_words = hyp_words.split(), tgt_words.split() - - return editdistance.eval(hyp_words, tgt_words), len(tgt_words) - - def process_sample(self, sample: Dict[str, Any]) -> None: - self.gen_timer.start() - hypos = self.task.inference_step( - generator=self.generator, - models=self.models, - sample=sample, - ) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - self.gen_timer.stop(num_generated_tokens) - self.wps_meter.update(num_generated_tokens) - - for batch_id, sample_id in enumerate(sample["id"].tolist()): - errs, length = self.process_sentence( - sample=sample, - sid=sample_id, - batch_id=batch_id, - hypo=hypos[batch_id][0], - ) - self.total_errors += errs - self.total_length += length - - self.log({"wps": round(self.wps_meter.avg)}) - if "nsentences" in sample: - self.num_sentences += sample["nsentences"] - else: - self.num_sentences += sample["id"].numel() - - def log_generation_time(self) -> None: - logger.info( - "Processed %d sentences (%d tokens) in %.1fs %.2f " - "sentences per second, %.2f tokens per second)", - self.num_sentences, - self.gen_timer.n, - self.gen_timer.sum, - self.num_sentences / self.gen_timer.sum, - 1.0 / self.gen_timer.avg, - ) - - -def parse_wer(wer_file: Path) -> float: - with open(wer_file, "r") as f: - return float(f.readline().strip().split(" ")[1]) - - -def get_wer_file(cfg: InferConfig) -> Path: - """Hashes the decoding parameters to a unique file ID.""" - base_path = "wer" - if cfg.decoding.results_path is not None: - base_path = os.path.join(cfg.decoding.results_path, base_path) - - if cfg.decoding.unique_wer_file: - yaml_str = OmegaConf.to_yaml(cfg.decoding) - fid = int(hashlib.md5(yaml_str.encode("utf-8")).hexdigest(), 16) - return Path(f"{base_path}.{fid % 1000000}") - else: - return Path(base_path) - - -def main(cfg: InferConfig) -> float: - """Entry point for main processing logic. - - Args: - cfg: The inferance configuration to use. - wer: Optional shared memory pointer for returning the WER. If not None, - the final WER value will be written here instead of being returned. - - Returns: - The final WER if `wer` is None, otherwise None. - """ - - yaml_str, wer_file = OmegaConf.to_yaml(cfg.decoding), get_wer_file(cfg) - - # Validates the provided configuration. 
- if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.max_tokens = 4000000 - if not cfg.common.cpu and not torch.cuda.is_available(): - raise ValueError("CUDA not found; set `cpu=True` to run without CUDA") - - with InferenceProcessor(cfg) as processor: - for sample in processor: - processor.process_sample(sample) - - processor.log_generation_time() - - if cfg.decoding.results_path is not None: - processor.merge_shards() - - errs_t, leng_t = processor.total_errors, processor.total_length - - if cfg.common.cpu: - logger.warning("Merging WER requires CUDA.") - elif processor.data_parallel_world_size > 1: - stats = torch.LongTensor([errs_t, leng_t]).cuda() - dist.all_reduce(stats, op=dist.ReduceOp.SUM) - errs_t, leng_t = stats[0].item(), stats[1].item() - - wer = errs_t * 100.0 / leng_t - - if distributed_utils.is_master(cfg.distributed_training): - with open(wer_file, "w") as f: - f.write( - ( - f"WER: {wer}\n" - f"err / num_ref_words = {errs_t} / {leng_t}\n\n" - f"{yaml_str}" - ) - ) - - return wer - - -@hydra.main(config_path=config_path, config_name="infer") -def hydra_main(cfg: InferConfig) -> Union[float, Tuple[float, Optional[float]]]: - container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True) - cfg = OmegaConf.create(container) - OmegaConf.set_struct(cfg, True) - - if cfg.common.reset_logging: - reset_logging() - - # logger.info("Config:\n%s", OmegaConf.to_yaml(cfg)) - wer = float("inf") - - try: - if cfg.common.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - wer = parse_wer(get_wer_file(cfg)) - except BaseException as e: # pylint: disable=broad-except - if not cfg.common.suppress_crashes: - raise - else: - logger.error("Crashed! %s", str(e)) - - logger.info("Word error rate: %.4f", wer) - if cfg.is_ax: - return wer, None - - return wer - - -def cli_main() -> None: - try: - from hydra._internal.utils import ( - get_args, - ) # pylint: disable=import-outside-toplevel - - cfg_name = get_args().config_name or "infer" - except ImportError: - logger.warning("Failed to get config name from hydra args") - cfg_name = "infer" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=InferConfig) - - for k in InferConfig.__dataclass_fields__: - if is_dataclass(InferConfig.__dataclass_fields__[k].type): - v = InferConfig.__dataclass_fields__[k].default - cs.store(name=k, node=v) - - hydra_main() # pylint: disable=no-value-for-parameter - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py deleted file mode 100644 index f2b3966d2d6b103f3dc2ff170c12ab9663875684..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import logging -import os -from argparse import Namespace -from pathlib import Path - -import torch -from fairseq.data import ( - encoders, - Dictionary, - ResamplingDataset, - TransformEosLangPairDataset, - ConcatDataset, -) -from fairseq.data.iterators import GroupedEpochBatchIterator -from fairseq.data.audio.multi_modality_dataset import ( - MultiModalityDataset, - LangPairMaskDataset, - ModalityDatasetItem, -) -from fairseq.data.audio.speech_to_text_dataset import SpeechToTextDataset, SpeechToTextDatasetCreator -from fairseq.data.audio.speech_to_text_joint_dataset import ( - S2TJointDataConfig, - SpeechToTextJointDatasetCreator, -) -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.tasks.translation import load_langpair_dataset - -logger = logging.getLogger(__name__) -LANG_TAG_TEMPLATE = "" - - -@register_task("speech_text_joint_to_text") -class SpeechTextJointToTextTask(SpeechToTextTask): - """ - Task for joint training speech and text to text. - """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - super(SpeechTextJointToTextTask, cls).add_args(parser) - ### - parser.add_argument( - "--parallel-text-data", - default="", - help="path to parallel text data directory", - ) - parser.add_argument( - "--max-tokens-text", - type=int, - metavar="N", - help="maximum tokens for encoder text input ", - ) - parser.add_argument( - "--max-positions-text", - type=int, - metavar="N", - default=400, - help="maximum tokens for per encoder text input ", - ) - parser.add_argument( - "--langpairs", - default=None, - metavar="S", - help='language pairs for text training, separated with ","', - ) - parser.add_argument( - "--speech-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for speech dataset with transcripts ", - ) - parser.add_argument( - "--text-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for text set ", - ) - parser.add_argument( - "--update-mix-data", - action="store_true", - help="use mixed data in one update when update-freq > 1", - ) - parser.add_argument( - "--load-speech-only", - action="store_true", - help="load speech data only", - ) - parser.add_argument( - "--mask-text-ratio", - type=float, - metavar="V", - default=0.0, - help="mask V source tokens for text only mode", - ) - parser.add_argument( - "--mask-text-type", - default="random", - choices=["random", "tail"], - help="mask text typed", - ) - parser.add_argument( - "--noise-token", - default="", - help="noise token for masking src text tokens if mask-text-ratio > 0", - ) - parser.add_argument( - "--infer-target-lang", - default="", - metavar="S", - help="target language for inference", - ) - - def __init__(self, args, src_dict, tgt_dict, infer_tgt_lang_id=None): - super().__init__(args, tgt_dict) - self.src_dict = src_dict - self.data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - assert self.tgt_dict.pad() == self.src_dict.pad() - assert self.tgt_dict.eos() == self.src_dict.eos() - self.speech_only = args.load_speech_only - self._infer_tgt_lang_id = infer_tgt_lang_id - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries).""" - data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - tgt_dict_path = Path(args.data) / data_cfg.vocab_filename - src_dict_path = Path(args.data) / data_cfg.src_vocab_filename - if (not os.path.isfile(src_dict_path)) or (not os.path.isfile(tgt_dict_path)): 
- raise FileNotFoundError("Dict not found: {}".format(args.data)) - src_dict = Dictionary.load(src_dict_path.as_posix()) - tgt_dict = Dictionary.load(tgt_dict_path.as_posix()) - - print("| src dictionary: {} types".format(len(src_dict))) - print("| tgt dictionary: {} types".format(len(tgt_dict))) - - if args.parallel_text_data != "": - if not os.path.isabs(args.parallel_text_data): - args.parallel_text_data = os.path.join( - args.data, args.parallel_text_data - ) - - if args.langpairs is None: - raise Exception( - "Could not infer language pair, please provide it explicitly" - ) - infer_tgt_lang_id = None - if args.infer_target_lang != "" and data_cfg.prepend_tgt_lang_tag_no_change: - tgt_lang_tag = SpeechToTextDataset.LANG_TAG_TEMPLATE.format( - args.infer_target_lang - ) - infer_tgt_lang_id = tgt_dict.index(tgt_lang_tag) - assert infer_tgt_lang_id != tgt_dict.unk() - return cls(args, src_dict, tgt_dict, infer_tgt_lang_id=infer_tgt_lang_id) - - def load_langpair_dataset(self, prepend_tgt_lang_tag=False, sampling_alpha=1.0, epoch=0): - lang_pairs = [] - text_dataset = None - split = "train" - for lp in self.args.langpairs.split(","): - src, tgt = lp.split("-") - text_dataset = load_langpair_dataset( - self.args.parallel_text_data, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=True, - dataset_impl=None, - upsample_primary=1, - left_pad_source=False, - left_pad_target=False, - max_source_positions=self.args.max_positions_text, - max_target_positions=self.args.max_target_positions, - load_alignments=False, - truncate_source=False, - ) - if prepend_tgt_lang_tag: - # TODO - text_dataset = TransformEosLangPairDataset( - text_dataset, - src_eos=self.src_dict.eos(), - tgt_bos=self.tgt_dict.eos(), # 'prev_output_tokens' starts with eos - new_tgt_bos=self.tgt_dict.index(LANG_TAG_TEMPLATE.format(tgt)), - ) - lang_pairs.append(text_dataset) - if len(lang_pairs) > 1: - if sampling_alpha != 1.0: - size_ratios = SpeechToTextDatasetCreator.get_size_ratios( - self.args.langpairs.split(","), - [len(s) for s in lang_pairs], - alpha=sampling_alpha, - ) - lang_pairs = [ - ResamplingDataset( - d, size_ratio=r, epoch=epoch, replace=(r >= 1.0) - ) - for d, r in zip(lang_pairs, size_ratios) - ] - return ConcatDataset(lang_pairs) - return text_dataset - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=self._infer_tgt_lang_id, - ) - - def build_src_tokenizer(self, args): - logger.info(f"src-pre-tokenizer: {self.data_cfg.src_pre_tokenizer}") - return encoders.build_tokenizer(Namespace(**self.data_cfg.src_pre_tokenizer)) - - def build_src_bpe(self, args): - logger.info(f"tokenizer: {self.data_cfg.src_bpe_tokenizer}") - return encoders.build_bpe(Namespace(**self.data_cfg.src_bpe_tokenizer)) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - src_pre_tokenizer = self.build_src_tokenizer(self.args) - src_bpe_tokenizer = self.build_src_bpe(self.args) - ast_dataset = SpeechToTextJointDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.tgt_dict, - src_dict=None if self.speech_only else self.src_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - src_pre_tokenizer=src_pre_tokenizer, - src_bpe_tokenizer=src_bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - ) - noise_token_id = -1 - text_dataset = None - if self.args.parallel_text_data != "" and is_train_split: - text_dataset = self.load_langpair_dataset( - self.data_cfg.prepend_tgt_lang_tag_no_change, - 1.0, - epoch=epoch, - ) - if self.args.mask_text_ratio > 0: - # add mask - noise_token_id = ( - self.src_dict.unk() - if self.args.noise_token == "" - else self.src_dict.index(self.args.noise_token) - ) - text_dataset = LangPairMaskDataset( - text_dataset, - src_bos=self.src_dict.bos(), - src_eos=self.src_dict.eos(), - noise_id=noise_token_id, - mask_ratio=self.args.mask_text_ratio, - mask_type=self.args.mask_text_type, - ) - - if text_dataset is not None: - mdsets = [ - ModalityDatasetItem( - "sup_speech", - ast_dataset, - (self.args.max_source_positions, self.args.max_target_positions), - self.args.max_tokens, - self.args.batch_size, - ), - ModalityDatasetItem( - "text", - text_dataset, - (self.args.max_positions_text, self.args.max_target_positions), - self.args.max_tokens_text - if self.args.max_tokens_text is not None - else self.args.max_tokens, - self.args.batch_size, - ), - ] - ast_dataset = MultiModalityDataset(mdsets) - self.datasets[split] = ast_dataset - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.tgt_dict - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - return None if self.speech_only else self.src_dict - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=0, - data_buffer_size=0, - disable_iterator_cache=False, - ): - - if not isinstance(dataset, MultiModalityDataset): - return super(SpeechTextJointToTextTask, self).get_batch_iterator( - dataset, - max_tokens, - max_sentences, - max_positions, - ignore_invalid_inputs, - required_batch_size_multiple, - seed, - num_shards, - shard_id, - num_workers, - epoch, - data_buffer_size, - disable_iterator_cache, - ) - - mult_ratio = [self.args.speech_sample_ratio, self.args.text_sample_ratio] - assert len(dataset.datasets) == 2 - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - batch_samplers = dataset.get_batch_samplers( - mult_ratio, required_batch_size_multiple, seed - ) - - # return a reusable, sharded iterator - epoch_iter = GroupedEpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_samplers=batch_samplers, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - mult_rate=1 if self.args.update_mix_data else max(self.args.update_freq), - 
buffer_size=data_buffer_size, - ) - self.dataset_to_epoch_iter[dataset] = {} # refresh it every epoch - return epoch_iter diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_gradfix.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_gradfix.py deleted file mode 100644 index 19aba5ca78f1228e4b8e3aafccbbe072c747f007..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_gradfix.py +++ /dev/null @@ -1,219 +0,0 @@ -# python3.7 - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for convolution operators. - -Operators in this file support arbitrarily high order gradients with zero -performance penalty. Please set `impl` as `cuda` to use faster customized -operators, OR as `ref` to use native `torch.nn.functional.conv2d` and -`torch.nn.functional.conv_transpose2d`. - -Please refer to https://github.com/NVlabs/stylegan3 -""" - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access -# pylint: disable=line-too-long -# pylint: disable=global-statement -# pylint: disable=missing-class-docstring -# pylint: disable=missing-function-docstring - -import contextlib -import torch - -#---------------------------------------------------------------------------- - -enabled = True # Enable the custom op by setting this to true. -weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights. 
- -@contextlib.contextmanager -def no_weight_gradients(disable=True): - global weight_gradients_disabled - old = weight_gradients_disabled - if disable: - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -#---------------------------------------------------------------------------- - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, impl='cuda'): - if impl == 'cuda' and _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1, impl='cuda'): - if impl == 'cuda' and _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - return True - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -#---------------------------------------------------------------------------- - -_conv2d_gradfix_cache = dict() -_null_tensor = torch.empty([0]) - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. - common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups) - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. 
- class Conv2d(torch.autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - ctx.save_for_backward( - input if weight.requires_grad else _null_tensor, - weight if input.requires_grad else _null_tensor, - ) - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (only on Volta, not on Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0) and torch.cuda.get_device_capability(input.device) < (8, 0): - a = weight.reshape(groups, weight_shape[0] // groups, weight_shape[1]) - b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1) - c = (a.transpose(1, 2) if transpose else a) @ b.permute(1, 2, 0, 3).flatten(2) - c = c.reshape(-1, input.shape[0], *input.shape[2:]).transpose(0, 1) - c = c if bias is None else c + bias.unsqueeze(0).unsqueeze(2).unsqueeze(3) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. - if transpose: - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - input_shape = ctx.input_shape - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding(input_shape=input_shape, output_shape=grad_output.shape) - op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad_input = op.apply(grad_output, weight, None) - assert grad_input.shape == input_shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - ctx.save_for_backward( - grad_output if input.requires_grad else _null_tensor, - input if grad_output.requires_grad else _null_tensor, - ) - ctx.grad_output_shape = grad_output.shape - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (on both Volta and Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0): - a = grad_output.reshape(grad_output.shape[0], groups, grad_output.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - c = (b @ a.transpose(1, 2) if transpose else a @ b.transpose(1, 2)).reshape(weight_shape) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. 
- name = 'aten::cudnn_convolution_transpose_backward_weight' if transpose else 'aten::cudnn_convolution_backward_weight' - flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - return torch._C._jit_get_operation(name)(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - - @staticmethod - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad_output_shape = ctx.grad_output_shape - input_shape = ctx.input_shape - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output_shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding(input_shape=input_shape, output_shape=grad_output_shape) - op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad2_input = op.apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input_shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -#---------------------------------------------------------------------------- - -# pylint: enable=redefined-builtin -# pylint: enable=arguments-differ -# pylint: enable=protected-access -# pylint: enable=line-too-long -# pylint: enable=global-statement -# pylint: enable=missing-class-docstring -# pylint: enable=missing-function-docstring diff --git a/spaces/Illumotion/Koboldcpp/.devops/full-rocm.Dockerfile b/spaces/Illumotion/Koboldcpp/.devops/full-rocm.Dockerfile deleted file mode 100644 index 6c521e9b4101fe067d7018ca469f3dc2b994c768..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/.devops/full-rocm.Dockerfile +++ /dev/null @@ -1,44 +0,0 @@ -ARG UBUNTU_VERSION=22.04 - -# This needs to generally match the container host's environment. -ARG ROCM_VERSION=5.6 - -# Target the CUDA build image -ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete - -FROM ${BASE_ROCM_DEV_CONTAINER} as build - -# Unless otherwise specified, we make a fat build. -# List from https://github.com/ggerganov/llama.cpp/pull/1087#issuecomment-1682807878 -# This is mostly tied to rocBLAS supported archs. -ARG ROCM_DOCKER_ARCH=\ - gfx803 \ - gfx900 \ - gfx906 \ - gfx908 \ - gfx90a \ - gfx1010 \ - gfx1030 \ - gfx1100 \ - gfx1101 \ - gfx1102 - -COPY requirements.txt requirements.txt - -RUN pip install --upgrade pip setuptools wheel \ - && pip install -r requirements.txt - -WORKDIR /app - -COPY . . - -# Set nvcc architecture -ENV GPU_TARGETS=${ROCM_DOCKER_ARCH} -# Enable ROCm -ENV LLAMA_HIPBLAS=1 -ENV CC=/opt/rocm/llvm/bin/clang -ENV CXX=/opt/rocm/llvm/bin/clang++ - -RUN make - -ENTRYPOINT ["/app/.devops/tools.sh"] diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/weights/README.md b/spaces/Jasonyoyo/CodeFormer/CodeFormer/weights/README.md deleted file mode 100644 index 67ad334bd672eeb9f82813cd54e8885331bbb2f2..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/weights/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Weights - -Put the downloaded pre-trained models to this folder. 
\ No newline at end of file diff --git a/spaces/JeremyK/JewelryVision/README.md b/spaces/JeremyK/JewelryVision/README.md deleted file mode 100644 index f8b269b96d5658581197876e4d8d939a3ff410a5..0000000000000000000000000000000000000000 --- a/spaces/JeremyK/JewelryVision/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: JewelryVision -emoji: 🦀 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KVNAditya/Personal_News_Summarization_Assistant/README.md b/spaces/KVNAditya/Personal_News_Summarization_Assistant/README.md deleted file mode 100644 index 618b8d12bbf9a94fd412fcb9116446a6be2779d0..0000000000000000000000000000000000000000 --- a/spaces/KVNAditya/Personal_News_Summarization_Assistant/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Personal News Summarization Assistant -emoji: 👀 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.27.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kairi7865/Kairi2/Dockerfile b/spaces/Kairi7865/Kairi2/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Kairi7865/Kairi2/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/fovea_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/fovea_head.py deleted file mode 100644 index 89353deac7f0189c1e464288521ee8e4238f0107..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d -from mmengine.config import ConfigDict -from mmengine.model import BaseModule -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import InstanceList, OptInstanceList, OptMultiConfig -from ..utils import filter_scores_and_topk, multi_apply -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(BaseModule): - """Feature Align Module. - - Feature Align Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. - kernel_size (int): Size of the convolution kernel. - ``norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)``. - deform_groups: (int): Group number of DCN in - FeatureAdaption module. - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict], optional): Initialization config dict. 
- """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - deform_groups: int = 4, - init_cfg: OptMultiConfig = dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict(type='Normal', name='conv_adaption', std=0.01)) - ) -> None: - super().__init__(init_cfg=init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x: Tensor, shape: Tensor) -> Tensor: - """Forward function of feature align module. - - Args: - x (Tensor): Features from the upstream network. - shape (Tensor): Exponential of bbox predictions. - - Returns: - x (Tensor): The aligned features. - """ - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@MODELS.register_module() -class FoveaHead(AnchorFreeHead): - """Detection Head of `FoveaBox: Beyond Anchor-based Object Detector. - - `_. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - base_edge_list (list[int]): List of edges. - scale_ranges (list[tuple]): Range of scales. - sigma (float): Super parameter of ``FoveaHead``. - with_deform (bool): Whether use deform conv. - deform_groups (int): Deformable conv group size. - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes: int, - in_channels: int, - base_edge_list: List[int] = (16, 32, 64, 128, 256), - scale_ranges: List[tuple] = ((8, 32), (16, 64), (32, 128), - (64, 256), (128, 512)), - sigma: float = 0.4, - with_deform: bool = False, - deform_groups: int = 4, - init_cfg: OptMultiConfig = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs) -> None: - self.base_edge_list = base_edge_list - self.scale_ranges = scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__( - num_classes=num_classes, - in_channels=in_channels, - init_cfg=init_cfg, - **kwargs) - - def _init_layers(self) -> None: - """Initialize layers of the head.""" - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def forward_single(self, x: Tensor) -> Tuple[Tensor, Tensor]: - """Forward features of a single scale 
level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: scores for each class and bbox predictions of input - feature maps. - """ - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None - ) -> Dict[str, Tensor]: - """Calculate the loss based on the features extracted by the detection - head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_priors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_priors * 4. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - batch_gt_instances, featmap_sizes, priors) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_weights = pos_bbox_targets.new_ones(pos_bbox_targets.size()) - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets( - self, batch_gt_instances: InstanceList, featmap_sizes: List[tuple], - priors_list: List[Tensor]) -> Tuple[List[Tensor], List[Tensor]]: - """Compute regression and classification for priors in multiple images. - - Args: - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - featmap_sizes (list[tuple]): Size tuple of feature maps. 
- priors_list (list[Tensor]): Priors list of each fpn level, each has - shape (num_priors, 2). - - Returns: - tuple: Targets of each level. - - - flatten_labels (list[Tensor]): Labels of each level. - - flatten_bbox_targets (list[Tensor]): BBox targets of each - level. - """ - label_list, bbox_target_list = multi_apply( - self._get_targets_single, - batch_gt_instances, - featmap_size_list=featmap_sizes, - priors_list=priors_list) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_targets_single(self, - gt_instances: InstanceData, - featmap_size_list: List[tuple] = None, - priors_list: List[Tensor] = None) -> tuple: - """Compute regression and classification targets for a single image. - - Args: - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It usually includes ``bboxes`` and ``labels`` - attributes. - featmap_size_list (list[tuple]): Size tuple of feature maps. - priors_list (list[Tensor]): Priors of each fpn level, each has - shape (num_priors, 2). - - Returns: - tuple: - - - label_list (list[Tensor]): Labels of all anchors in the image. - - box_target_list (list[Tensor]): BBox targets of all anchors in - the image. - """ - gt_bboxes_raw = gt_instances.bboxes - gt_labels_raw = gt_instances.labels - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - priors in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, priors_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - priors = priors.view(*featmap_size, 2) - x, y = priors[..., 0], priors[..., 1] - labels = gt_labels_raw.new_full(featmap_size, self.num_classes) - bbox_targets = gt_bboxes_raw.new_ones(featmap_size[0], - featmap_size[1], 4) - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long(). 
\ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - # Same as base_dense_head/_predict_by_feat_single except self._bbox_decode - def _predict_by_feat_single(self, - cls_score_list: List[Tensor], - bbox_pred_list: List[Tensor], - score_factor_list: List[Tensor], - mlvl_priors: List[Tensor], - img_meta: dict, - cfg: Optional[ConfigDict] = None, - rescale: bool = False, - with_nms: bool = True) -> InstanceData: - """Transform a single image's features extracted from the head into - bbox results. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (ConfigDict, optional): Test / postprocessing - configuration, if None, test_cfg would be used. - Defaults to None. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - with_nms (bool): If True, do nms before return boxes. - Defaults to True. - - Returns: - :obj:`InstanceData`: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, base_len, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, self.strides, - self.base_edge_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. 
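# filter_scores_and_topk returns (scores, labels, keep_idxs, filtered_results);
# the bbox_pred / priors belonging to the surviving candidates are read back
# out of filtered_results below.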
- results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, base_len, img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - results = InstanceData() - results.bboxes = torch.cat(mlvl_bboxes) - results.scores = torch.cat(mlvl_scores) - results.labels = torch.cat(mlvl_labels) - - return self._bbox_post_process( - results=results, - cfg=cfg, - rescale=rescale, - with_nms=with_nms, - img_meta=img_meta) - - def _bbox_decode(self, priors: Tensor, bbox_pred: Tensor, base_len: int, - max_shape: int) -> Tensor: - """Function to decode bbox. - - Args: - priors (Tensor): Center proiors of an image, has shape - (num_instances, 2). - bbox_preds (Tensor): Box energies / deltas for all instances, - has shape (batch_size, num_instances, 4). - base_len (int): The base length. - max_shape (int): The max shape of bbox. - - Returns: - Tensor: Decoded bboxes in (tl_x, tl_y, br_x, br_y) format. Has - shape (batch_size, num_instances, 4). - """ - bbox_pred = bbox_pred.exp() - - y = priors[:, 1] - x = priors[:, 0] - x1 = (x - base_len * bbox_pred[:, 0]). \ - clamp(min=0, max=max_shape[1] - 1) - y1 = (y - base_len * bbox_pred[:, 1]). \ - clamp(min=0, max=max_shape[0] - 1) - x2 = (x + base_len * bbox_pred[:, 2]). \ - clamp(min=0, max=max_shape[1] - 1) - y2 = (y + base_len * bbox_pred[:, 3]). \ - clamp(min=0, max=max_shape[0] - 1) - decoded_bboxes = torch.stack([x1, y1, x2, y2], -1) - return decoded_bboxes diff --git a/spaces/Laihiujin/OneFormer/oneformer/utils/__init__.py b/spaces/Laihiujin/OneFormer/oneformer/utils/__init__.py deleted file mode 100644 index 130d3011b032f91df1a9cf965625e54922f6c81b..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/utils/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .events import setup_wandb, WandbWriter \ No newline at end of file diff --git a/spaces/Lamai/LAMAIGPT/tests/unit/test_chat.py b/spaces/Lamai/LAMAIGPT/tests/unit/test_chat.py deleted file mode 100644 index 774f4103762c28d5a02e89c14b224fae0bc0756a..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/unit/test_chat.py +++ /dev/null @@ -1,86 +0,0 @@ -# Generated by CodiumAI -import time -import unittest -from unittest.mock import patch - -from autogpt.chat import create_chat_message, generate_context - - -class TestChat(unittest.TestCase): - # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content. - def test_happy_path_role_content(self): - result = create_chat_message("system", "Hello, world!") - self.assertEqual(result, {"role": "system", "content": "Hello, world!"}) - - # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content. - def test_empty_role_content(self): - result = create_chat_message("", "") - self.assertEqual(result, {"role": "", "content": ""}) - - # Tests the behavior of the generate_context function when all input parameters are empty. 
- @patch("time.strftime") - def test_generate_context_empty_inputs(self, mock_strftime): - # Mock the time.strftime function to return a fixed value - mock_strftime.return_value = "Sat Apr 15 00:00:00 2023" - # Arrange - prompt = "" - relevant_memory = "" - full_message_history = [] - model = "gpt-3.5-turbo-0301" - - # Act - result = generate_context(prompt, relevant_memory, full_message_history, model) - - # Assert - expected_result = ( - -1, - 47, - 3, - [ - {"role": "system", "content": ""}, - { - "role": "system", - "content": f"The current time and date is {time.strftime('%c')}", - }, - { - "role": "system", - "content": f"This reminds you of these events from your past:\n\n\n", - }, - ], - ) - self.assertEqual(result, expected_result) - - # Tests that the function successfully generates a current_context given valid inputs. - def test_generate_context_valid_inputs(self): - # Given - prompt = "What is your favorite color?" - relevant_memory = "You once painted your room blue." - full_message_history = [ - create_chat_message("user", "Hi there!"), - create_chat_message("assistant", "Hello! How can I assist you today?"), - create_chat_message("user", "Can you tell me a joke?"), - create_chat_message( - "assistant", - "Why did the tomato turn red? Because it saw the salad dressing!", - ), - create_chat_message("user", "Haha, that's funny."), - ] - model = "gpt-3.5-turbo-0301" - - # When - result = generate_context(prompt, relevant_memory, full_message_history, model) - - # Then - self.assertIsInstance(result[0], int) - self.assertIsInstance(result[1], int) - self.assertIsInstance(result[2], int) - self.assertIsInstance(result[3], list) - self.assertGreaterEqual(result[0], 0) - self.assertGreaterEqual(result[1], 0) - self.assertGreaterEqual(result[2], 0) - self.assertGreaterEqual( - len(result[3]), 3 - ) # current_context should have at least 3 messages - self.assertLessEqual( - result[1], 2048 - ) # token limit for GPT-3.5-turbo-0301 is 2048 tokens diff --git a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_msvd_retrieval.py b/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_msvd_retrieval.py deleted file mode 100644 index 94e312ee5379c53e807ced0745aa19904462e057..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_msvd_retrieval.py +++ /dev/null @@ -1,191 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import unicode_literals -from __future__ import print_function - -import os -from torch.utils.data import Dataset -import numpy as np -import pickle -from .rawvideo_util import RawVideoExtractor - -class MSVD_DataLoader(Dataset): - """MSVD dataset loader.""" - def __init__( - self, - subset, - data_path, - features_path, - tokenizer, - max_words=30, - feature_framerate=1.0, - max_frames=100, - image_resolution=224, - frame_order=0, - slice_framepos=0, - ): - self.data_path = data_path - self.features_path = features_path - self.feature_framerate = feature_framerate - self.max_words = max_words - self.max_frames = max_frames - self.tokenizer = tokenizer - # 0: ordinary order; 1: reverse order; 2: random order. - self.frame_order = frame_order - assert self.frame_order in [0, 1, 2] - # 0: cut from head frames; 1: cut from tail frames; 2: extract frames uniformly. 
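# (e.g. slice_framepos == 2 with max_frames == 12 samples the frame indices
#  uniformly via np.linspace(0, n_frames - 1, num=12, dtype=int), matching
#  the sampling applied in _get_rawvideo below)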
- self.slice_framepos = slice_framepos - assert self.slice_framepos in [0, 1, 2] - - self.subset = subset - assert self.subset in ["train", "val", "test"] - video_id_path_dict = {} - video_id_path_dict["train"] = os.path.join(self.data_path, "train_list.txt") - video_id_path_dict["val"] = os.path.join(self.data_path, "val_list.txt") - video_id_path_dict["test"] = os.path.join(self.data_path, "test_list.txt") - caption_file = os.path.join(self.data_path, "raw-captions.pkl") - - with open(video_id_path_dict[self.subset], 'r') as fp: - video_ids = [itm.strip() for itm in fp.readlines()] - - with open(caption_file, 'rb') as f: - captions = pickle.load(f) - - video_dict = {} - for root, dub_dir, video_files in os.walk(self.features_path): - for video_file in video_files: - video_id_ = ".".join(video_file.split(".")[:-1]) - if video_id_ not in video_ids: - continue - file_path_ = os.path.join(root, video_file) - video_dict[video_id_] = file_path_ - self.video_dict = video_dict - - self.sample_len = 0 - self.sentences_dict = {} - self.cut_off_points = [] - for video_id in video_ids: - assert video_id in captions - for cap in captions[video_id]: - cap_txt = " ".join(cap) - self.sentences_dict[len(self.sentences_dict)] = (video_id, cap_txt) - self.cut_off_points.append(len(self.sentences_dict)) - - ## below variables are used to multi-sentences retrieval - # self.cut_off_points: used to tag the label when calculate the metric - # self.sentence_num: used to cut the sentence representation - # self.video_num: used to cut the video representation - self.multi_sentence_per_video = True # !!! important tag for eval - if self.subset == "val" or self.subset == "test": - self.sentence_num = len(self.sentences_dict) - self.video_num = len(video_ids) - assert len(self.cut_off_points) == self.video_num - print("For {}, sentence number: {}".format(self.subset, self.sentence_num)) - print("For {}, video number: {}".format(self.subset, self.video_num)) - - print("Video number: {}".format(len(self.video_dict))) - print("Total Paire: {}".format(len(self.sentences_dict))) - - self.sample_len = len(self.sentences_dict) - self.rawVideoExtractor = RawVideoExtractor(framerate=feature_framerate, size=image_resolution) - self.SPECIAL_TOKEN = {"CLS_TOKEN": "<|startoftext|>", "SEP_TOKEN": "<|endoftext|>", - "MASK_TOKEN": "[MASK]", "UNK_TOKEN": "[UNK]", "PAD_TOKEN": "[PAD]"} - - def __len__(self): - return self.sample_len - - def _get_text(self, video_id, caption): - k = 1 - choice_video_ids = [video_id] - pairs_text = np.zeros((k, self.max_words), dtype=np.long) - pairs_mask = np.zeros((k, self.max_words), dtype=np.long) - pairs_segment = np.zeros((k, self.max_words), dtype=np.long) - - for i, video_id in enumerate(choice_video_ids): - # words = self.tokenizer.tokenize(caption) - # - # words = [self.SPECIAL_TOKEN["CLS_TOKEN"]] + words - # total_length_with_CLS = self.max_words - 1 - # if len(words) > total_length_with_CLS: - # words = words[:total_length_with_CLS] - # words = words + [self.SPECIAL_TOKEN["SEP_TOKEN"]] - # - # input_ids = self.tokenizer.convert_tokens_to_ids(words) - # input_mask = [1] * len(input_ids) - # segment_ids = [0] * len(input_ids) - - - output = self.tokenizer(caption) - - input_ids = output[0].squeeze() - input_mask = output[1].squeeze() - segment_ids = [0] * len(input_ids) - - - while len(input_ids) < self.max_words: - input_ids.append(0) - input_mask.append(0) - segment_ids.append(0) - assert len(input_ids) == self.max_words - assert len(input_mask) == self.max_words - assert len(segment_ids) == 
self.max_words - - pairs_text[i] = np.array(input_ids) - pairs_mask[i] = np.array(input_mask) - pairs_segment[i] = np.array(segment_ids) - - return pairs_text, pairs_mask, pairs_segment, choice_video_ids - - def _get_rawvideo(self, choice_video_ids): - video_mask = np.zeros((len(choice_video_ids), self.max_frames), dtype=np.long) - max_video_length = [0] * len(choice_video_ids) - - # Pair x L x T x 3 x H x W - video = np.zeros((len(choice_video_ids), self.max_frames, 1, 3, - self.rawVideoExtractor.size, self.rawVideoExtractor.size), dtype=np.float) - - for i, video_id in enumerate(choice_video_ids): - video_path = self.video_dict[video_id] - - raw_video_data = self.rawVideoExtractor.get_video_data(video_path) - raw_video_data = raw_video_data['video'] - # print('raw_video_data', raw_video_data.shape) - - if len(raw_video_data.shape) > 3: - raw_video_data_clip = raw_video_data - # L x T x 3 x H x W - raw_video_slice = self.rawVideoExtractor.process_raw_data(raw_video_data_clip) - if self.max_frames < raw_video_slice.shape[0]: - if self.slice_framepos == 0: - video_slice = raw_video_slice[:self.max_frames, ...] - elif self.slice_framepos == 1: - video_slice = raw_video_slice[-self.max_frames:, ...] - else: - sample_indx = np.linspace(0, raw_video_slice.shape[0] - 1, num=self.max_frames, dtype=int) - # print('sample_indx', raw_video_slice.shape[0], sample_indx) - video_slice = raw_video_slice[sample_indx, ...] - else: - video_slice = raw_video_slice - - video_slice = self.rawVideoExtractor.process_frame_order(video_slice, frame_order=self.frame_order) - - slice_len = video_slice.shape[0] - max_video_length[i] = max_video_length[i] if max_video_length[i] > slice_len else slice_len - if slice_len < 1: - pass - else: - video[i][:slice_len, ...] = video_slice - else: - print("video path: {} error. 
video id: {}".format(video_path, video_id)) - - for i, v_length in enumerate(max_video_length): - video_mask[i][:v_length] = [1] * v_length - - return video, video_mask - - def __getitem__(self, idx): - video_id, caption = self.sentences_dict[idx] - - pairs_text, pairs_mask, pairs_segment, choice_video_ids = self._get_text(video_id, caption) - video, video_mask = self._get_rawvideo(choice_video_ids) - return pairs_text, pairs_mask, pairs_segment, video, video_mask \ No newline at end of file diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py deleted file mode 100644 index 9b58f05ef69d1ea96ccf5d3d018b27acbb1c3b32..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py +++ /dev/null @@ -1,355 +0,0 @@ -import os, sys, torch, warnings - -now_dir = os.getcwd() -sys.path.append(now_dir) - -warnings.filterwarnings("ignore") -import librosa -import numpy as np -from lib.uvr5_pack.lib_v5 import spec_utils -from lib.uvr5_pack.utils import inference -from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters -import soundfile as sf -from lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from lib.uvr5_pack.lib_v5 import nets_61968KB as nets - - -class _audio_pre_: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
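# The remainder of this method splits the input audio into the model's
# frequency bands: the highest band is loaded from disk, each lower band is
# obtained by resampling the band above it, per-band STFTs are computed, and
# the band spectrograms are then combined into the single input spectrogram
# fed to the separation network.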
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - 
sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class _audio_pre_new: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), 
- self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -if __name__ == "__main__": - device = "cuda" - is_half = True - model_path = "assets/uvr5_weights/DeEchoNormal.pth" - pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10) - audio_path = "雪雪伴奏对消HP5.wav" - save_path = "opt" - pre_fun._path_audio_(audio_path, save_path, save_path) diff --git a/spaces/Lerdweg/Energie-NRW/README.md b/spaces/Lerdweg/Energie-NRW/README.md deleted file mode 100644 index 35490137ffa54d6ddf641ba7b7de2a642b408826..0000000000000000000000000000000000000000 --- a/spaces/Lerdweg/Energie-NRW/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Energie NRW -emoji: 🏆 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MISATO-dataset/Adaptability_protein_dynamics/graph.py b/spaces/MISATO-dataset/Adaptability_protein_dynamics/graph.py deleted file mode 100644 index aa6446dc02d38547da595573a023d08522e2a0f6..0000000000000000000000000000000000000000 --- a/spaces/MISATO-dataset/Adaptability_protein_dynamics/graph.py +++ /dev/null @@ -1,121 +0,0 @@ -import numpy as np -import scipy.spatial as ss -import torch -import torch.nn.functional as F -from torch_geometric.utils import to_undirected -from torch_sparse import coalesce - -atom_mapping = {0:'H', 1:'C', 2:'N', 3:'O', 4:'F', 5:'P', 6:'S', 7:'CL', 8:'BR', 9:'I', 10: 'UNK'} -residue_mapping = {0:'ALA', 1:'ARG', 2:'ASN', 3:'ASP', 4:'CYS', 5:'CYX', 6:'GLN', 7:'GLU', 8:'GLY', 9:'HIE', 10:'ILE', 11:'LEU', 12:'LYS', 13:'MET', 14:'PHE', 15:'PRO', 16:'SER', 17:'THR', 18:'TRP', 19:'TYR', 20:'VAL', 21:'UNK'} - -ligand_atoms_mapping = {8: 0, 16: 1, 6: 2, 7: 3, 1: 4, 15: 5, 17: 6, 9: 7, 53: 8, 35: 9, 5: 10, 33: 11, 26: 12, 14: 13, 34: 14, 44: 15, 12: 16, 23: 17, 77: 18, 27: 19, 52: 20, 30: 21, 4: 22, 45: 23} - - -def prot_df_to_graph(item, df, edge_dist_cutoff, feat_col='element'): - r""" - Converts protein in dataframe representation to a graph compatible with Pytorch-Geometric, where each node is an atom. - - :param df: Protein structure in dataframe format. 
- :type df: pandas.DataFrame - :param node_col: Column of dataframe to find node feature values. For example, for atoms use ``feat_col="element"`` and for residues use ``feat_col="resname"`` - :type node_col: str, optional - :param allowable_feats: List containing all possible values of node type, to be converted into 1-hot node features. - Any elements in ``feat_col`` that are not found in ``allowable_feats`` will be added to an appended "unknown" bin (see :func:`atom3d.util.graph.one_of_k_encoding_unk`). - :type allowable_feats: list, optional - :param edge_dist_cutoff: Maximum distance cutoff (in Angstroms) to define an edge between two atoms, defaults to 4.5. - :type edge_dist_cutoff: float, optional - - :return: tuple containing - - - node_feats (torch.FloatTensor): Features for each node, one-hot encoded by values in ``allowable_feats``. - - - edges (torch.LongTensor): Edges in COO format - - - edge_weights (torch.LongTensor): Edge weights, defined as a function of distance between atoms given by :math:`w_{i,j} = \frac{1}{d(i,j)}`, where :math:`d(i, j)` is the Euclidean distance between node :math:`i` and node :math:`j`. - - - node_pos (torch.FloatTensor): x-y-z coordinates of each node - :rtype: Tuple - """ - - allowable_feats = atom_mapping - - try : - node_pos = torch.FloatTensor(df[['x', 'y', 'z']].to_numpy()) - kd_tree = ss.KDTree(node_pos) - edge_tuples = list(kd_tree.query_pairs(edge_dist_cutoff)) - edges = torch.LongTensor(edge_tuples).t().contiguous() - edges = to_undirected(edges) - except: - print(f"Problem with PDB Id is {item['id']}") - - node_feats = torch.FloatTensor([one_of_k_encoding_unk_indices(e-1, allowable_feats) for e in df[feat_col]]) - edge_weights = torch.FloatTensor( - [1.0 / (np.linalg.norm(node_pos[i] - node_pos[j]) + 1e-5) for i, j in edges.t()]).view(-1) - - - return node_feats, edges, edge_weights, node_pos - - -def mol_df_to_graph_for_qm(df, bonds=None, allowable_atoms=None, edge_dist_cutoff=4.5, onehot_edges=True): - """ - Converts molecule in dataframe to a graph compatible with Pytorch-Geometric - :param df: Molecule structure in dataframe format - :type mol: pandas.DataFrame - :param bonds: Molecule structure in dataframe format - :type bonds: pandas.DataFrame - :param allowable_atoms: List containing allowable atom types - :type allowable_atoms: list[str], optional - :return: Tuple containing \n - - node_feats (torch.FloatTensor): Features for each node, one-hot encoded by atom type in ``allowable_atoms``. - - edge_index (torch.LongTensor): Edges from chemical bond graph in COO format. - - edge_feats (torch.FloatTensor): Edge features given by bond type. Single = 1.0, Double = 2.0, Triple = 3.0, Aromatic = 1.5. - - node_pos (torch.FloatTensor): x-y-z coordinates of each node. 
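    Example (illustrative; assumes ``df`` carries ``x``/``y``/``z`` coordinate
    columns plus an ``element`` column of atomic numbers, and ``bonds`` rows of
    the form ``(atom_i, atom_j, bond_order)`` with orders in {1.0, 1.5, 2.0, 3.0})::

        node_feats, edge_index, edge_attr, pos = mol_df_to_graph_for_qm(df, bonds=bonds)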
- """ - if allowable_atoms is None: - allowable_atoms = ligand_atoms_mapping - node_pos = torch.FloatTensor(df[['x', 'y', 'z']].to_numpy()) - - if bonds is not None: - N = df.shape[0] - bond_mapping = {1.0: 0, 2.0: 1, 3.0: 2, 1.5: 3} - bond_data = torch.FloatTensor(bonds) - edge_tuples = torch.cat((bond_data[:, :2], torch.flip(bond_data[:, :2], dims=(1,))), dim=0) - edge_index = edge_tuples.t().long().contiguous() - - if onehot_edges: - bond_idx = list(map(lambda x: bond_mapping[x], bond_data[:,-1].tolist())) + list(map(lambda x: bond_mapping[x], bond_data[:,-1].tolist())) - edge_attr = F.one_hot(torch.tensor(bond_idx), num_classes=4).to(torch.float) - edge_index, edge_attr = coalesce(edge_index, edge_attr, N, N) - - else: - edge_attr = torch.cat((torch.FloatTensor(bond_data[:,-1]).view(-1), torch.FloatTensor(bond_data[:,-1]).view(-1)), dim=0) - else: - kd_tree = ss.KDTree(node_pos) - edge_tuples = list(kd_tree.query_pairs(edge_dist_cutoff)) - edge_index = torch.LongTensor(edge_tuples).t().contiguous() - edge_index = to_undirected(edge_index) - edge_attr = torch.FloatTensor([1.0 / (np.linalg.norm(node_pos[i] - node_pos[j]) + 1e-5) for i, j in edge_index.t()]).view(-1) - edge_attr = edge_attr.unsqueeze(1) - - node_feats = torch.FloatTensor([one_of_k_encoding_unk_indices_qm(e, allowable_atoms) for e in df['element']]) - - return node_feats, edge_index, edge_attr, node_pos - - -def one_of_k_encoding_unk_indices(x, allowable_set): - """Converts input to 1-hot encoding given a set of allowable values. Additionally maps inputs not in the allowable set to the last element.""" - one_hot_encoding = [0] * len(allowable_set) - if x in allowable_set: - one_hot_encoding[x] = 1 - else: - one_hot_encoding[-1] = 1 - return one_hot_encoding - -def one_of_k_encoding_unk_indices_qm(x, allowable_set): - """Converts input to 1-hot encoding given a set of allowable values. 
Additionally maps inputs not in the allowable set to the last element.""" - one_hot_encoding = [0] * (len(allowable_set)+1) - if x in allowable_set: - one_hot_encoding[allowable_set[x]] = 1 - else: - one_hot_encoding[-1] = 1 - return one_hot_encoding \ No newline at end of file diff --git a/spaces/MISATO-dataset/Adaptability_protein_dynamics/main.py b/spaces/MISATO-dataset/Adaptability_protein_dynamics/main.py deleted file mode 100644 index cc35546729c76d9e766dd9a9ca46f224f103fbd6..0000000000000000000000000000000000000000 --- a/spaces/MISATO-dataset/Adaptability_protein_dynamics/main.py +++ /dev/null @@ -1,277 +0,0 @@ - - -import gradio as gr -import py3Dmol -from Bio.PDB import * - -import numpy as np -from Bio.PDB import PDBParser -import pandas as pd -import torch -import os -from MDmodel import GNN_MD -import h5py -from transformMD import GNNTransformMD -import sys -import pytraj as pt -import pickle - -# JavaScript functions -resid_hover = """function(atom,viewer) {{ - if(!atom.label) {{ - atom.label = viewer.addLabel('{0}:'+atom.atom+atom.serial, - {{position: atom, backgroundColor: 'mintcream', fontColor:'black'}}); - }} -}}""" -hover_func = """ -function(atom,viewer) { - if(!atom.label) { - atom.label = viewer.addLabel(atom.interaction, - {position: atom, backgroundColor: 'black', fontColor:'white'}); - } -}""" -unhover_func = """ -function(atom,viewer) { - if(atom.label) { - viewer.removeLabel(atom.label); - delete atom.label; - } -}""" -atom_mapping = {0:'H', 1:'C', 2:'N', 3:'O', 4:'F', 5:'P', 6:'S', 7:'CL', 8:'BR', 9:'I', 10: 'UNK'} - -model = GNN_MD(11, 64) -state_dict = torch.load( - "best_weights_rep0.pt", - map_location=torch.device("cpu"), -)["model_state_dict"] -model.load_state_dict(state_dict) -model = model.to('cpu') -model.eval() - - -def run_leap(fileName, path): - leapText = """ - source leaprc.protein.ff14SB - source leaprc.water.tip3p - exp = loadpdb PATH4amb.pdb - saveamberparm exp PATHexp.top PATHexp.crd - quit - """ - with open(path+"leap.in", "w") as outLeap: - outLeap.write(leapText.replace('PATH', path)) - os.system("tleap -f "+path+"leap.in >> "+path+"leap.out") - -def convert_to_amber_format(pdbName): - fileName, path = pdbName+'.pdb', '' - os.system("pdb4amber -i "+fileName+" -p -y -o "+path+"4amb.pdb -l "+path+"pdb4amber_protein.log") - run_leap(fileName, path) - traj = pt.iterload(path+'exp.crd', top = path+'exp.top') - pt.write_traj(path+fileName, traj, overwrite= True) - print(path+fileName+' was created. 
Please always use this file for inspection because the coordinates might get translated during amber file generation and thus might vary from the input pdb file.') - return pt.iterload(path+'exp.crd', top = path+'exp.top') - -def get_maps(mapPath): - residueMap = pickle.load(open(os.path.join(mapPath,'atoms_residue_map_generate.pickle'),'rb')) - nameMap = pickle.load(open(os.path.join(mapPath,'atoms_name_map_generate.pickle'),'rb')) - typeMap = pickle.load(open(os.path.join(mapPath,'atoms_type_map_generate.pickle'),'rb')) - elementMap = pickle.load(open(os.path.join(mapPath,'map_atomType_element_numbers.pickle'),'rb')) - return residueMap, nameMap, typeMap, elementMap - -def get_residues_atomwise(residues): - atomwise = [] - for name, nAtoms in residues: - for i in range(nAtoms): - atomwise.append(name) - return atomwise - -def get_begin_atom_index(traj): - natoms = [m.n_atoms for m in traj.top.mols] - molecule_begin_atom_index = [0] - x = 0 - for i in range(len(natoms)): - x += natoms[i] - molecule_begin_atom_index.append(x) - print('molecule begin atom index', molecule_begin_atom_index, natoms) - return molecule_begin_atom_index - -def get_traj_info(traj, mapPath): - coordinates = traj.xyz - residueMap, nameMap, typeMap, elementMap = get_maps(mapPath) - types = [typeMap[a.type] for a in traj.top.atoms] - elements = [elementMap[typ] for typ in types] - atomic_numbers = [a.atomic_number for a in traj.top.atoms] - molecule_begin_atom_index = get_begin_atom_index(traj) - residues = [(residueMap[res.name], res.n_atoms) for res in traj.top.residues] - residues_atomwise = get_residues_atomwise(residues) - return coordinates[0], elements, types, atomic_numbers, residues_atomwise, molecule_begin_atom_index - -def write_h5_info(outName, struct, atoms_type, atoms_number, atoms_residue, atoms_element, molecules_begin_atom_index, atoms_coordinates_ref): - if os.path.isfile(outName): - os.remove(outName) - with h5py.File(outName, 'w') as oF: - subgroup = oF.create_group(struct) - subgroup.create_dataset('atoms_residue', data= atoms_residue, compression = "gzip", dtype='i8') - subgroup.create_dataset('molecules_begin_atom_index', data= molecules_begin_atom_index, compression = "gzip", dtype='i8') - subgroup.create_dataset('atoms_type', data= atoms_type, compression = "gzip", dtype='i8') - subgroup.create_dataset('atoms_number', data= atoms_number, compression = "gzip", dtype='i8') - subgroup.create_dataset('atoms_element', data= atoms_element, compression = "gzip", dtype='i8') - subgroup.create_dataset('atoms_coordinates_ref', data= atoms_coordinates_ref, compression = "gzip", dtype='f8') - -def preprocess(pdbid: str = None, ouputfile: str = "inference_for_md.hdf5", mask: str = "!@H=", mappath: str = "/maps/"): - traj = convert_to_amber_format(pdbid) - atoms_coordinates_ref, atoms_element, atoms_type, atoms_number, atoms_residue, molecules_begin_atom_index = get_traj_info(traj[mask], mappath) - write_h5_info(ouputfile, pdbid, atoms_type, atoms_number, atoms_residue, atoms_element, molecules_begin_atom_index, atoms_coordinates_ref) - -def get_pdb(pdb_code="", filepath=""): - try: - return filepath.name - except AttributeError as e: - if pdb_code is None or pdb_code == "": - return None - else: - os.system(f"wget -qnc https://files.rcsb.org/view/{pdb_code}.pdb") - return f"{pdb_code}.pdb" - - -def get_offset(pdb): - pdb_multiline = pdb.split("\n") - for line in pdb_multiline: - if line.startswith("ATOM"): - return int(line[22:27]) - - -def get_pdbid_from_filename(filename: str): - # Assuming the filename 
would be of the standard form 11GS.pdb - return filename.split(".")[0] - -def predict(pdb_code, pdb_file, topN): - #path_to_pdb = get_pdb(pdb_code=pdb_code, filepath=pdb_file) - - #pdb = open(path_to_pdb, "r").read() - # switch to misato env if not running from container - - pdbid = get_pdbid_from_filename(pdb_file.name) - mdh5_file = "inference_for_md.hdf5" - mappath = "/maps" - mask = "!@H=" - preprocess(pdbid=pdbid, ouputfile=mdh5_file, mask=mask, mappath=mappath) - - md_H5File = h5py.File(mdh5_file) - - column_names = ["x", "y", "z", "element"] - atoms_protein = pd.DataFrame(columns = column_names) - cutoff = md_H5File[pdbid]["molecules_begin_atom_index"][:][-1] # cutoff defines protein atoms - - atoms_protein["x"] = md_H5File[pdbid]["atoms_coordinates_ref"][:][:cutoff, 0] - atoms_protein["y"] = md_H5File[pdbid]["atoms_coordinates_ref"][:][:cutoff, 1] - atoms_protein["z"] = md_H5File[pdbid]["atoms_coordinates_ref"][:][:cutoff, 2] - - atoms_protein["element"] = md_H5File[pdbid]["atoms_element"][:][:cutoff] - - item = {} - item["scores"] = 0 - item["id"] = pdbid - item["atoms_protein"] = atoms_protein - - transform = GNNTransformMD() - data_item = transform(item) - adaptability = model(data_item) - adaptability = adaptability.detach().numpy() - - data = [] - - - for i in range(adaptability.shape[0]): - data.append([i, atom_mapping[atoms_protein.iloc[i, atoms_protein.columns.get_loc("element")] - 1], atoms_protein.iloc[i, atoms_protein.columns.get_loc("x")],atoms_protein.iloc[i, atoms_protein.columns.get_loc("y")],atoms_protein.iloc[i, atoms_protein.columns.get_loc("z")],adaptability[i]]) - - topN_ind = np.argsort(adaptability)[::-1][:topN] - - pdb = open(pdb_file.name, "r").read() - pdb2 = pdb - - view = py3Dmol.view(width=1000, height=800) - view.setBackgroundColor('white') - view.addModel(pdb, "pdb") - view.setStyle({'stick': {'colorscheme': {'prop': 'resi', 'C': '#cccccc'}},'cartoon': {'color': '#4c4e9e', 'alpha':"0.6"}}) - - #view.addModel(pdb2, "pdb2") - #view.setStyle({'cartoon': {'color': 'gray'}}) - - - # Commenting since the visualizer is not rendered - # view.addLight([0, 0, 10], [1, 1, 1], 1) # Add directional light from the z-axis - # view.setSpecular(0.5) # Adjust the specular lighting effect - # view.setAmbient(0.5) # Adjust the ambient lighting effect - - for i in range(topN): - adaptability_value = adaptability[topN_ind[i]] - color = '#a0210f' - view.addSphere({ - 'center': { - 'x': atoms_protein.iloc[topN_ind[i], atoms_protein.columns.get_loc("x")], - 'y': atoms_protein.iloc[topN_ind[i], atoms_protein.columns.get_loc("y")], - 'z': atoms_protein.iloc[topN_ind[i], atoms_protein.columns.get_loc("z")] - }, - 'radius': adaptability_value / 1.5, - 'color': color, - 'alpha': 0.75 - }) - - - view.zoomTo() - - output = view._make_html().replace("'", '"') - - x = f""" {output} """ # do not use ' in this input - - return f"""""", pd.DataFrame(data, columns=['index','element','x','y','z','Adaptability']) - -def export_csv(d): - d.to_csv("adaptabilities.csv") - return gr.File.update(value="adaptabilities.csv", visible=True) - - -callback = gr.CSVLogger() - -def run(): - with gr.Blocks() as demo: - gr.Markdown("# Protein Adaptability Prediction") - - #text_input = gr.Textbox() - #text_output = gr.Textbox() - #text_button = gr.Button("Flip") - inp = gr.Textbox(placeholder="Upload PDB file below", label="Input structure") - #inp = "" - topN = gr.Slider(value=100, - minimum=1, maximum=1000, label="Number of highest adaptability values to visualize", step=1 - ) - pdb_file = 
gr.File(label="PDB File Upload") - #with gr.Row(): - # helix = gr.ColorPicker(label="helix") - # sheet = gr.ColorPicker(label="sheet") - # loop = gr.ColorPicker(label="loop") - single_btn = gr.Button(label="Run") - with gr.Row(): - html = gr.HTML() - with gr.Row(): - Dbutton = gr.Button("Download adaptability values") - csv = gr.File(interactive=False, visible=False) - with gr.Row(): - dataframe = gr.Dataframe() - - single_btn.click(fn=predict, inputs=[inp, pdb_file, topN], outputs=[html, dataframe]) - - Dbutton.click(export_csv, dataframe, csv) - - - - - demo.launch(server_name="0.0.0.0", server_port=7860) - - -if __name__ == "__main__": - run() diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/train.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/train.py deleted file mode 100644 index 80c8d9eb593249250d223e8527c38f4a118a69d6..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/train.py +++ /dev/null @@ -1,328 +0,0 @@ -import os - -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import commons -import utils -from data_utils import (TextAudioSpeakerLoader, TextAudioSpeakerCollate, - DistributedBucketSampler) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import (generator_loss, discriminator_loss, feature_loss, kl_loss) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
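    # Spawns one worker process per visible GPU; each process runs run(rank, n_gpus, hps),
    # joins the NCCL process group and trains with DistributedDataParallel.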
- - n_gpus = torch.cuda.device_count() - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=( - n_gpus, - hps, - )) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter( - log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', - init_method='env://', - world_size=n_gpus, - rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, - num_workers=8, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, - hps.data) - eval_loader = DataLoader(eval_dataset, - num_workers=8, - shuffle=False, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn) - - net_g = SynthesizerTrn(hps.data.num_phones, - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW(net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW(net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g) - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], - [optim_g, optim_d], [scheduler_g, scheduler_d], - scaler, [train_loader, eval_loader], logger, - [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], - [optim_g, optim_d], [scheduler_g, scheduler_d], - scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, - loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, - speakers) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, 
non_blocking=True), x_lengths.cuda( - rank, non_blocking=True) - spec, spec_lengths = spec.cuda( - rank, non_blocking=True), spec_lengths.cuda(rank, - non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, ( - z, z_p, m_p, logs_p, m_q, - logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers) - - mel = spec_to_mel_torch(spec, hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), hps.data.filter_length, - hps.data.n_mel_channels, hps.data.sampling_rate, - hps.data.hop_length, hps.data.win_length, hps.data.mel_fmin, - hps.data.mel_fmax) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, - hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, - z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [ - loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl - ] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g - } - scalar_dict.update({ - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl - }) - - scalar_dict.update({ - "loss/g/{}".format(i): v - for i, v in enumerate(losses_gen) - }) - scalar_dict.update({ - "loss/d_r/{}".format(i): v - for i, v in enumerate(losses_disc_r) - }) - scalar_dict.update({ - "loss/d_g/{}".format(i): v - for i, v in enumerate(losses_disc_g) - }) - image_dict = { - "slice/mel_org": - utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy()), - "slice/mel_gen": - utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy()), - "all/mel": - utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": - utils.plot_alignment_to_numpy(attn[0, - 0].data.cpu().numpy()) - } - utils.summarize(writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, - "G_{}.pth".format(global_step))) - utils.save_checkpoint( - net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, - "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, - speakers) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - speakers = speakers.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - speakers = speakers[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, - x_lengths, - speakers, - max_len=1000) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch(spec, hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), hps.data.filter_length, - hps.data.n_mel_channels, hps.data.sampling_rate, - hps.data.hop_length, hps.data.win_length, hps.data.mel_fmin, - hps.data.mel_fmax) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = {"gen/audio": y_hat[0, :, :y_hat_lengths[0]]} - if global_step == 0: - image_dict.update( - {"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0, :, :y_lengths[0]]}) - - utils.summarize(writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/commands/audio_text.py b/spaces/MetaWabbit/Auto-GPT/autogpt/commands/audio_text.py deleted file mode 100644 index cae32d4eb78c4268bf6ef1bae3c15a399af046bf..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/commands/audio_text.py 
+++ /dev/null @@ -1,36 +0,0 @@ -import json - -import requests - -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - -cfg = Config() - - -def read_audio_from_file(audio_path): - audio_path = path_in_workspace(audio_path) - with open(audio_path, "rb") as audio_file: - audio = audio_file.read() - return read_audio(audio) - - -def read_audio(audio): - model = cfg.huggingface_audio_to_text_model - api_url = f"https://api-inference.huggingface.co/models/{model}" - api_token = cfg.huggingface_api_token - headers = {"Authorization": f"Bearer {api_token}"} - - if api_token is None: - raise ValueError( - "You need to set your Hugging Face API token in the config file." - ) - - response = requests.post( - api_url, - headers=headers, - data=audio, - ) - - text = json.loads(response.content.decode("utf-8"))["text"] - return "The audio says: " + text diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/README.md b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/README.md deleted file mode 100644 index 096b8b8e70928df95874281f846f8d76d4b03e92..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# SVTR - -> [SVTR: Scene Text Recognition with a Single Visual Model](https://arxiv.org/abs/2205.00159) - - - -## Abstract - -Dominant scene text recognition models commonly contain two building blocks, a visual model for feature extraction and a sequence model for text transcription. This hybrid architecture, although accurate, is complex and less efficient. In this study, we propose a Single Visual model for Scene Text recognition within the patch-wise image tokenization framework, which dispenses with the sequential modeling entirely. The method, termed SVTR, firstly decomposes an image text into small patches named character components. Afterward, hierarchical stages are recurrently carried out by component-level mixing, merging and/or combining. Global and local mixing blocks are devised to perceive the inter-character and intra-character patterns, leading to a multi-grained character component perception. Thus, characters are recognized by a simple linear prediction. Experimental results on both English and Chinese scene text recognition tasks demonstrate the effectiveness of SVTR. SVTR-L (Large) achieves highly competitive accuracy in English and outperforms existing methods by a large margin in Chinese, while running faster. In addition, SVTR-T (Tiny) is an effective and much smaller model, which shows appealing speed at inference. - -
    - -
    - -## Dataset - -### Train Dataset - -| trainset | instance_num | repeat_num | source | -| :-------: | :----------: | :--------: | :----: | -| SynthText | 7266686 | 1 | synth | -| Syn90k | 8919273 | 1 | synth | - -### Test Dataset - -| testset | instance_num | type | -| :-----: | :----------: | :-------: | -| IIIT5K | 3000 | regular | -| SVT | 647 | regular | -| IC13 | 1015 | regular | -| IC15 | 2077 | irregular | -| SVTP | 645 | irregular | -| CT80 | 288 | irregular | - -## Results and Models - -| Methods | | Regular Text | | | | Irregular Text | | download | -| :---------------------------------------------------------------: | :----: | :----------: | :-------: | :-: | :-------: | :------------: | :----: | :--------------------------------------------------------------------------: | -| | IIIT5K | SVT | IC13-1015 | | IC15-2077 | SVTP | CT80 | | -| [SVTR-tiny](/configs/textrecog/svtr/svtr-tiny_20e_st_mj.py) | - | - | - | | - | - | - | - | -| [SVTR-small](/configs/textrecog/svtr/svtr-small_20e_st_mj.py) | 0.8553 | 0.9026 | 0.9448 | | 0.7496 | 0.8496 | 0.8854 | [model](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-small_20e_st_mj/svtr-small_20e_st_mj-35d800d6.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-small_20e_st_mj/20230105_184454.log) | -| [SVTR-small-TTA](/configs/textrecog/svtr/svtr-small_20e_st_mj.py) | 0.8397 | 0.8964 | 0.9241 | | 0.7597 | 0.8124 | 0.8646 | | -| [SVTR-base](/configs/textrecog/svtr/svtr-base_20e_st_mj.py) | 0.8570 | 0.9181 | 0.9438 | | 0.7448 | 0.8388 | 0.9028 | [model](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-base_20e_st_mj/svtr-base_20e_st_mj-ea500101.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-base_20e_st_mj/20221227_175415.log) | -| [SVTR-base-TTA](/configs/textrecog/svtr/svtr-base_20e_st_mj.py) | 0.8517 | 0.9011 | 0.9379 | | 0.7569 | 0.8279 | 0.8819 | | -| [SVTR-large](/configs/textrecog/svtr/svtr-large_20e_st_mj.py) | - | - | - | | - | - | - | - | - -```{note} -The implementation and configuration follow the original code and paper, but there is still a gap between the reproduced results and the official ones. We appreciate any suggestions to improve its performance. 
-``` - -## Citation - -```bibtex -@inproceedings{ijcai2022p124, - title = {SVTR: Scene Text Recognition with a Single Visual Model}, - author = {Du, Yongkun and Chen, Zhineng and Jia, Caiyan and Yin, Xiaoting and Zheng, Tianlun and Li, Chenxia and Du, Yuning and Jiang, Yu-Gang}, - booktitle = {Proceedings of the Thirty-First International Joint Conference on - Artificial Intelligence, {IJCAI-22}}, - publisher = {International Joint Conferences on Artificial Intelligence Organization}, - editor = {Lud De Raedt}, - pages = {884--890}, - year = {2022}, - month = {7}, - note = {Main Track}, - doi = {10.24963/ijcai.2022/124}, - url = {https://doi.org/10.24963/ijcai.2022/124}, -} - -``` diff --git a/spaces/MultiTransformer/autogen-online/README.md b/spaces/MultiTransformer/autogen-online/README.md deleted file mode 100644 index 6a56e8e523a54aa9bd3766630bff1f19d7772f59..0000000000000000000000000000000000000000 --- a/spaces/MultiTransformer/autogen-online/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Use AutoGen Online -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: static -pinned: false -license: mit -app_file: README.md ---- - -run this locally \ No newline at end of file diff --git a/spaces/NAACL2022/GlobEnc/src/attention_flow_abstract.py b/spaces/NAACL2022/GlobEnc/src/attention_flow_abstract.py deleted file mode 100644 index 7335174adb5632457cb2912610e6d04b31ff7b18..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/GlobEnc/src/attention_flow_abstract.py +++ /dev/null @@ -1,13 +0,0 @@ -import abc -import numpy as np - - -class AttentionFlow(abc.ABC): - @abc.abstractmethod - def compute_flows(self, attentions_list, desc="", output_hidden_states=False, num_cpus=4): - raise NotImplementedError() - - def pre_process(self, att_mat): - # if att_mat.sum(axis=-1)[..., None] != 1: - # att_mat = att_mat / np.max(att_mat, axis=(1, 2), keepdims=True) - return att_mat diff --git a/spaces/NCTCMumbai/NCTC/models/official/recommendation/data_pipeline.py b/spaces/NCTCMumbai/NCTC/models/official/recommendation/data_pipeline.py deleted file mode 100644 index 1b4dd33afe25df2468cdfcbb2c146392d7bec76e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/recommendation/data_pipeline.py +++ /dev/null @@ -1,959 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Asynchronous data producer for the NCF pipeline.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import atexit -import functools -import os -import sys -import tempfile -import threading -import time -import timeit -import traceback -import typing - -import numpy as np -import six -from six.moves import queue -import tensorflow as tf -from absl import logging - -from official.recommendation import constants as rconst -from official.recommendation import movielens -from official.recommendation import popen_helper -from official.recommendation import stat_utils -from tensorflow.python.tpu.datasets import StreamingFilesDataset - - -SUMMARY_TEMPLATE = """General: -{spacer}Num users: {num_users} -{spacer}Num items: {num_items} - -Training: -{spacer}Positive count: {train_pos_ct} -{spacer}Batch size: {train_batch_size} {multiplier} -{spacer}Batch count per epoch: {train_batch_ct} - -Eval: -{spacer}Positive count: {eval_pos_ct} -{spacer}Batch size: {eval_batch_size} {multiplier} -{spacer}Batch count per epoch: {eval_batch_ct}""" - - -class DatasetManager(object): - """Helper class for handling TensorFlow specific data tasks. - - This class takes the (relatively) framework agnostic work done by the data - constructor classes and handles the TensorFlow specific portions (TFRecord - management, tf.Dataset creation, etc.). - """ - - def __init__(self, - is_training, - stream_files, - batches_per_epoch, - shard_root=None, - deterministic=False, - num_train_epochs=None): - # type: (bool, bool, int, typing.Optional[str], bool, int) -> None - """Constructs a `DatasetManager` instance. - Args: - is_training: Boolean of whether the data provided is training or - evaluation data. This determines whether to reuse the data - (if is_training=False) and the exact structure to use when storing and - yielding data. - stream_files: Boolean indicating whether data should be serialized and - written to file shards. - batches_per_epoch: The number of batches in a single epoch. - shard_root: The base directory to be used when stream_files=True. - deterministic: Forgo non-deterministic speedups. (i.e. sloppy=True) - num_train_epochs: Number of epochs to generate. If None, then each - call to `get_dataset()` increments the number of epochs requested. - """ - self._is_training = is_training - self._deterministic = deterministic - self._stream_files = stream_files - self._writers = [] - self._write_locks = [threading.RLock() for _ in - range(rconst.NUM_FILE_SHARDS)] if stream_files else [] - self._batches_per_epoch = batches_per_epoch - self._epochs_completed = 0 - self._epochs_requested = num_train_epochs if num_train_epochs else 0 - self._shard_root = shard_root - - self._result_queue = queue.Queue() - self._result_reuse = [] - - @property - def current_data_root(self): - subdir = (rconst.TRAIN_FOLDER_TEMPLATE.format(self._epochs_completed) - if self._is_training else rconst.EVAL_FOLDER) - return os.path.join(self._shard_root, subdir) - - def buffer_reached(self): - # Only applicable for training. 
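    # The producer pauses once it has built CYCLES_TO_BUFFER epochs more than
    # training has requested, so epoch construction never runs unboundedly
    # ahead of consumption.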
- return (self._epochs_completed - self._epochs_requested >= - rconst.CYCLES_TO_BUFFER and self._is_training) - - @staticmethod - def serialize(data): - """Convert NumPy arrays into a TFRecords entry.""" - - def create_int_feature(values): - return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) - - feature_dict = { - k: create_int_feature(v.astype(np.int64)) for k, v in data.items() - } - - return tf.train.Example( - features=tf.train.Features(feature=feature_dict)).SerializeToString() - - @staticmethod - def deserialize(serialized_data, batch_size=None, is_training=True): - """Convert serialized TFRecords into tensors. - - Args: - serialized_data: A tensor containing serialized records. - batch_size: The data arrives pre-batched, so batch size is needed to - deserialize the data. - is_training: Boolean, whether data to deserialize to training data - or evaluation data. - """ - - def _get_feature_map(batch_size, is_training=True): - """Returns data format of the serialized tf record file.""" - - if is_training: - return { - movielens.USER_COLUMN: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64), - movielens.ITEM_COLUMN: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64), - rconst.VALID_POINT_MASK: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64), - "labels": - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64) - } - else: - return { - movielens.USER_COLUMN: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64), - movielens.ITEM_COLUMN: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64), - rconst.DUPLICATE_MASK: - tf.io.FixedLenFeature([batch_size, 1], dtype=tf.int64) - } - - features = tf.io.parse_single_example( - serialized_data, _get_feature_map(batch_size, is_training=is_training)) - users = tf.cast(features[movielens.USER_COLUMN], rconst.USER_DTYPE) - items = tf.cast(features[movielens.ITEM_COLUMN], rconst.ITEM_DTYPE) - - if is_training: - valid_point_mask = tf.cast(features[rconst.VALID_POINT_MASK], tf.bool) - fake_dup_mask = tf.zeros_like(users) - return { - movielens.USER_COLUMN: users, - movielens.ITEM_COLUMN: items, - rconst.VALID_POINT_MASK: valid_point_mask, - rconst.TRAIN_LABEL_KEY: - tf.reshape(tf.cast(features["labels"], tf.bool), - (batch_size, 1)), - rconst.DUPLICATE_MASK: fake_dup_mask - } - else: - labels = tf.cast(tf.zeros_like(users), tf.bool) - fake_valid_pt_mask = tf.cast(tf.zeros_like(users), tf.bool) - return { - movielens.USER_COLUMN: - users, - movielens.ITEM_COLUMN: - items, - rconst.DUPLICATE_MASK: - tf.cast(features[rconst.DUPLICATE_MASK], tf.bool), - rconst.VALID_POINT_MASK: - fake_valid_pt_mask, - rconst.TRAIN_LABEL_KEY: - labels - } - - def put(self, index, data): - # type: (int, dict) -> None - """Store data for later consumption. - - Because there are several paths for storing and yielding data (queues, - lists, files) the data producer simply provides the data in a standard - format at which point the dataset manager handles storing it in the correct - form. - - Args: - index: Used to select shards when writing to files. - data: A dict of the data to be stored. This method mutates data, and - therefore expects to be the only consumer. 
- """ - if self._is_training: - mask_start_index = data.pop(rconst.MASK_START_INDEX) - batch_size = data[movielens.ITEM_COLUMN].shape[0] - data[rconst.VALID_POINT_MASK] = np.expand_dims( - np.less(np.arange(batch_size), mask_start_index), -1) - - if self._stream_files: - example_bytes = self.serialize(data) - with self._write_locks[index % rconst.NUM_FILE_SHARDS]: - self._writers[index % rconst.NUM_FILE_SHARDS].write(example_bytes) - - else: - self._result_queue.put(( - data, data.pop("labels")) if self._is_training else data) - - def start_construction(self): - if self._stream_files: - tf.io.gfile.makedirs(self.current_data_root) - template = os.path.join(self.current_data_root, rconst.SHARD_TEMPLATE) - self._writers = [tf.io.TFRecordWriter(template.format(i)) - for i in range(rconst.NUM_FILE_SHARDS)] - - def end_construction(self): - if self._stream_files: - [writer.close() for writer in self._writers] - self._writers = [] - self._result_queue.put(self.current_data_root) - - self._epochs_completed += 1 - - def data_generator(self, epochs_between_evals): - """Yields examples during local training.""" - assert not self._stream_files - assert self._is_training or epochs_between_evals == 1 - - if self._is_training: - for _ in range(self._batches_per_epoch * epochs_between_evals): - yield self._result_queue.get(timeout=300) - - else: - if self._result_reuse: - assert len(self._result_reuse) == self._batches_per_epoch - - for i in self._result_reuse: - yield i - else: - # First epoch. - for _ in range(self._batches_per_epoch * epochs_between_evals): - result = self._result_queue.get(timeout=300) - self._result_reuse.append(result) - yield result - - def increment_request_epoch(self): - self._epochs_requested += 1 - - def get_dataset(self, batch_size, epochs_between_evals): - """Construct the dataset to be used for training and eval. - - For local training, data is provided through Dataset.from_generator. For - remote training (TPUs) the data is first serialized to files and then sent - to the TPU through a StreamingFilesDataset. - - Args: - batch_size: The per-replica batch size of the dataset. - epochs_between_evals: How many epochs worth of data to yield. - (Generator mode only.) - """ - self.increment_request_epoch() - if self._stream_files: - if epochs_between_evals > 1: - raise ValueError("epochs_between_evals > 1 not supported for file " - "based dataset.") - epoch_data_dir = self._result_queue.get(timeout=300) - if not self._is_training: - self._result_queue.put(epoch_data_dir) # Eval data is reused. 
- - file_pattern = os.path.join( - epoch_data_dir, rconst.SHARD_TEMPLATE.format("*")) - dataset = StreamingFilesDataset( - files=file_pattern, worker_job=popen_helper.worker_job(), - num_parallel_reads=rconst.NUM_FILE_SHARDS, num_epochs=1, - sloppy=not self._deterministic) - map_fn = functools.partial( - self.deserialize, - batch_size=batch_size, - is_training=self._is_training) - dataset = dataset.map(map_fn, num_parallel_calls=16) - - else: - types = {movielens.USER_COLUMN: rconst.USER_DTYPE, - movielens.ITEM_COLUMN: rconst.ITEM_DTYPE} - shapes = { - movielens.USER_COLUMN: tf.TensorShape([batch_size, 1]), - movielens.ITEM_COLUMN: tf.TensorShape([batch_size, 1]) - } - - if self._is_training: - types[rconst.VALID_POINT_MASK] = np.bool - shapes[rconst.VALID_POINT_MASK] = tf.TensorShape([batch_size, 1]) - - types = (types, np.bool) - shapes = (shapes, tf.TensorShape([batch_size, 1])) - - else: - types[rconst.DUPLICATE_MASK] = np.bool - shapes[rconst.DUPLICATE_MASK] = tf.TensorShape([batch_size, 1]) - - data_generator = functools.partial( - self.data_generator, epochs_between_evals=epochs_between_evals) - dataset = tf.data.Dataset.from_generator( - generator=data_generator, output_types=types, - output_shapes=shapes) - - return dataset.prefetch(16) - - def make_input_fn(self, batch_size): - """Create an input_fn which checks for batch size consistency.""" - - def input_fn(params): - """Returns batches for training.""" - - # Estimator passes batch_size during training and eval_batch_size during - # eval. - param_batch_size = (params["batch_size"] if self._is_training else - params.get("eval_batch_size") or params["batch_size"]) - if batch_size != param_batch_size: - raise ValueError("producer batch size ({}) differs from params batch " - "size ({})".format(batch_size, param_batch_size)) - - epochs_between_evals = (params.get("epochs_between_evals", 1) - if self._is_training else 1) - return self.get_dataset(batch_size=batch_size, - epochs_between_evals=epochs_between_evals) - - return input_fn - - -class BaseDataConstructor(threading.Thread): - """Data constructor base class. - - This class manages the control flow for constructing data. 
It is not meant - to be used directly, but instead subclasses should implement the following - two methods: - - self.construct_lookup_variables - self.lookup_negative_items - - """ - - def __init__( - self, - maximum_number_epochs, # type: int - num_users, # type: int - num_items, # type: int - user_map, # type: dict - item_map, # type: dict - train_pos_users, # type: np.ndarray - train_pos_items, # type: np.ndarray - train_batch_size, # type: int - batches_per_train_step, # type: int - num_train_negatives, # type: int - eval_pos_users, # type: np.ndarray - eval_pos_items, # type: np.ndarray - eval_batch_size, # type: int - batches_per_eval_step, # type: int - stream_files, # type: bool - deterministic=False, # type: bool - epoch_dir=None, # type: str - num_train_epochs=None, # type: int - create_data_offline=False # type: bool - ): - # General constants - self._maximum_number_epochs = maximum_number_epochs - self._num_users = num_users - self._num_items = num_items - self.user_map = user_map - self.item_map = item_map - self._train_pos_users = train_pos_users - self._train_pos_items = train_pos_items - self.train_batch_size = train_batch_size - self._num_train_negatives = num_train_negatives - self._batches_per_train_step = batches_per_train_step - self._eval_pos_users = eval_pos_users - self._eval_pos_items = eval_pos_items - self.eval_batch_size = eval_batch_size - self.num_train_epochs = num_train_epochs - self.create_data_offline = create_data_offline - - # Training - if self._train_pos_users.shape != self._train_pos_items.shape: - raise ValueError( - "User positives ({}) is different from item positives ({})".format( - self._train_pos_users.shape, self._train_pos_items.shape)) - - (self._train_pos_count,) = self._train_pos_users.shape - self._elements_in_epoch = (1 + num_train_negatives) * self._train_pos_count - self.train_batches_per_epoch = self._count_batches( - self._elements_in_epoch, train_batch_size, batches_per_train_step) - - # Evaluation - if eval_batch_size % (1 + rconst.NUM_EVAL_NEGATIVES): - raise ValueError("Eval batch size {} is not divisible by {}".format( - eval_batch_size, 1 + rconst.NUM_EVAL_NEGATIVES)) - self._eval_users_per_batch = int( - eval_batch_size // (1 + rconst.NUM_EVAL_NEGATIVES)) - self._eval_elements_in_epoch = num_users * (1 + rconst.NUM_EVAL_NEGATIVES) - self.eval_batches_per_epoch = self._count_batches( - self._eval_elements_in_epoch, eval_batch_size, batches_per_eval_step) - - # Intermediate artifacts - self._current_epoch_order = np.empty(shape=(0,)) - self._shuffle_iterator = None - - self._shuffle_with_forkpool = not stream_files - if stream_files: - self._shard_root = epoch_dir or tempfile.mkdtemp(prefix="ncf_") - if not create_data_offline: - atexit.register(tf.io.gfile.rmtree, self._shard_root) - else: - self._shard_root = None - - self._train_dataset = DatasetManager(True, stream_files, - self.train_batches_per_epoch, - self._shard_root, deterministic, - num_train_epochs) - self._eval_dataset = DatasetManager(False, stream_files, - self.eval_batches_per_epoch, - self._shard_root, deterministic, - num_train_epochs) - - # Threading details - super(BaseDataConstructor, self).__init__() - self.daemon = True - self._stop_loop = False - self._fatal_exception = None - self.deterministic = deterministic - - def __str__(self): - multiplier = ("(x{} devices)".format(self._batches_per_train_step) - if self._batches_per_train_step > 1 else "") - summary = SUMMARY_TEMPLATE.format( - spacer=" ", num_users=self._num_users, num_items=self._num_items, - 
train_pos_ct=self._train_pos_count, - train_batch_size=self.train_batch_size, - train_batch_ct=self.train_batches_per_epoch, - eval_pos_ct=self._num_users, eval_batch_size=self.eval_batch_size, - eval_batch_ct=self.eval_batches_per_epoch, multiplier=multiplier) - return super(BaseDataConstructor, self).__str__() + "\n" + summary - - @staticmethod - def _count_batches(example_count, batch_size, batches_per_step): - """Determine the number of batches, rounding up to fill all devices.""" - x = (example_count + batch_size - 1) // batch_size - return (x + batches_per_step - 1) // batches_per_step * batches_per_step - - def stop_loop(self): - self._stop_loop = True - - def construct_lookup_variables(self): - """Perform any one time pre-compute work.""" - raise NotImplementedError - - def lookup_negative_items(self, **kwargs): - """Randomly sample negative items for given users.""" - raise NotImplementedError - - def _run(self): - atexit.register(self.stop_loop) - self._start_shuffle_iterator() - self.construct_lookup_variables() - self._construct_training_epoch() - self._construct_eval_epoch() - for _ in range(self._maximum_number_epochs - 1): - self._construct_training_epoch() - self.stop_loop() - - def run(self): - try: - self._run() - except Exception as e: - # The Thread base class swallows stack traces, so unfortunately it is - # necessary to catch and re-raise to get debug output - traceback.print_exc() - self._fatal_exception = e - sys.stderr.flush() - raise - - def _start_shuffle_iterator(self): - if self._shuffle_with_forkpool: - pool = popen_helper.get_forkpool(3, closing=False) - else: - pool = popen_helper.get_threadpool(1, closing=False) - atexit.register(pool.close) - args = [(self._elements_in_epoch, stat_utils.random_int32()) - for _ in range(self._maximum_number_epochs)] - imap = pool.imap if self.deterministic else pool.imap_unordered - self._shuffle_iterator = imap(stat_utils.permutation, args) - - def _get_training_batch(self, i): - """Construct a single batch of training data. - - Args: - i: The index of the batch. This is used when stream_files=True to assign - data to file shards. - """ - batch_indices = self._current_epoch_order[i * self.train_batch_size: - (i + 1) * self.train_batch_size] - (mask_start_index,) = batch_indices.shape - - batch_ind_mod = np.mod(batch_indices, self._train_pos_count) - users = self._train_pos_users[batch_ind_mod] - - negative_indices = np.greater_equal(batch_indices, self._train_pos_count) - negative_users = users[negative_indices] - - negative_items = self.lookup_negative_items(negative_users=negative_users) - - items = self._train_pos_items[batch_ind_mod] - items[negative_indices] = negative_items - - labels = np.logical_not(negative_indices) - - # Pad last partial batch - pad_length = self.train_batch_size - mask_start_index - if pad_length: - # We pad with arange rather than zeros because the network will still - # compute logits for padded examples, and padding with zeros would create - # a very "hot" embedding key which can have performance implications. 
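      # put() later converts mask_start_index into VALID_POINT_MASK, so these
      # padded rows are flagged as invalid rather than treated as real examples.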
- user_pad = np.arange(pad_length, dtype=users.dtype) % self._num_users - item_pad = np.arange(pad_length, dtype=items.dtype) % self._num_items - label_pad = np.zeros(shape=(pad_length,), dtype=labels.dtype) - users = np.concatenate([users, user_pad]) - items = np.concatenate([items, item_pad]) - labels = np.concatenate([labels, label_pad]) - - self._train_dataset.put( - i, { - movielens.USER_COLUMN: - np.reshape(users, (self.train_batch_size, 1)), - movielens.ITEM_COLUMN: - np.reshape(items, (self.train_batch_size, 1)), - rconst.MASK_START_INDEX: - np.array(mask_start_index, dtype=np.int32), - "labels": - np.reshape(labels, (self.train_batch_size, 1)), - }) - - def _wait_to_construct_train_epoch(self): - count = 0 - while self._train_dataset.buffer_reached() and not self._stop_loop: - time.sleep(0.01) - count += 1 - if count >= 100 and np.log10(count) == np.round(np.log10(count)): - logging.info( - "Waited {} times for training data to be consumed".format(count)) - - def _construct_training_epoch(self): - """Loop to construct a batch of training data.""" - if not self.create_data_offline: - self._wait_to_construct_train_epoch() - - start_time = timeit.default_timer() - if self._stop_loop: - return - - self._train_dataset.start_construction() - map_args = list(range(self.train_batches_per_epoch)) - self._current_epoch_order = next(self._shuffle_iterator) - - get_pool = (popen_helper.get_fauxpool if self.deterministic else - popen_helper.get_threadpool) - with get_pool(6) as pool: - pool.map(self._get_training_batch, map_args) - self._train_dataset.end_construction() - - logging.info("Epoch construction complete. Time: {:.1f} seconds".format( - timeit.default_timer() - start_time)) - - @staticmethod - def _assemble_eval_batch(users, positive_items, negative_items, - users_per_batch): - """Construct duplicate_mask and structure data accordingly. - - The positive items should be last so that they lose ties. However, they - should not be masked out if the true eval positive happens to be - selected as a negative. So instead, the positive is placed in the first - position, and then switched with the last element after the duplicate - mask has been computed. - - Args: - users: An array of users in a batch. (should be identical along axis 1) - positive_items: An array (batch_size x 1) of positive item indices. - negative_items: An array of negative item indices. - users_per_batch: How many users should be in the batch. This is passed - as an argument so that ncf_test.py can use this method. - - Returns: - User, item, and duplicate_mask arrays. - """ - items = np.concatenate([positive_items, negative_items], axis=1) - - # We pad the users and items here so that the duplicate mask calculation - # will include padding. The metric function relies on all padded elements - # except the positive being marked as duplicate to mask out padded points. - if users.shape[0] < users_per_batch: - pad_rows = users_per_batch - users.shape[0] - padding = np.zeros(shape=(pad_rows, users.shape[1]), dtype=np.int32) - users = np.concatenate([users, padding.astype(users.dtype)], axis=0) - items = np.concatenate([items, padding.astype(items.dtype)], axis=0) - - duplicate_mask = stat_utils.mask_duplicates(items, axis=1).astype(np.bool) - - items[:, (0, -1)] = items[:, (-1, 0)] - duplicate_mask[:, (0, -1)] = duplicate_mask[:, (-1, 0)] - - assert users.shape == items.shape == duplicate_mask.shape - return users, items, duplicate_mask - - def _get_eval_batch(self, i): - """Construct a single batch of evaluation data. 
- - Args: - i: The index of the batch. - """ - low_index = i * self._eval_users_per_batch - high_index = (i + 1) * self._eval_users_per_batch - users = np.repeat(self._eval_pos_users[low_index:high_index, np.newaxis], - 1 + rconst.NUM_EVAL_NEGATIVES, axis=1) - positive_items = self._eval_pos_items[low_index:high_index, np.newaxis] - negative_items = (self.lookup_negative_items(negative_users=users[:, :-1]) - .reshape(-1, rconst.NUM_EVAL_NEGATIVES)) - - users, items, duplicate_mask = self._assemble_eval_batch( - users, positive_items, negative_items, self._eval_users_per_batch) - - self._eval_dataset.put( - i, { - movielens.USER_COLUMN: - np.reshape(users.flatten(), (self.eval_batch_size, 1)), - movielens.ITEM_COLUMN: - np.reshape(items.flatten(), (self.eval_batch_size, 1)), - rconst.DUPLICATE_MASK: - np.reshape(duplicate_mask.flatten(), (self.eval_batch_size, 1)), - }) - - def _construct_eval_epoch(self): - """Loop to construct data for evaluation.""" - if self._stop_loop: - return - - start_time = timeit.default_timer() - - self._eval_dataset.start_construction() - map_args = [i for i in range(self.eval_batches_per_epoch)] - - get_pool = (popen_helper.get_fauxpool if self.deterministic else - popen_helper.get_threadpool) - with get_pool(6) as pool: - pool.map(self._get_eval_batch, map_args) - self._eval_dataset.end_construction() - - logging.info("Eval construction complete. Time: {:.1f} seconds".format( - timeit.default_timer() - start_time)) - - def make_input_fn(self, is_training): - # It isn't feasible to provide a foolproof check, so this is designed to - # catch most failures rather than provide an exhaustive guard. - if self._fatal_exception is not None: - raise ValueError("Fatal exception in the data production loop: {}" - .format(self._fatal_exception)) - - return ( - self._train_dataset.make_input_fn(self.train_batch_size) if is_training - else self._eval_dataset.make_input_fn(self.eval_batch_size)) - - def increment_request_epoch(self): - self._train_dataset.increment_request_epoch() - - -class DummyConstructor(threading.Thread): - """Class for running with synthetic data.""" - - def __init__(self, *args, **kwargs): - super(DummyConstructor, self).__init__(*args, **kwargs) - self.train_batches_per_epoch = rconst.SYNTHETIC_BATCHES_PER_EPOCH - self.eval_batches_per_epoch = rconst.SYNTHETIC_BATCHES_PER_EPOCH - - def run(self): - pass - - def stop_loop(self): - pass - - def increment_request_epoch(self): - pass - - @staticmethod - def make_input_fn(is_training): - """Construct training input_fn that uses synthetic data.""" - - def input_fn(params): - """Returns dummy input batches for training.""" - - # Estimator passes batch_size during training and eval_batch_size during - # eval. 
- batch_size = (params["batch_size"] if is_training else - params.get("eval_batch_size") or params["batch_size"]) - num_users = params["num_users"] - num_items = params["num_items"] - - users = tf.random.uniform([batch_size, 1], - dtype=tf.int32, - minval=0, - maxval=num_users) - items = tf.random.uniform([batch_size, 1], - dtype=tf.int32, - minval=0, - maxval=num_items) - - if is_training: - valid_point_mask = tf.cast( - tf.random.uniform([batch_size, 1], - dtype=tf.int32, - minval=0, - maxval=2), tf.bool) - labels = tf.cast( - tf.random.uniform([batch_size, 1], - dtype=tf.int32, - minval=0, - maxval=2), tf.bool) - data = { - movielens.USER_COLUMN: users, - movielens.ITEM_COLUMN: items, - rconst.VALID_POINT_MASK: valid_point_mask, - }, labels - else: - dupe_mask = tf.cast( - tf.random.uniform([batch_size, 1], - dtype=tf.int32, - minval=0, - maxval=2), tf.bool) - data = { - movielens.USER_COLUMN: users, - movielens.ITEM_COLUMN: items, - rconst.DUPLICATE_MASK: dupe_mask, - } - - dataset = tf.data.Dataset.from_tensors(data).repeat( - rconst.SYNTHETIC_BATCHES_PER_EPOCH * params["batches_per_step"]) - dataset = dataset.prefetch(32) - return dataset - - return input_fn - - -class MaterializedDataConstructor(BaseDataConstructor): - """Materialize a table of negative examples for fast negative generation. - - This class creates a table (num_users x num_items) containing all of the - negative examples for each user. This table is conceptually ragged; that is to - say the items dimension will have a number of unused elements at the end equal - to the number of positive elements for a given user. For instance: - - num_users = 3 - num_items = 5 - positives = [[1, 3], [0], [1, 2, 3, 4]] - - will generate a negative table: - [ - [0 2 4 int32max int32max], - [1 2 3 4 int32max], - [0 int32max int32max int32max int32max], - ] - - and a vector of per-user negative counts, which in this case would be: - [3, 4, 1] - - When sampling negatives, integers are (nearly) uniformly selected from the - range [0, per_user_neg_count[user]) which gives a column_index, at which - point the negative can be selected as: - negative_table[user, column_index] - - This technique will not scale; however MovieLens is small enough that even - a pre-compute which is quadratic in problem size will still fit in memory. A - more scalable lookup method is in the works. - """ - def __init__(self, *args, **kwargs): - super(MaterializedDataConstructor, self).__init__(*args, **kwargs) - self._negative_table = None - self._per_user_neg_count = None - - def construct_lookup_variables(self): - # Materialize negatives for fast lookup sampling. - start_time = timeit.default_timer() - inner_bounds = np.argwhere(self._train_pos_users[1:] - - self._train_pos_users[:-1])[:, 0] + 1 - (upper_bound,) = self._train_pos_users.shape - index_bounds = [0] + inner_bounds.tolist() + [upper_bound] - self._negative_table = np.zeros(shape=(self._num_users, self._num_items), - dtype=rconst.ITEM_DTYPE) - - # Set the table to the max value to make sure the embedding lookup will fail - # if we go out of bounds, rather than just overloading item zero. - self._negative_table += np.iinfo(rconst.ITEM_DTYPE).max - assert self._num_items < np.iinfo(rconst.ITEM_DTYPE).max - - # Reuse arange during generation. np.delete will make a copy. - full_set = np.arange(self._num_items, dtype=rconst.ITEM_DTYPE) - - self._per_user_neg_count = np.zeros( - shape=(self._num_users,), dtype=np.int32) - - # Threading does not improve this loop. 
For some reason, the np.delete - # call does not parallelize well. Multiprocessing incurs too much - # serialization overhead to be worthwhile. - for i in range(self._num_users): - positives = self._train_pos_items[index_bounds[i]:index_bounds[i+1]] - negatives = np.delete(full_set, positives) - self._per_user_neg_count[i] = self._num_items - positives.shape[0] - self._negative_table[i, :self._per_user_neg_count[i]] = negatives - - logging.info("Negative sample table built. Time: {:.1f} seconds".format( - timeit.default_timer() - start_time)) - - def lookup_negative_items(self, negative_users, **kwargs): - negative_item_choice = stat_utils.very_slightly_biased_randint( - self._per_user_neg_count[negative_users]) - return self._negative_table[negative_users, negative_item_choice] - - -class BisectionDataConstructor(BaseDataConstructor): - """Use bisection to index within positive examples. - - This class tallies the number of negative items which appear before each - positive item for a user. This means that in order to select the ith negative - item for a user, it only needs to determine which two positive items bound - it at which point the item id for the ith negative is a simply algebraic - expression. - """ - def __init__(self, *args, **kwargs): - super(BisectionDataConstructor, self).__init__(*args, **kwargs) - self.index_bounds = None - self._sorted_train_pos_items = None - self._total_negatives = None - - def _index_segment(self, user): - lower, upper = self.index_bounds[user:user+2] - items = self._sorted_train_pos_items[lower:upper] - - negatives_since_last_positive = np.concatenate( - [items[0][np.newaxis], items[1:] - items[:-1] - 1]) - - return np.cumsum(negatives_since_last_positive) - - def construct_lookup_variables(self): - start_time = timeit.default_timer() - inner_bounds = np.argwhere(self._train_pos_users[1:] - - self._train_pos_users[:-1])[:, 0] + 1 - (upper_bound,) = self._train_pos_users.shape - self.index_bounds = np.array([0] + inner_bounds.tolist() + [upper_bound]) - - # Later logic will assume that the users are in sequential ascending order. - assert np.array_equal(self._train_pos_users[self.index_bounds[:-1]], - np.arange(self._num_users)) - - self._sorted_train_pos_items = self._train_pos_items.copy() - - for i in range(self._num_users): - lower, upper = self.index_bounds[i:i+2] - self._sorted_train_pos_items[lower:upper].sort() - - self._total_negatives = np.concatenate([ - self._index_segment(i) for i in range(self._num_users)]) - - logging.info("Negative total vector built. Time: {:.1f} seconds".format( - timeit.default_timer() - start_time)) - - def lookup_negative_items(self, negative_users, **kwargs): - output = np.zeros(shape=negative_users.shape, dtype=rconst.ITEM_DTYPE) - 1 - - left_index = self.index_bounds[negative_users] - right_index = self.index_bounds[negative_users + 1] - 1 - - num_positives = right_index - left_index + 1 - num_negatives = self._num_items - num_positives - neg_item_choice = stat_utils.very_slightly_biased_randint(num_negatives) - - # Shortcuts: - # For points where the negative is greater than or equal to the tally before - # the last positive point there is no need to bisect. Instead the item id - # corresponding to the negative item choice is simply: - # last_postive_index + 1 + (neg_choice - last_negative_tally) - # Similarly, if the selection is less than the tally at the first positive - # then the item_id is simply the selection. 
- # - # Because MovieLens organizes popular movies into low integers (which is - # preserved through the preprocessing), the first shortcut is very - # efficient, allowing ~60% of samples to bypass the bisection. For the same - # reason, the second shortcut is rarely triggered (<0.02%) and is therefore - # not worth implementing. - use_shortcut = neg_item_choice >= self._total_negatives[right_index] - output[use_shortcut] = ( - self._sorted_train_pos_items[right_index] + 1 + - (neg_item_choice - self._total_negatives[right_index]) - )[use_shortcut] - - if np.all(use_shortcut): - # The bisection code is ill-posed when there are no elements. - return output - - not_use_shortcut = np.logical_not(use_shortcut) - left_index = left_index[not_use_shortcut] - right_index = right_index[not_use_shortcut] - neg_item_choice = neg_item_choice[not_use_shortcut] - - num_loops = np.max( - np.ceil(np.log2(num_positives[not_use_shortcut])).astype(np.int32)) - - for i in range(num_loops): - mid_index = (left_index + right_index) // 2 - right_criteria = self._total_negatives[mid_index] > neg_item_choice - left_criteria = np.logical_not(right_criteria) - - right_index[right_criteria] = mid_index[right_criteria] - left_index[left_criteria] = mid_index[left_criteria] - - # Expected state after bisection pass: - # The right index is the smallest index whose tally is greater than the - # negative item choice index. - - assert np.all((right_index - left_index) <= 1) - - output[not_use_shortcut] = ( - self._sorted_train_pos_items[right_index] - - (self._total_negatives[right_index] - neg_item_choice) - ) - - assert np.all(output >= 0) - - return output - - -def get_constructor(name): - if name == "bisection": - return BisectionDataConstructor - if name == "materialized": - return MaterializedDataConstructor - raise ValueError("Unrecognized constructor: {}".format(name)) diff --git a/spaces/NMEX/vits-uma-genshin-honkai/README.md b/spaces/NMEX/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 2fd2870bef9c579ab20b33fdd09aea238aeb1f1d..0000000000000000000000000000000000000000 --- a/spaces/NMEX/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: sayashi/vits-uma-genshin-honkai ---- diff --git a/spaces/NeuralInternet/Text-Generation_Playground/server.py b/spaces/NeuralInternet/Text-Generation_Playground/server.py deleted file mode 100644 index a7f24f44078bbe77ca06519669aaf53c0b3e4dcb..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Text-Generation_Playground/server.py +++ /dev/null @@ -1,382 +0,0 @@ -import gc -import io -import json -import re -import sys -import time -import zipfile -from pathlib import Path - -import gradio as gr -import torch - -import modules.chat as chat -import modules.extensions as extensions_module -import modules.shared as shared -import modules.ui as ui -from modules.html_generator import generate_chat_html -from modules.models import load_model, load_soft_prompt -from modules.text_generation import generate_reply - -# Loading custom settings -settings_file = None -if shared.args.settings is not None and Path(shared.args.settings).exists(): - settings_file = Path(shared.args.settings) -elif Path('settings.json').exists(): - settings_file = Path('settings.json') -if settings_file is not None: - print(f"Loading settings from {settings_file}...") - new_settings = 
json.loads(open(settings_file, 'r').read()) - for item in new_settings: - shared.settings[item] = new_settings[item] - -def get_available_models(): - if shared.args.flexgen: - return sorted([re.sub('-np$', '', item.name) for item in list(Path('models/').glob('*')) if item.name.endswith('-np')], key=str.lower) - else: - return sorted([item.name for item in list(Path('models/').glob('*')) if not item.name.endswith(('.txt', '-np', '.pt'))], key=str.lower) - -def get_available_presets(): - return sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('presets').glob('*.txt'))), key=str.lower) - -def get_available_characters(): - return ['None'] + sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('characters').glob('*.json'))), key=str.lower) - -def get_available_extensions(): - return sorted(set(map(lambda x : x.parts[1], Path('extensions').glob('*/script.py'))), key=str.lower) - -def get_available_softprompts(): - return ['None'] + sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('softprompts').glob('*.zip'))), key=str.lower) - -def load_model_wrapper(selected_model): - if selected_model != shared.model_name: - shared.model_name = selected_model - shared.model = shared.tokenizer = None - if not shared.args.cpu: - gc.collect() - torch.cuda.empty_cache() - shared.model, shared.tokenizer = load_model(shared.model_name) - - return selected_model - -def load_preset_values(preset_menu, return_dict=False): - generate_params = { - 'do_sample': True, - 'temperature': 1, - 'top_p': 1, - 'typical_p': 1, - 'repetition_penalty': 1, - 'top_k': 50, - 'num_beams': 1, - 'penalty_alpha': 0, - 'min_length': 0, - 'length_penalty': 1, - 'no_repeat_ngram_size': 0, - 'early_stopping': False, - } - with open(Path(f'presets/{preset_menu}.txt'), 'r') as infile: - preset = infile.read() - for i in preset.splitlines(): - i = i.rstrip(',').strip().split('=') - if len(i) == 2 and i[0].strip() != 'tokens': - generate_params[i[0].strip()] = eval(i[1].strip()) - - generate_params['temperature'] = min(1.99, generate_params['temperature']) - - if return_dict: - return generate_params - else: - return generate_params['do_sample'], generate_params['temperature'], generate_params['top_p'], generate_params['typical_p'], generate_params['repetition_penalty'], generate_params['top_k'], generate_params['min_length'], generate_params['no_repeat_ngram_size'], generate_params['num_beams'], generate_params['penalty_alpha'], generate_params['length_penalty'], generate_params['early_stopping'] - -def upload_soft_prompt(file): - with zipfile.ZipFile(io.BytesIO(file)) as zf: - zf.extract('meta.json') - j = json.loads(open('meta.json', 'r').read()) - name = j['name'] - Path('meta.json').unlink() - - with open(Path(f'softprompts/{name}.zip'), 'wb') as f: - f.write(file) - - return name - -def create_settings_menus(default_preset): - generate_params = load_preset_values(default_preset if not shared.args.flexgen else 'Naive', return_dict=True) - - with gr.Row(): - with gr.Column(): - with gr.Row(): - shared.gradio['model_menu'] = gr.Dropdown(choices=available_models, value=shared.model_name, label='Model') - ui.create_refresh_button(shared.gradio['model_menu'], lambda : None, lambda : {'choices': get_available_models()}, 'refresh-button') - with gr.Column(): - with gr.Row(): - shared.gradio['preset_menu'] = gr.Dropdown(choices=available_presets, value=default_preset if not shared.args.flexgen else 'Naive', label='Generation parameters preset') - 
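                # The refresh button re-globs the presets directory, so newly added
                # .txt presets appear in the dropdown without restarting the UI.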
ui.create_refresh_button(shared.gradio['preset_menu'], lambda : None, lambda : {'choices': get_available_presets()}, 'refresh-button') - - with gr.Accordion('Custom generation parameters', open=False, elem_id='accordion'): - with gr.Row(): - with gr.Column(): - shared.gradio['temperature'] = gr.Slider(0.01, 1.99, value=generate_params['temperature'], step=0.01, label='temperature') - shared.gradio['repetition_penalty'] = gr.Slider(1.0, 2.99, value=generate_params['repetition_penalty'],step=0.01,label='repetition_penalty') - shared.gradio['top_k'] = gr.Slider(0,200,value=generate_params['top_k'],step=1,label='top_k') - shared.gradio['top_p'] = gr.Slider(0.0,1.0,value=generate_params['top_p'],step=0.01,label='top_p') - with gr.Column(): - shared.gradio['do_sample'] = gr.Checkbox(value=generate_params['do_sample'], label='do_sample') - shared.gradio['typical_p'] = gr.Slider(0.0,1.0,value=generate_params['typical_p'],step=0.01,label='typical_p') - shared.gradio['no_repeat_ngram_size'] = gr.Slider(0, 20, step=1, value=generate_params['no_repeat_ngram_size'], label='no_repeat_ngram_size') - shared.gradio['min_length'] = gr.Slider(0, 2000, step=1, value=generate_params['min_length'] if shared.args.no_stream else 0, label='min_length', interactive=shared.args.no_stream) - - gr.Markdown('Contrastive search:') - shared.gradio['penalty_alpha'] = gr.Slider(0, 5, value=generate_params['penalty_alpha'], label='penalty_alpha') - - gr.Markdown('Beam search (uses a lot of VRAM):') - with gr.Row(): - with gr.Column(): - shared.gradio['num_beams'] = gr.Slider(1, 20, step=1, value=generate_params['num_beams'], label='num_beams') - with gr.Column(): - shared.gradio['length_penalty'] = gr.Slider(-5, 5, value=generate_params['length_penalty'], label='length_penalty') - shared.gradio['early_stopping'] = gr.Checkbox(value=generate_params['early_stopping'], label='early_stopping') - - with gr.Accordion('Soft prompt', open=False, elem_id='accordion'): - with gr.Row(): - shared.gradio['softprompts_menu'] = gr.Dropdown(choices=available_softprompts, value='None', label='Soft prompt') - ui.create_refresh_button(shared.gradio['softprompts_menu'], lambda : None, lambda : {'choices': get_available_softprompts()}, 'refresh-button') - - gr.Markdown('Upload a soft prompt (.zip format):') - with gr.Row(): - shared.gradio['upload_softprompt'] = gr.File(type='binary', file_types=['.zip']) - - shared.gradio['model_menu'].change(load_model_wrapper, [shared.gradio['model_menu']], [shared.gradio['model_menu']], show_progress=True) - shared.gradio['preset_menu'].change(load_preset_values, [shared.gradio['preset_menu']], [shared.gradio['do_sample'], shared.gradio['temperature'], shared.gradio['top_p'], shared.gradio['typical_p'], shared.gradio['repetition_penalty'], shared.gradio['top_k'], shared.gradio['min_length'], shared.gradio['no_repeat_ngram_size'], shared.gradio['num_beams'], shared.gradio['penalty_alpha'], shared.gradio['length_penalty'], shared.gradio['early_stopping']]) - shared.gradio['softprompts_menu'].change(load_soft_prompt, [shared.gradio['softprompts_menu']], [shared.gradio['softprompts_menu']], show_progress=True) - shared.gradio['upload_softprompt'].upload(upload_soft_prompt, [shared.gradio['upload_softprompt']], [shared.gradio['softprompts_menu']]) - -available_models = get_available_models() -available_presets = get_available_presets() -available_characters = get_available_characters() -available_softprompts = get_available_softprompts() - -# Default extensions -extensions_module.available_extensions = 
get_available_extensions() -if shared.args.chat or shared.args.cai_chat: - for extension in shared.settings['chat_default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) -else: - for extension in shared.settings['default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) -if shared.args.extensions is not None and len(shared.args.extensions) > 0: - extensions_module.load_extensions() - -# Default model -if shared.args.model is not None: - shared.model_name = shared.args.model -else: - if len(available_models) == 0: - print('No models are available! Please download at least one.') - sys.exit(0) - elif len(available_models) == 1: - i = 0 - else: - print('The following models are available:\n') - for i, model in enumerate(available_models): - print(f'{i+1}. {model}') - print(f'\nWhich one do you want to load? 1-{len(available_models)}\n') - i = int(input())-1 - print() - shared.model_name = available_models[i] -shared.model, shared.tokenizer = load_model(shared.model_name) - -# Default UI settings -gen_events = [] -default_preset = shared.settings['presets'][next((k for k in shared.settings['presets'] if re.match(k.lower(), shared.model_name.lower())), 'default')] -default_text = shared.settings['prompts'][next((k for k in shared.settings['prompts'] if re.match(k.lower(), shared.model_name.lower())), 'default')] -title ='Text Generation Playground' -description = '\n\n# Text Generation Playground \nGenerate text using Large Language Models.\n' -suffix = '_pygmalion' if 'pygmalion' in shared.model_name.lower() else '' - -if shared.args.chat or shared.args.cai_chat: - with gr.Blocks(css=ui.css+ui.chat_css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.HTML('''
    Original github repo
    -

    For faster inference without waiting in queue, you may duplicate the space.

    -(👇 Scroll down to see the interface 👀)''') - if shared.args.cai_chat: - shared.gradio['display'] = gr.HTML(value=generate_chat_html(shared.history['visible'], shared.settings[f'name1{suffix}'], shared.settings[f'name2{suffix}'], shared.character)) - else: - shared.gradio['display'] = gr.Chatbot(value=shared.history['visible']).style(color_map=("#326efd", "#212528")) - shared.gradio['textbox'] = gr.Textbox(label='Input') - with gr.Row(): - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['Generate'] = gr.Button('Generate') - with gr.Row(): - shared.gradio['Impersonate'] = gr.Button('Impersonate') - shared.gradio['Regenerate'] = gr.Button('Regenerate') - with gr.Row(): - shared.gradio['Copy last reply'] = gr.Button('Copy last reply') - shared.gradio['Replace last reply'] = gr.Button('Replace last reply') - shared.gradio['Remove last'] = gr.Button('Remove last') - - shared.gradio['Clear history'] = gr.Button('Clear history') - shared.gradio['Clear history-confirm'] = gr.Button('Confirm', variant="stop", visible=False) - shared.gradio['Clear history-cancel'] = gr.Button('Cancel', visible=False) - with gr.Tab('Chat settings'): - shared.gradio['name1'] = gr.Textbox(value=shared.settings[f'name1{suffix}'], lines=1, label='Your name') - shared.gradio['name2'] = gr.Textbox(value=shared.settings[f'name2{suffix}'], lines=1, label='Bot\'s name') - shared.gradio['context'] = gr.Textbox(value=shared.settings[f'context{suffix}'], lines=5, label='Context') - with gr.Row(): - shared.gradio['character_menu'] = gr.Dropdown(choices=available_characters, value='None', label='Character', elem_id='character-menu') - ui.create_refresh_button(shared.gradio['character_menu'], lambda : None, lambda : {'choices': get_available_characters()}, 'refresh-button') - - with gr.Row(): - shared.gradio['check'] = gr.Checkbox(value=shared.settings[f'stop_at_newline{suffix}'], label='Stop generating at new line character?') - with gr.Row(): - with gr.Tab('Chat history'): - with gr.Row(): - with gr.Column(): - gr.Markdown('Upload') - shared.gradio['upload_chat_history'] = gr.File(type='binary', file_types=['.json', '.txt']) - with gr.Column(): - gr.Markdown('Download') - shared.gradio['download'] = gr.File() - shared.gradio['download_button'] = gr.Button(value='Click me') - with gr.Tab('Upload character'): - with gr.Row(): - with gr.Column(): - gr.Markdown('1. Select the JSON file') - shared.gradio['upload_json'] = gr.File(type='binary', file_types=['.json']) - with gr.Column(): - gr.Markdown('2. 
Select your character\'s profile picture (optional)') - shared.gradio['upload_img_bot'] = gr.File(type='binary', file_types=['image']) - shared.gradio['Upload character'] = gr.Button(value='Submit') - with gr.Tab('Upload your profile picture'): - shared.gradio['upload_img_me'] = gr.File(type='binary', file_types=['image']) - with gr.Tab('Upload TavernAI Character Card'): - shared.gradio['upload_img_tavern'] = gr.File(type='binary', file_types=['image']) - - with gr.Tab('Generation settings'): - with gr.Row(): - with gr.Column(): - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - with gr.Column(): - shared.gradio['chat_prompt_size_slider'] = gr.Slider(minimum=shared.settings['chat_prompt_size_min'], maximum=shared.settings['chat_prompt_size_max'], step=1, label='Maximum prompt size in tokens', value=shared.settings['chat_prompt_size']) - shared.gradio['chat_generation_attempts'] = gr.Slider(minimum=shared.settings['chat_generation_attempts_min'], maximum=shared.settings['chat_generation_attempts_max'], value=shared.settings['chat_generation_attempts'], step=1, label='Generation attempts (for longer replies)') - create_settings_menus(default_preset) - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping', 'name1', 'name2', 'context', 'check', 'chat_prompt_size_slider', 'chat_generation_attempts']] - if shared.args.extensions is not None: - with gr.Tab('Extensions'): - extensions_module.create_extensions_block() - - function_call = 'chat.cai_chatbot_wrapper' if shared.args.cai_chat else 'chat.chatbot_wrapper' - - gen_events.append(shared.gradio['Generate'].click(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Regenerate'].click(chat.regenerate_wrapper, shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Impersonate'].click(chat.impersonate_wrapper, shared.input_params, shared.gradio['textbox'], show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(chat.stop_everything_event, [], [], cancels=gen_events) - - shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, [], shared.gradio['textbox'], show_progress=shared.args.no_stream) - shared.gradio['Replace last reply'].click(chat.replace_last_reply, [shared.gradio['textbox'], shared.gradio['name1'], shared.gradio['name2']], shared.gradio['display'], show_progress=shared.args.no_stream) - - # Clear history with confirmation - clear_arr = [shared.gradio[k] for k in ['Clear history-confirm', 'Clear history', 'Clear history-cancel']] - shared.gradio['Clear history'].click(lambda :[gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, clear_arr) - shared.gradio['Clear history-confirm'].click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - shared.gradio['Clear history-confirm'].click(chat.clear_chat_log, [shared.gradio['name1'], 
shared.gradio['name2']], shared.gradio['display']) - shared.gradio['Clear history-cancel'].click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - - shared.gradio['Remove last'].click(chat.remove_last_message, [shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['display'], shared.gradio['textbox']], show_progress=False) - shared.gradio['download_button'].click(chat.save_history, inputs=[], outputs=[shared.gradio['download']]) - shared.gradio['Upload character'].click(chat.upload_character, [shared.gradio['upload_json'], shared.gradio['upload_img_bot']], [shared.gradio['character_menu']]) - - # Clearing stuff and saving the history - for i in ['Generate', 'Regenerate', 'Replace last reply']: - shared.gradio[i].click(lambda x: '', shared.gradio['textbox'], shared.gradio['textbox'], show_progress=False) - shared.gradio[i].click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - shared.gradio['Clear history-confirm'].click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - shared.gradio['textbox'].submit(lambda x: '', shared.gradio['textbox'], shared.gradio['textbox'], show_progress=False) - shared.gradio['textbox'].submit(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - - shared.gradio['character_menu'].change(chat.load_character, [shared.gradio['character_menu'], shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['name2'], shared.gradio['context'], shared.gradio['display']]) - shared.gradio['upload_chat_history'].upload(chat.load_history, [shared.gradio['upload_chat_history'], shared.gradio['name1'], shared.gradio['name2']], []) - shared.gradio['upload_img_tavern'].upload(chat.upload_tavern_character, [shared.gradio['upload_img_tavern'], shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['character_menu']]) - shared.gradio['upload_img_me'].upload(chat.upload_your_profile_picture, [shared.gradio['upload_img_me']], []) - - reload_func = chat.redraw_html if shared.args.cai_chat else lambda : shared.history['visible'] - reload_inputs = [shared.gradio['name1'], shared.gradio['name2']] if shared.args.cai_chat else [] - shared.gradio['upload_chat_history'].upload(reload_func, reload_inputs, [shared.gradio['display']]) - shared.gradio['upload_img_me'].upload(reload_func, reload_inputs, [shared.gradio['display']]) - shared.gradio['Stop'].click(reload_func, reload_inputs, [shared.gradio['display']]) - - shared.gradio['interface'].load(lambda : chat.load_default_history(shared.settings[f'name1{suffix}'], shared.settings[f'name2{suffix}']), None, None) - shared.gradio['interface'].load(reload_func, reload_inputs, [shared.gradio['display']], show_progress=True) - -elif shared.args.notebook: - with gr.Blocks(css=ui.css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.Markdown(description) - with gr.Tab('Raw'): - shared.gradio['textbox'] = gr.Textbox(value=default_text, lines=23) - with gr.Tab('Markdown'): - shared.gradio['markdown'] = gr.Markdown() - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - shared.gradio['Generate'] = gr.Button('Generate') - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - - create_settings_menus(default_preset) - if shared.args.extensions is not None: - 
extensions_module.create_extensions_block() - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping']] - output_params = [shared.gradio[k] for k in ['textbox', 'markdown', 'html']] - gen_events.append(shared.gradio['Generate'].click(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(None, None, None, cancels=gen_events) - -else: - with gr.Blocks(css=ui.css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - shared.gradio['textbox'] = gr.Textbox(value=default_text, lines=15, label='Input') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - shared.gradio['Generate'] = gr.Button('Generate') - with gr.Row(): - with gr.Column(): - shared.gradio['Continue'] = gr.Button('Continue') - with gr.Column(): - shared.gradio['Stop'] = gr.Button('Stop') - - create_settings_menus(default_preset) - if shared.args.extensions is not None: - extensions_module.create_extensions_block() - - with gr.Column(): - with gr.Tab('Raw'): - shared.gradio['output_textbox'] = gr.Textbox(lines=15, label='Output') - with gr.Tab('Markdown'): - shared.gradio['markdown'] = gr.Markdown() - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping']] - output_params = [shared.gradio[k] for k in ['output_textbox', 'markdown', 'html']] - gen_events.append(shared.gradio['Generate'].click(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Continue'].click(generate_reply, [shared.gradio['output_textbox']] + shared.input_params[1:], output_params, show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(None, None, None, cancels=gen_events) - -shared.gradio['interface'].queue() -if shared.args.listen: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_name='0.0.0.0', server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch) -else: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch) - -# I think that I will need this later -while True: - time.sleep(0.5) diff --git a/spaces/Norod78/PumpkinHeads/face_detection.py b/spaces/Norod78/PumpkinHeads/face_detection.py deleted file mode 100644 index 3401974698c6ba9bf38bc30c97854196e510d6a4..0000000000000000000000000000000000000000 --- a/spaces/Norod78/PumpkinHeads/face_detection.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) 2021 Justin Pinkney - -import 
dlib -import numpy as np -import os -from PIL import Image -from PIL import ImageOps -from scipy.ndimage import gaussian_filter -import cv2 - - -MODEL_PATH = "shape_predictor_5_face_landmarks.dat" -detector = dlib.get_frontal_face_detector() - - -def align(image_in, face_index=0, output_size=256): - try: - image_in = ImageOps.exif_transpose(image_in) - except: - print("exif problem, not rotating") - - landmarks = list(get_landmarks(image_in)) - n_faces = len(landmarks) - face_index = min(n_faces-1, face_index) - if n_faces == 0: - aligned_image = image_in - quad = None - else: - aligned_image, quad = image_align(image_in, landmarks[face_index], output_size=output_size) - - return aligned_image, n_faces, quad - - -def composite_images(quad, img, output): - """Composite an image into and output canvas according to transformed co-ords""" - output = output.convert("RGBA") - img = img.convert("RGBA") - input_size = img.size - src = np.array(((0, 0), (0, input_size[1]), input_size, (input_size[0], 0)), dtype=np.float32) - dst = np.float32(quad) - mtx = cv2.getPerspectiveTransform(dst, src) - img = img.transform(output.size, Image.PERSPECTIVE, mtx.flatten(), Image.BILINEAR) - output.alpha_composite(img) - - return output.convert("RGB") - - -def get_landmarks(image): - """Get landmarks from PIL image""" - shape_predictor = dlib.shape_predictor(MODEL_PATH) - - max_size = max(image.size) - reduction_scale = int(max_size/512) - if reduction_scale == 0: - reduction_scale = 1 - downscaled = image.reduce(reduction_scale) - img = np.array(downscaled) - detections = detector(img, 0) - - for detection in detections: - try: - face_landmarks = [(reduction_scale*item.x, reduction_scale*item.y) for item in shape_predictor(img, detection).parts()] - yield face_landmarks - except Exception as e: - print(e) - - -def image_align(src_img, face_landmarks, output_size=512, transform_size=2048, enable_padding=True, x_scale=1, y_scale=1, em_scale=0.1, alpha=False): - # Align function modified from ffhq-dataset - # See https://github.com/NVlabs/ffhq-dataset for license - - lm = np.array(face_landmarks) - lm_eye_left = lm[2:3] # left-clockwise - lm_eye_right = lm[0:1] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = 0.71*(eye_right - eye_left) - mouth_avg = lm[4] - eye_to_mouth = 1.35*(mouth_avg - eye_avg) - - # Choose oriented crop rectangle. - x = eye_to_eye.copy() - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - x *= x_scale - y = np.flipud(x) * [-y_scale, y_scale] - c = eye_avg + eye_to_mouth * em_scale - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - quad_orig = quad.copy() - qsize = np.hypot(*x) * 2 - - img = src_img.convert('RGBA').convert('RGB') - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, Image.Resampling.LANCZOS) - quad /= shrink - qsize /= shrink - - # Crop. 
- border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w-1-x) / pad[2]), 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h-1-y) / pad[3])) - blur = qsize * 0.02 - img += (gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0,1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.uint8(np.clip(np.rint(img), 0, 255)) - if alpha: - mask = 1-np.clip(3.0 * mask, 0.0, 1.0) - mask = np.uint8(np.clip(np.rint(mask*255), 0, 255)) - img = np.concatenate((img, mask), axis=2) - img = Image.fromarray(img, 'RGBA') - else: - img = Image.fromarray(img, 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), Image.QUAD, (quad + 0.5).flatten(), Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), Image.Resampling.LANCZOS) - - return img, quad_orig diff --git a/spaces/OAOA/DifFace/datapipe/prepare/face/make_lfw.py b/spaces/OAOA/DifFace/datapipe/prepare/face/make_lfw.py deleted file mode 100644 index cc02eca46bedbf08a46ccaec45d521be427d5178..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/datapipe/prepare/face/make_lfw.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 -*- -# Power by Zongsheng Yue 2022-07-19 12:32:34 - -import os -import argparse -from pathlib import Path - -parser = argparse.ArgumentParser() -parser.add_argument("--save_dir", type=str, default="./testdata/LFW-Test", - help="Folder to save the LR images") -parser.add_argument("--data_dir", type=str, default="./testdata/lfw", - help="LFW Testing dataset") -parser.add_argument("--txt_file", type=str, default="./testdata/peopleDevTest.txt", - help="LFW Testing data file paths") -args = parser.parse_args() - -with open(args.txt_file, 'r') as ff: - file_dirs = [x.split('\t')[0] for x in ff.readlines()][1:] - -if not Path(args.save_dir).exists(): - Path(args.save_dir).mkdir(parents=True) - -for current_dir in file_dirs: - current_dir = Path(args.data_dir) / current_dir - file_path = sorted([str(x) for x in current_dir.glob('*.jpg')])[0] - commond = f'cp {file_path} {args.save_dir}' - os.system(commond) - -num_images = len([x for x in Path(args.save_dir).glob('*.jpg')]) -print(f'Number of images: {num_images}') - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py deleted file mode 100644 index 
a30254604311a488a1d4959f941051890ed32b2e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -from collections import defaultdict -from typing import List, Dict, Tuple - -import pandas as pd -import numpy as np -import torchaudio -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_df_from_tsv, save_df_to_tsv - - -log = logging.getLogger(__name__) - -SPLITS = ["train", "dev", "test"] - - -def get_top_n( - root: Path, n_speakers: int = 10, min_n_tokens: int = 5 -) -> pd.DataFrame: - df = load_df_from_tsv(root / "validated.tsv") - df["n_tokens"] = [len(s.split()) for s in df["sentence"]] - df = df[df["n_tokens"] >= min_n_tokens] - df["n_frames"] = [ - torchaudio.info((root / "clips" / p).as_posix()).num_frames - for p in tqdm(df["path"]) - ] - df["id"] = [Path(p).stem for p in df["path"]] - total_duration_ms = df.groupby("client_id")["n_frames"].agg(["sum"]) - total_duration_ms = total_duration_ms.sort_values("sum", ascending=False) - - top_n_total_duration_ms = total_duration_ms.head(n_speakers) - top_n_client_ids = set(top_n_total_duration_ms.index.tolist()) - df_top_n = df[df["client_id"].isin(top_n_client_ids)] - return df_top_n - - -def get_splits( - df, train_split_ratio=0.99, speaker_in_all_splits=False, rand_seed=0 -) -> Tuple[Dict[str, str], List[str]]: - np.random.seed(rand_seed) - dev_split_ratio = (1. - train_split_ratio) / 3 - grouped = list(df.groupby("client_id")) - id_to_split = {} - for _, cur_df in tqdm(grouped): - cur_n_examples = len(cur_df) - if speaker_in_all_splits and cur_n_examples < 3: - continue - cur_n_train = int(cur_n_examples * train_split_ratio) - cur_n_dev = int(cur_n_examples * dev_split_ratio) - cur_n_test = cur_n_examples - cur_n_dev - cur_n_train - if speaker_in_all_splits and cur_n_dev * cur_n_test == 0: - cur_n_dev, cur_n_test = 1, 1 - cur_n_train = cur_n_examples - cur_n_dev - cur_n_test - cur_indices = cur_df.index.tolist() - cur_shuffled_indices = np.random.permutation(cur_n_examples) - cur_shuffled_indices = [cur_indices[i] for i in cur_shuffled_indices] - cur_indices_by_split = { - "train": cur_shuffled_indices[:cur_n_train], - "dev": cur_shuffled_indices[cur_n_train: cur_n_train + cur_n_dev], - "test": cur_shuffled_indices[cur_n_train + cur_n_dev:] - } - for split in SPLITS: - for i in cur_indices_by_split[split]: - id_ = df["id"].loc[i] - id_to_split[id_] = split - return id_to_split, sorted(df["client_id"].unique()) - - -def convert_to_wav(root: Path, filenames: List[str], target_sr=16_000): - out_root = root / "wav" - out_root.mkdir(exist_ok=True, parents=True) - print("Converting to WAV...") - for n in tqdm(filenames): - in_path = (root / "clips" / n).as_posix() - waveform, sr = torchaudio.load(in_path) - converted, converted_sr = torchaudio.sox_effects.apply_effects_tensor( - waveform, sr, [["rate", str(target_sr)], ["channels", "1"]] - ) - out_path = (out_root / Path(n).with_suffix(".wav").name).as_posix() - torchaudio.save(out_path, converted, converted_sr, encoding="PCM_S", - bits_per_sample=16) - - -def process(args): - data_root = Path(args.data_root).absolute() / args.lang - - # Generate TSV manifest - print("Generating 
manifest...") - - df_top_n = get_top_n(data_root) - id_to_split, speakers = get_splits(df_top_n) - - if args.convert_to_wav: - convert_to_wav(data_root, df_top_n["path"].tolist()) - - manifest_by_split = {split: defaultdict(list) for split in SPLITS} - for sample in tqdm(df_top_n.to_dict(orient="index").values()): - sample_id = sample["id"] - split = id_to_split[sample_id] - manifest_by_split[split]["id"].append(sample_id) - if args.convert_to_wav: - audio_path = data_root / "wav" / f"{sample_id}.wav" - else: - audio_path = data_root / "clips" / f"{sample_id}.mp3" - manifest_by_split[split]["audio"].append(audio_path.as_posix()) - manifest_by_split[split]["n_frames"].append(sample["n_frames"]) - manifest_by_split[split]["tgt_text"].append(sample["sentence"]) - manifest_by_split[split]["speaker"].append(sample["client_id"]) - manifest_by_split[split]["src_text"].append(sample["sentence"]) - - output_root = Path(args.output_manifest_root).absolute() - output_root.mkdir(parents=True, exist_ok=True) - for split in SPLITS: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[split]), - output_root / f"{split}.audio.tsv" - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument("--output-manifest-root", "-m", required=True, type=str) - parser.add_argument("--lang", "-l", required=True, type=str) - parser.add_argument("--convert-to-wav", action="store_true") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py deleted file mode 100644 index 6177239dc75f6937d036462a5a2379aaee202e7d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py +++ /dev/null @@ -1,707 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Run inference for pre-processed data with a trained model. 
-""" - -import ast -from collections import namedtuple -from dataclasses import dataclass, field -from enum import Enum, auto -import hydra -from hydra.core.config_store import ConfigStore -import logging -import math -import os -from omegaconf import OmegaConf -from typing import Optional -import sys - -import editdistance -import torch - -from hydra.core.hydra_config import HydraConfig - -from fairseq import checkpoint_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import FairseqDataclass, FairseqConfig -from fairseq.logging.meters import StopwatchMeter -from omegaconf import open_dict - -from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoderConfig - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -class DecoderType(Enum): - VITERBI = auto() - KENLM = auto() - FAIRSEQ = auto() - KALDI = auto() - - -@dataclass -class UnsupGenerateConfig(FairseqDataclass): - fairseq: FairseqConfig = FairseqConfig() - lm_weight: float = field( - default=2.0, - metadata={"help": "language model weight"}, - ) - w2l_decoder: DecoderType = field( - default=DecoderType.VITERBI, - metadata={"help": "type of decoder to use"}, - ) - kaldi_decoder_config: Optional[KaldiDecoderConfig] = None - lexicon: Optional[str] = field( - default=None, - metadata={ - "help": "path to lexicon. This is also used to 'phonemize' for unsupvised param tuning" - }, - ) - lm_model: Optional[str] = field( - default=None, - metadata={"help": "path to language model (kenlm or fairseq)"}, - ) - unit_lm: bool = field( - default=False, - metadata={"help": "whether to use unit lm"}, - ) - beam_threshold: float = field( - default=50.0, - metadata={"help": "beam score threshold"}, - ) - beam_size_token: float = field( - default=100.0, - metadata={"help": "max tokens per beam"}, - ) - beam: int = field( - default=5, - metadata={"help": "decoder beam size"}, - ) - nbest: int = field( - default=1, - metadata={"help": "number of results to return"}, - ) - word_score: float = field( - default=1.0, - metadata={"help": "word score to add at end of word"}, - ) - unk_weight: float = field( - default=-math.inf, - metadata={"help": "unknown token weight"}, - ) - sil_weight: float = field( - default=0.0, - metadata={"help": "silence token weight"}, - ) - targets: Optional[str] = field( - default=None, - metadata={"help": "extension of ground truth labels to compute UER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={"help": "where to store results"}, - ) - post_process: Optional[str] = field( - default=None, - metadata={"help": "how to post process results"}, - ) - vocab_usage_power: float = field( - default=2, - metadata={"help": "for unsupervised param tuning"}, - ) - - viterbi_transcript: Optional[str] = field( - default=None, - metadata={"help": "for unsupervised param tuning"}, - ) - min_lm_ppl: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - min_vt_uer: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - - blank_weight: float = field( - default=0, - metadata={"help": "value to add or set for blank emission"}, - ) - blank_mode: str = field( - default="set", - metadata={ - "help": "can be add or set, how to modify blank emission with blank weight" - }, - ) - sil_is_blank: bool = field( - default=False, - metadata={"help": "if true, token is same as blank token"}, - ) - - 
unsupervised_tuning: bool = field( - default=False, - metadata={ - "help": "if true, returns a score based on unsupervised param selection metric instead of UER" - }, - ) - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def get_dataset_itr(cfg, task): - return task.get_batch_iterator( - dataset=task.dataset(cfg.fairseq.dataset.gen_subset), - max_tokens=cfg.fairseq.dataset.max_tokens, - max_sentences=cfg.fairseq.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=cfg.fairseq.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.fairseq.dataset.required_batch_size_multiple, - num_shards=cfg.fairseq.dataset.num_shards, - shard_id=cfg.fairseq.dataset.shard_id, - num_workers=cfg.fairseq.dataset.num_workers, - data_buffer_size=cfg.fairseq.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - cfg: UnsupGenerateConfig, - hypos, - tgt_dict, - target_tokens, - res_files, -): - retval = [] - word_preds = [] - transcriptions = [] - dec_scores = [] - - for i, hypo in enumerate(hypos[: min(len(hypos), cfg.nbest)]): - if torch.is_tensor(hypo["tokens"]): - tokens = hypo["tokens"].int().cpu() - tokens = tokens[tokens >= tgt_dict.nspecial] - hyp_pieces = tgt_dict.string(tokens) - else: - hyp_pieces = " ".join(hypo["tokens"]) - - if "words" in hypo and len(hypo["words"]) > 0: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, cfg.post_process) - - to_write = {} - if res_files is not None: - to_write[res_files["hypo.units"]] = hyp_pieces - to_write[res_files["hypo.words"]] = hyp_words - - tgt_words = "" - if target_tokens is not None: - if isinstance(target_tokens, str): - tgt_pieces = tgt_words = target_tokens - else: - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, cfg.post_process) - - if res_files is not None: - to_write[res_files["ref.units"]] = tgt_pieces - to_write[res_files["ref.words"]] = tgt_words - - if not cfg.fairseq.common_eval.quiet: - logger.info(f"HYPO {i}:" + hyp_words) - if tgt_words: - logger.info("TARGET:" + tgt_words) - - if "am_score" in hypo and "lm_score" in hypo: - logger.info( - f"DECODER AM SCORE: {hypo['am_score']}, DECODER LM SCORE: {hypo['lm_score']}, DECODER SCORE: {hypo['score']}" - ) - elif "score" in hypo: - logger.info(f"DECODER SCORE: {hypo['score']}") - - logger.info("___________________") - - hyp_words_arr = hyp_words.split() - tgt_words_arr = tgt_words.split() - - retval.append( - ( - editdistance.eval(hyp_words_arr, tgt_words_arr), - len(hyp_words_arr), - len(tgt_words_arr), - hyp_pieces, - hyp_words, - ) - ) - word_preds.append(hyp_words_arr) - transcriptions.append(to_write) - dec_scores.append(-hypo.get("score", 0)) # negate cuz kaldi returns NLL - - if len(retval) > 1: - best = None - for r, t in zip(retval, transcriptions): - if best is None or r[0] < best[0][0]: - best = r, t - for dest, tran in best[1].items(): - print(tran, file=dest) - dest.flush() - return best[0] - - assert len(transcriptions) == 1 - for dest, tran in transcriptions[0].items(): - print(tran, file=dest) - - return retval[0] - - -def prepare_result_files(cfg: UnsupGenerateConfig): - def get_res_file(file_prefix): - if cfg.fairseq.dataset.num_shards > 1: - file_prefix = f"{cfg.fairseq.dataset.shard_id}_{file_prefix}" - path = os.path.join( - cfg.results_path, - "{}{}.txt".format( - cfg.fairseq.dataset.gen_subset, - 
file_prefix, - ), - ) - return open(path, "w", buffering=1) - - if not cfg.results_path: - return None - - return { - "hypo.words": get_res_file(""), - "hypo.units": get_res_file("_units"), - "ref.words": get_res_file("_ref"), - "ref.units": get_res_file("_ref_units"), - "hypo.nbest.words": get_res_file("_nbest_words"), - } - - -def optimize_models(cfg: UnsupGenerateConfig, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.eval() - if cfg.fairseq.common.fp16: - model.half() - if use_cuda: - model.cuda() - - -GenResult = namedtuple( - "GenResult", - [ - "count", - "errs_t", - "gen_timer", - "lengths_hyp_unit_t", - "lengths_hyp_t", - "lengths_t", - "lm_score_t", - "num_feats", - "num_sentences", - "num_symbols", - "vt_err_t", - "vt_length_t", - ], -) - - -def generate(cfg: UnsupGenerateConfig, models, saved_cfg, use_cuda): - task = tasks.setup_task(cfg.fairseq.task) - saved_cfg.task.labels = cfg.fairseq.task.labels - task.load_dataset(cfg.fairseq.dataset.gen_subset, task_cfg=saved_cfg.task) - # Set dictionary - tgt_dict = task.target_dictionary - logger.info( - "| {} {} {} examples".format( - cfg.fairseq.task.data, - cfg.fairseq.dataset.gen_subset, - len(task.dataset(cfg.fairseq.dataset.gen_subset)), - ) - ) - # Load dataset (possibly sharded) - itr = get_dataset_itr(cfg, task) - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(cfg: UnsupGenerateConfig): - w2l_decoder = cfg.w2l_decoder - if w2l_decoder == DecoderType.VITERBI: - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KENLM: - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.FAIRSEQ: - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KALDI: - from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoder - - assert cfg.kaldi_decoder_config is not None - - return KaldiDecoder( - cfg.kaldi_decoder_config, - cfg.beam, - ) - else: - raise NotImplementedError( - "only wav2letter decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment but found " - + str(w2l_decoder) - ) - - generator = build_generator(cfg) - - kenlm = None - fairseq_lm = None - if cfg.lm_model is not None: - import kenlm - - kenlm = kenlm.Model(cfg.lm_model) - - num_sentences = 0 - if cfg.results_path is not None and not os.path.exists(cfg.results_path): - os.makedirs(cfg.results_path) - - res_files = prepare_result_files(cfg) - errs_t = 0 - lengths_hyp_t = 0 - lengths_hyp_unit_t = 0 - lengths_t = 0 - count = 0 - num_feats = 0 - all_hyp_pieces = [] - all_hyp_words = [] - - num_symbols = ( - len([s for s in tgt_dict.symbols if not s.startswith("madeup")]) - - tgt_dict.nspecial - ) - targets = None - if cfg.targets is not None: - tgt_path = os.path.join( - cfg.fairseq.task.data, cfg.fairseq.dataset.gen_subset + "." 
+ cfg.targets - ) - if os.path.exists(tgt_path): - with open(tgt_path, "r") as f: - targets = f.read().splitlines() - viterbi_transcript = None - if cfg.viterbi_transcript is not None and len(cfg.viterbi_transcript) > 0: - logger.info(f"loading viterbi transcript from {cfg.viterbi_transcript}") - with open(cfg.viterbi_transcript, "r") as vf: - viterbi_transcript = vf.readlines() - viterbi_transcript = [v.rstrip().split() for v in viterbi_transcript] - - gen_timer.start() - - start = 0 - end = len(itr) - - hypo_futures = None - if cfg.w2l_decoder == DecoderType.KALDI: - logger.info("Extracting features") - hypo_futures = [] - samples = [] - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if "net_input" not in sample or i < start or i >= end: - continue - if "padding_mask" not in sample["net_input"]: - sample["net_input"]["padding_mask"] = None - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - hypo_futures.append(hypos) - samples.append(sample) - itr = list(zip(hypo_futures, samples)) - start = 0 - end = len(itr) - logger.info("Finished extracting features") - - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if i < start or i >= end: - continue - - if hypo_futures is not None: - hypos, sample = sample - hypos = [h.result() for h in hypos] - else: - if "net_input" not in sample: - continue - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - - for i, sample_id in enumerate(sample["id"].tolist()): - if targets is not None: - target_tokens = targets[sample_id] - elif "target" in sample or "target_label" in sample: - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - - target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu() - else: - target_tokens = None - - # Process top predictions - ( - errs, - length_hyp, - length, - hyp_pieces, - hyp_words, - ) = process_predictions( - cfg, - hypos[i], - tgt_dict, - target_tokens, - res_files, - ) - errs_t += errs - lengths_hyp_t += length_hyp - lengths_hyp_unit_t += ( - len(hyp_pieces) if len(hyp_pieces) > 0 else len(hyp_words) - ) - lengths_t += length - count += 1 - all_hyp_pieces.append(hyp_pieces) - all_hyp_words.append(hyp_words) - - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - lm_score_sum = 0 - if kenlm is not None: - - if cfg.unit_lm: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_pieces) - else: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_words) - elif fairseq_lm is not None: - lm_score_sum = sum(fairseq_lm.score([h.split() for h in all_hyp_words])[0]) - - vt_err_t = 0 - vt_length_t = 0 - if viterbi_transcript is not None: - unit_hyps = [] - if cfg.targets is not None and cfg.lexicon is not None: - lex = {} - with open(cfg.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - lex[items[0]] = items[1:] - for h in all_hyp_pieces: - hyp_ws = [] - for w in h.split(): - assert w in lex, w - hyp_ws.extend(lex[w]) - unit_hyps.append(hyp_ws) - - else: - unit_hyps.extend([h.split() for h in all_hyp_words]) - - vt_err_t = sum( - editdistance.eval(vt, h) for vt, h in zip(viterbi_transcript, unit_hyps) - ) - - vt_length_t = sum(len(h) for h in viterbi_transcript) - - if res_files is not None: - for r in res_files.values(): - r.close() - - gen_timer.stop(lengths_hyp_t) - - return GenResult( - count, - errs_t, 
- gen_timer, - lengths_hyp_unit_t, - lengths_hyp_t, - lengths_t, - lm_score_sum, - num_feats, - num_sentences, - num_symbols, - vt_err_t, - vt_length_t, - ) - - -def gen_hypos(generator, models, num_feats, sample, task, use_cuda): - sample = utils.move_to_cuda(sample) if use_cuda else sample - - if "features" in sample["net_input"]: - sample["net_input"]["dense_x_only"] = True - num_feats += ( - sample["net_input"]["features"].shape[0] - * sample["net_input"]["features"].shape[1] - ) - hypos = task.inference_step(generator, models, sample, None) - return hypos, num_feats - - -def main(cfg: UnsupGenerateConfig, model=None): - if ( - cfg.fairseq.dataset.max_tokens is None - and cfg.fairseq.dataset.batch_size is None - ): - cfg.fairseq.dataset.max_tokens = 1024000 - - use_cuda = torch.cuda.is_available() and not cfg.fairseq.common.cpu - - task = tasks.setup_task(cfg.fairseq.task) - - overrides = ast.literal_eval(cfg.fairseq.common_eval.model_overrides) - - if cfg.fairseq.task._name == "unpaired_audio_text": - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - "blank_is_sil": cfg.sil_is_blank, - "no_softmax": True, - "segmentation": { - "type": "NONE", - }, - } - else: - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - } - - if model is None: - # Load ensemble - logger.info("| loading model(s) from {}".format(cfg.fairseq.common_eval.path)) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - cfg.fairseq.common_eval.path.split("\\"), - arg_overrides=overrides, - task=task, - suffix=cfg.fairseq.checkpoint.checkpoint_suffix, - strict=(cfg.fairseq.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.fairseq.checkpoint.checkpoint_shard_count, - ) - optimize_models(cfg, use_cuda, models) - else: - models = [model] - saved_cfg = cfg.fairseq - - with open_dict(saved_cfg.task): - saved_cfg.task.shuffle = False - saved_cfg.task.sort_by_length = False - - gen_result = generate(cfg, models, saved_cfg, use_cuda) - - wer = None - if gen_result.lengths_t > 0: - wer = gen_result.errs_t * 100.0 / gen_result.lengths_t - logger.info(f"WER: {wer}") - - lm_ppl = float("inf") - - if gen_result.lm_score_t != 0 and gen_result.lengths_hyp_t > 0: - hyp_len = gen_result.lengths_hyp_t - lm_ppl = math.pow( - 10, -gen_result.lm_score_t / (hyp_len + gen_result.num_sentences) - ) - logger.info(f"LM PPL: {lm_ppl}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - " sentences/s, {:.2f} tokens/s)".format( - gen_result.num_sentences, - gen_result.gen_timer.n, - gen_result.gen_timer.sum, - gen_result.num_sentences / gen_result.gen_timer.sum, - 1.0 / gen_result.gen_timer.avg, - ) - ) - - vt_diff = None - if gen_result.vt_length_t > 0: - vt_diff = gen_result.vt_err_t / gen_result.vt_length_t - vt_diff = max(cfg.min_vt_uer, vt_diff) - - lm_ppl = max(cfg.min_lm_ppl, lm_ppl) - - if not cfg.unsupervised_tuning == 0: - weighted_score = wer - else: - weighted_score = math.log(lm_ppl) * (vt_diff or 1.0) - - res = ( - f"| Generate {cfg.fairseq.dataset.gen_subset} with beam={cfg.beam}, " - f"lm_weight={cfg.kaldi_decoder_config.acoustic_scale if cfg.kaldi_decoder_config else cfg.lm_weight}, " - f"word_score={cfg.word_score}, sil_weight={cfg.sil_weight}, blank_weight={cfg.blank_weight}, " - f"WER: {wer}, LM_PPL: {lm_ppl}, num feats: {gen_result.num_feats}, " - f"length: {gen_result.lengths_hyp_t}, UER to viterbi: {(vt_diff or 0) * 100}, score: {weighted_score}" - ) - - logger.info(res) - # print(res) - - return 
task, weighted_score - - -@hydra.main( - config_path=os.path.join("../../..", "fairseq", "config"), config_name="config" -) -def hydra_main(cfg): - with open_dict(cfg): - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - cfg.job_logging_cfg = OmegaConf.to_container( - HydraConfig.get().job_logging, resolve=True - ) - - cfg = OmegaConf.create( - OmegaConf.to_container(cfg, resolve=False, enum_to_str=False) - ) - OmegaConf.set_struct(cfg, True) - logger.info(cfg) - - utils.import_user_module(cfg.fairseq.common) - - _, score = main(cfg) - - if cfg.is_ax: - return score, None - return score - - -def cli_main(): - try: - from hydra._internal.utils import get_args - - cfg_name = get_args().config_name or "config" - except: - logger.warning("Failed to get config name from hydra args") - cfg_name = "config" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=UnsupGenerateConfig) - hydra_main() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/speech_recognition/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/speech_recognition/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py deleted file mode 100644 index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import logging -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. -""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". 
- gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - with open(gt_json) as f: - json_info = json.load(f) - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. 
Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh deleted file mode 100644 index 9fd9ba0c239d3e982c17711c9db872de3730decf..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -BIN="python tools/train_net.py" -OUTPUT="instant_test_output" -NUM_GPUS=2 - -CFG_LIST=( "${@:1}" ) -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \ - SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2)) \ - OUTPUT_DIR "$OUTPUT" - rm -rf "$OUTPUT" -done - diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/shader_program.py b/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/shader_program.py deleted file mode 100644 index c1803f280c98033abe0769771a9ad8ecfec942e3..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/shader_program.py +++ /dev/null @@ -1,283 +0,0 @@ -"""OpenGL shader program wrapper. 
-""" -import numpy as np -import os -import re - -import OpenGL -from OpenGL.GL import * -from OpenGL.GL import shaders as gl_shader_utils - - -class ShaderProgramCache(object): - """A cache for shader programs. - """ - - def __init__(self, shader_dir=None): - self._program_cache = {} - self.shader_dir = shader_dir - if self.shader_dir is None: - base_dir, _ = os.path.split(os.path.realpath(__file__)) - self.shader_dir = os.path.join(base_dir, 'shaders') - - def get_program(self, vertex_shader, fragment_shader, - geometry_shader=None, defines=None): - """Get a program via a list of shader files to include in the program. - - Parameters - ---------- - vertex_shader : str - The vertex shader filename. - fragment_shader : str - The fragment shader filename. - geometry_shader : str - The geometry shader filename. - defines : dict - Defines and their values for the shader. - - Returns - ------- - program : :class:`.ShaderProgram` - The program. - """ - shader_names = [] - if defines is None: - defines = {} - shader_filenames = [ - x for x in [vertex_shader, fragment_shader, geometry_shader] - if x is not None - ] - for fn in shader_filenames: - if fn is None: - continue - _, name = os.path.split(fn) - shader_names.append(name) - cid = OpenGL.contextdata.getContext() - key = tuple([cid] + sorted( - [(s,1) for s in shader_names] + [(d, defines[d]) for d in defines] - )) - - if key not in self._program_cache: - shader_filenames = [ - os.path.join(self.shader_dir, fn) for fn in shader_filenames - ] - if len(shader_filenames) == 2: - shader_filenames.append(None) - vs, fs, gs = shader_filenames - self._program_cache[key] = ShaderProgram( - vertex_shader=vs, fragment_shader=fs, - geometry_shader=gs, defines=defines - ) - return self._program_cache[key] - - def clear(self): - for key in self._program_cache: - self._program_cache[key].delete() - self._program_cache = {} - - -class ShaderProgram(object): - """A thin wrapper about OpenGL shader programs that supports easy creation, - binding, and uniform-setting. - - Parameters - ---------- - vertex_shader : str - The vertex shader filename. - fragment_shader : str - The fragment shader filename. - geometry_shader : str - The geometry shader filename. - defines : dict - Defines and their values for the shader. 
- """ - - def __init__(self, vertex_shader, fragment_shader, - geometry_shader=None, defines=None): - - self.vertex_shader = vertex_shader - self.fragment_shader = fragment_shader - self.geometry_shader = geometry_shader - - self.defines = defines - if self.defines is None: - self.defines = {} - - self._program_id = None - self._vao_id = None # PYOPENGL BUG - - # DEBUG - # self._unif_map = {} - - def _add_to_context(self): - if self._program_id is not None: - raise ValueError('Shader program already in context') - shader_ids = [] - - # Load vert shader - shader_ids.append(gl_shader_utils.compileShader( - self._load(self.vertex_shader), GL_VERTEX_SHADER) - ) - # Load frag shader - shader_ids.append(gl_shader_utils.compileShader( - self._load(self.fragment_shader), GL_FRAGMENT_SHADER) - ) - # Load geometry shader - if self.geometry_shader is not None: - shader_ids.append(gl_shader_utils.compileShader( - self._load(self.geometry_shader), GL_GEOMETRY_SHADER) - ) - - # Bind empty VAO PYOPENGL BUG - if self._vao_id is None: - self._vao_id = glGenVertexArrays(1) - glBindVertexArray(self._vao_id) - - # Compile program - self._program_id = gl_shader_utils.compileProgram(*shader_ids) - - # Unbind empty VAO PYOPENGL BUG - glBindVertexArray(0) - - def _in_context(self): - return self._program_id is not None - - def _remove_from_context(self): - if self._program_id is not None: - glDeleteProgram(self._program_id) - glDeleteVertexArrays(1, [self._vao_id]) - self._program_id = None - self._vao_id = None - - def _load(self, shader_filename): - path, _ = os.path.split(shader_filename) - - with open(shader_filename) as f: - text = f.read() - - def ifdef(matchobj): - if matchobj.group(1) in self.defines: - return '#if 1' - else: - return '#if 0' - - def ifndef(matchobj): - if matchobj.group(1) in self.defines: - return '#if 0' - else: - return '#if 1' - - ifdef_regex = re.compile( - '#ifdef\\s+([a-zA-Z_][a-zA-Z_0-9]*)\\s*$', re.MULTILINE - ) - ifndef_regex = re.compile( - '#ifndef\\s+([a-zA-Z_][a-zA-Z_0-9]*)\\s*$', re.MULTILINE - ) - text = re.sub(ifdef_regex, ifdef, text) - text = re.sub(ifndef_regex, ifndef, text) - - for define in self.defines: - value = str(self.defines[define]) - text = text.replace(define, value) - - return text - - def _bind(self): - """Bind this shader program to the current OpenGL context. - """ - if self._program_id is None: - raise ValueError('Cannot bind program that is not in context') - # glBindVertexArray(self._vao_id) - glUseProgram(self._program_id) - - def _unbind(self): - """Unbind this shader program from the current OpenGL context. - """ - glUseProgram(0) - - def delete(self): - """Delete this shader program from the current OpenGL context. - """ - self._remove_from_context() - - def set_uniform(self, name, value, unsigned=False): - """Set a uniform value in the current shader program. - - Parameters - ---------- - name : str - Name of the uniform to set. - value : int, float, or ndarray - Value to set the uniform to. - unsigned : bool - If True, ints will be treated as unsigned values. 
- """ - try: - # DEBUG - # self._unif_map[name] = 1, (1,) - loc = glGetUniformLocation(self._program_id, name) - - if loc == -1: - raise ValueError('Invalid shader variable: {}'.format(name)) - - if isinstance(value, np.ndarray): - # DEBUG - # self._unif_map[name] = value.size, value.shape - if value.ndim == 1: - if (np.issubdtype(value.dtype, np.unsignedinteger) or - unsigned): - dtype = 'u' - value = value.astype(np.uint32) - elif np.issubdtype(value.dtype, np.integer): - dtype = 'i' - value = value.astype(np.int32) - else: - dtype = 'f' - value = value.astype(np.float32) - self._FUNC_MAP[(value.shape[0], dtype)](loc, 1, value) - else: - self._FUNC_MAP[(value.shape[0], value.shape[1])]( - loc, 1, GL_TRUE, value - ) - - # Call correct uniform function - elif isinstance(value, float): - glUniform1f(loc, value) - elif isinstance(value, int): - if unsigned: - glUniform1ui(loc, value) - else: - glUniform1i(loc, value) - elif isinstance(value, bool): - if unsigned: - glUniform1ui(loc, int(value)) - else: - glUniform1i(loc, int(value)) - else: - raise ValueError('Invalid data type') - except Exception: - pass - - _FUNC_MAP = { - (1,'u'): glUniform1uiv, - (2,'u'): glUniform2uiv, - (3,'u'): glUniform3uiv, - (4,'u'): glUniform4uiv, - (1,'i'): glUniform1iv, - (2,'i'): glUniform2iv, - (3,'i'): glUniform3iv, - (4,'i'): glUniform4iv, - (1,'f'): glUniform1fv, - (2,'f'): glUniform2fv, - (3,'f'): glUniform3fv, - (4,'f'): glUniform4fv, - (2,2): glUniformMatrix2fv, - (2,3): glUniformMatrix2x3fv, - (2,4): glUniformMatrix2x4fv, - (3,2): glUniformMatrix3x2fv, - (3,3): glUniformMatrix3fv, - (3,4): glUniformMatrix3x4fv, - (4,2): glUniformMatrix4x2fv, - (4,3): glUniformMatrix4x3fv, - (4,4): glUniformMatrix4fv, - } diff --git a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/transforms.py b/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. 
- - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". - """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if 
self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. - """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/checkpoint.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/checkpoint.py deleted file mode 100644 index 6af3fae43ac4b35532641a81eb13557edfc7dfba..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/checkpoint.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -from annotator.uniformer.mmcv.fileio import FileClient -from ..dist_utils import allreduce_params, master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class CheckpointHook(Hook): - """Save checkpoints periodically. - - Args: - interval (int): The saving period. If ``by_epoch=True``, interval - indicates epochs, otherwise it indicates iterations. - Default: -1, which means "never". - by_epoch (bool): Saving checkpoints by epoch or by iteration. - Default: True. - save_optimizer (bool): Whether to save optimizer state_dict in the - checkpoint. It is usually used for resuming experiments. - Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, ``runner.work_dir`` will be used by default. If - specified, the ``out_dir`` will be the concatenation of ``out_dir`` - and the last level directory of ``runner.work_dir``. - `Changed in version 1.3.16.` - max_keep_ckpts (int, optional): The maximum checkpoints to keep. - In some cases we want only the latest few checkpoints and would - like to delete old ones to save the disk space. - Default: -1, which means unlimited. - save_last (bool, optional): Whether to force the last checkpoint to be - saved regardless of interval. Default: True. - sync_buffer (bool, optional): Whether to synchronize buffers in - different gpus. Default: False. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - - .. 
warning:: - Before v1.3.16, the ``out_dir`` argument indicates the path where the - checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the - root directory and the final path to save checkpoint is the - concatenation of ``out_dir`` and the last level directory of - ``runner.work_dir``. Suppose the value of ``out_dir`` is "/path/of/A" - and the value of ``runner.work_dir`` is "/path/of/B", then the final - path will be "/path/of/A/B". - """ - - def __init__(self, - interval=-1, - by_epoch=True, - save_optimizer=True, - out_dir=None, - max_keep_ckpts=-1, - save_last=True, - sync_buffer=False, - file_client_args=None, - **kwargs): - self.interval = interval - self.by_epoch = by_epoch - self.save_optimizer = save_optimizer - self.out_dir = out_dir - self.max_keep_ckpts = max_keep_ckpts - self.save_last = save_last - self.args = kwargs - self.sync_buffer = sync_buffer - self.file_client_args = file_client_args - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - - runner.logger.info((f'Checkpoints will be saved to {self.out_dir} by ' - f'{self.file_client.name}.')) - - # disable the create_symlink option because some file backends do not - # allow to create a symlink - if 'create_symlink' in self.args: - if self.args[ - 'create_symlink'] and not self.file_client.allow_symlink: - self.args['create_symlink'] = False - warnings.warn( - ('create_symlink is set as True by the user but is changed' - 'to be False because creating symbolic link is not ' - f'allowed in {self.file_client.name}')) - else: - self.args['create_symlink'] = self.file_client.allow_symlink - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` epochs - # 2. 
reach the last epoch of training - if self.every_n_epochs( - runner, self.interval) or (self.save_last - and self.is_last_epoch(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.epoch + 1} epochs') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) - - @master_only - def _save_checkpoint(self, runner): - """Save the current checkpoint and delete unwanted checkpoint.""" - runner.save_checkpoint( - self.out_dir, save_optimizer=self.save_optimizer, **self.args) - if runner.meta is not None: - if self.by_epoch: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1) - else: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1) - runner.meta.setdefault('hook_msgs', dict()) - runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path( - self.out_dir, cur_ckpt_filename) - # remove other checkpoints - if self.max_keep_ckpts > 0: - if self.by_epoch: - name = 'epoch_{}.pth' - current_ckpt = runner.epoch + 1 - else: - name = 'iter_{}.pth' - current_ckpt = runner.iter + 1 - redundant_ckpts = range( - current_ckpt - self.max_keep_ckpts * self.interval, 0, - -self.interval) - filename_tmpl = self.args.get('filename_tmpl', name) - for _step in redundant_ckpts: - ckpt_path = self.file_client.join_path( - self.out_dir, filename_tmpl.format(_step)) - if self.file_client.isfile(ckpt_path): - self.file_client.remove(ckpt_path) - else: - break - - def after_train_iter(self, runner): - if self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` iterations - # 2. reach the last iteration of training - if self.every_n_iters( - runner, self.interval) or (self.save_last - and self.is_last_iter(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.iter + 1} iterations') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rdelim.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rdelim.go deleted file mode 100644 index 80385f850b2779f3b8f4df654d638bde17d36efe..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rdelim.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/Deci-DeciCoder-1b/app.py b/spaces/PeepDaSlan9/Deci-DeciCoder-1b/app.py deleted file mode 100644 index 5bf82cf43f0869c375abc287fd449e4dee2765ec..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Deci-DeciCoder-1b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Deci/DeciCoder-1b").launch() \ No newline at end of file diff --git a/spaces/Proveedy/dreambooth-trainingv15/convertosd.py b/spaces/Proveedy/dreambooth-trainingv15/convertosd.py deleted file mode 100644 index 1211d34edf018b7c402a765c5a7ecdb684cc28e3..0000000000000000000000000000000000000000 --- a/spaces/Proveedy/dreambooth-trainingv15/convertosd.py +++ /dev/null @@ -1,302 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
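
The header comments above summarize what this deleted helper does: it walks the UNet, VAE, and text-encoder state dicts of a Diffusers-format pipeline, remaps their keys to the original Stable Diffusion layout, and saves a single half-precision checkpoint. As a rough sketch of how the `convert()` entry point defined near the end of this file might be driven, note that the module name and both paths below are assumptions for illustration, not part of the original repo:

```python
# Illustrative only: calling the convert() helper defined further down in this
# file. Assumes the file is importable as the module "convertosd"; the paths
# are hypothetical placeholders for a Diffusers pipeline dir and an output ckpt.
import convertosd

diffusers_dir = "dreambooth-output"          # hypothetical Diffusers pipeline folder
ckpt_path = "dreambooth-output/model.ckpt"   # hypothetical Stable Diffusion checkpoint

convertosd.convert(diffusers_dir, ckpt_path)
```
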
- -import argparse -import os.path as osp -import re - -import torch -import gc - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - print(f"Reshaping {k} for SD format") - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# - - -textenc_conversion_lst = [ - # (stable-diffusion, HF Diffusers) - ("resblocks.", "text_model.encoder.layers."), - ("ln_1", "layer_norm1"), - ("ln_2", "layer_norm2"), - (".c_fc.", ".fc1."), - (".c_proj.", ".fc2."), - (".attn", ".self_attn"), - ("ln_final.", "transformer.text_model.final_layer_norm."), - ("token_embedding.weight", "transformer.text_model.embeddings.token_embedding.weight"), - ("positional_embedding", "transformer.text_model.embeddings.position_embedding.weight"), -] -protected = {re.escape(x[1]): x[0] for x in textenc_conversion_lst} -textenc_pattern = re.compile("|".join(protected.keys())) - -# Ordering is from https://github.com/pytorch/pytorch/blob/master/test/cpp/api/modules.cpp -code2idx = {"q": 0, "k": 1, "v": 2} - - -def convert_text_enc_state_dict_v20(text_enc_dict): - new_state_dict = {} - capture_qkv_weight = {} - capture_qkv_bias = {} - for k, v in text_enc_dict.items(): - if ( - k.endswith(".self_attn.q_proj.weight") - or k.endswith(".self_attn.k_proj.weight") - or k.endswith(".self_attn.v_proj.weight") - ): - k_pre = k[: -len(".q_proj.weight")] - k_code = k[-len("q_proj.weight")] - if k_pre not in capture_qkv_weight: - capture_qkv_weight[k_pre] = [None, None, None] - capture_qkv_weight[k_pre][code2idx[k_code]] = v - continue - - if ( - k.endswith(".self_attn.q_proj.bias") - or k.endswith(".self_attn.k_proj.bias") - or k.endswith(".self_attn.v_proj.bias") - ): - k_pre = k[: -len(".q_proj.bias")] - k_code = k[-len("q_proj.bias")] - if k_pre not in capture_qkv_bias: - capture_qkv_bias[k_pre] = [None, None, None] - capture_qkv_bias[k_pre][code2idx[k_code]] = v - continue - - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k) - new_state_dict[relabelled_key] = v - - for k_pre, tensors in capture_qkv_weight.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_weight"] = torch.cat(tensors) - - for k_pre, tensors in capture_qkv_bias.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: 
protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_bias"] = torch.cat(tensors) - - return new_state_dict - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location="cpu") - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location="cpu") - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location="cpu") - - # Easiest way to identify v2.0 model seems to be that the text encoder (OpenCLIP) is deeper - is_v20_model = "text_model.encoder.layers.22.layer_norm2.bias" in text_enc_dict - - if is_v20_model: - # Need to add the tag 'transformer' in advance so we can knock it out from the final layer-norm - text_enc_dict = {"transformer." + k: v for k, v in text_enc_dict.items()} - text_enc_dict = convert_text_enc_state_dict_v20(text_enc_dict) - text_enc_dict = {"cond_stage_model.model." + k: v for k, v in text_enc_dict.items()} - else: - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - state_dict = {k: v.half() for k, v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) - del state_dict, text_enc_dict, vae_state_dict, unet_state_dict - torch.cuda.empty_cache() - gc.collect() - \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/Dockerfile b/spaces/RMXK/RVC_HFF/Dockerfile deleted file mode 100644 index b81f131c79cc585012b28002f4916491e85f3a33..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -# syntax=docker/dockerfile:1 - -FROM python:3.10-bullseye - -EXPOSE 7865 - -WORKDIR /app - -COPY . . 
- -RUN apt update && apt install -y -qq ffmpeg aria2 && apt clean - -RUN pip3 install --no-cache-dir -r requirements.txt - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d assets/pretrained_v2/ -o D40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d assets/pretrained_v2/ -o G40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d assets/pretrained_v2/ -o f0D40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d assets/pretrained_v2/ -o f0G40k.pth - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d assets/uvr5_weights/ -o HP2-人声vocals+非人声instrumentals.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d assets/uvr5_weights/ -o HP5-主旋律人声vocals+其他instrumentals.pth - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d assets/hubert -o hubert_base.pt - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -d assets/hubert -o rmvpe.pt - -VOLUME [ "/app/weights", "/app/opt" ] - -CMD ["python3", "infer-web.py"] \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/demucs/compressed.py b/spaces/RMXK/RVC_HFF/demucs/compressed.py deleted file mode 100644 index eb8fbb75463ba71ca86729b22baebf24598ade57..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/demucs/compressed.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
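
The module deleted below is Demucs' compressed MUSDB loader: `get_musdb_tracks` indexes the dataset, `_build_metadata` precomputes per-track duration, mean, and std, and `StemsSet` serves fixed-length, normalized stem windows. A minimal sketch of wiring those pieces together outside the `get_compressed_datasets` helper follows; the MUSDB path is hypothetical, and it assumes the `musdb` package and local audio are available:

```python
# Illustrative sketch only: roughly what get_compressed_datasets() below does,
# with a hypothetical local MUSDB path and without caching metadata to JSON.
from demucs.compressed import StemsSet, _build_metadata, get_musdb_tracks

if __name__ == "__main__":
    tracks = get_musdb_tracks("/data/musdb18", subsets=["train"], split="train")
    metadata = _build_metadata(tracks, workers=4)  # per-track duration / mean / std

    train_set = StemsSet(
        tracks, metadata,
        duration=10, stride=1,            # 10-second windows with a 1-second hop
        samplerate=44100, channels=2,
        streams=slice(1, None),           # drop the mixture, keep the separated stems
    )
    example = train_set[0]                # normalized stems for the first window
```
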
- -import json -from fractions import Fraction -from concurrent import futures - -import musdb -from torch import distributed - -from .audio import AudioFile - - -def get_musdb_tracks(root, *args, **kwargs): - mus = musdb.DB(root, *args, **kwargs) - return {track.name: track.path for track in mus} - - -class StemsSet: - def __init__(self, tracks, metadata, duration=None, stride=1, - samplerate=44100, channels=2, streams=slice(None)): - - self.metadata = [] - for name, path in tracks.items(): - meta = dict(metadata[name]) - meta["path"] = path - meta["name"] = name - self.metadata.append(meta) - if duration is not None and meta["duration"] < duration: - raise ValueError(f"Track {name} duration is too small {meta['duration']}") - self.metadata.sort(key=lambda x: x["name"]) - self.duration = duration - self.stride = stride - self.channels = channels - self.samplerate = samplerate - self.streams = streams - - def __len__(self): - return sum(self._examples_count(m) for m in self.metadata) - - def _examples_count(self, meta): - if self.duration is None: - return 1 - else: - return int((meta["duration"] - self.duration) // self.stride + 1) - - def track_metadata(self, index): - for meta in self.metadata: - examples = self._examples_count(meta) - if index >= examples: - index -= examples - continue - return meta - - def __getitem__(self, index): - for meta in self.metadata: - examples = self._examples_count(meta) - if index >= examples: - index -= examples - continue - streams = AudioFile(meta["path"]).read(seek_time=index * self.stride, - duration=self.duration, - channels=self.channels, - samplerate=self.samplerate, - streams=self.streams) - return (streams - meta["mean"]) / meta["std"] - - -def _get_track_metadata(path): - # use mono at 44kHz as reference. For any other settings data won't be perfectly - # normalized but it should be good enough. 
- audio = AudioFile(path) - mix = audio.read(streams=0, channels=1, samplerate=44100) - return {"duration": audio.duration, "std": mix.std().item(), "mean": mix.mean().item()} - - -def _build_metadata(tracks, workers=10): - pendings = [] - with futures.ProcessPoolExecutor(workers) as pool: - for name, path in tracks.items(): - pendings.append((name, pool.submit(_get_track_metadata, path))) - return {name: p.result() for name, p in pendings} - - -def _build_musdb_metadata(path, musdb, workers): - tracks = get_musdb_tracks(musdb) - metadata = _build_metadata(tracks, workers) - path.parent.mkdir(exist_ok=True, parents=True) - json.dump(metadata, open(path, "w")) - - -def get_compressed_datasets(args, samples): - metadata_file = args.metadata / "musdb.json" - if not metadata_file.is_file() and args.rank == 0: - _build_musdb_metadata(metadata_file, args.musdb, args.workers) - if args.world_size > 1: - distributed.barrier() - metadata = json.load(open(metadata_file)) - duration = Fraction(samples, args.samplerate) - stride = Fraction(args.data_stride, args.samplerate) - train_set = StemsSet(get_musdb_tracks(args.musdb, subsets=["train"], split="train"), - metadata, - duration=duration, - stride=stride, - streams=slice(1, None), - samplerate=args.samplerate, - channels=args.audio_channels) - valid_set = StemsSet(get_musdb_tracks(args.musdb, subsets=["train"], split="valid"), - metadata, - samplerate=args.samplerate, - channels=args.audio_channels) - return train_set, valid_set diff --git a/spaces/RamziRebai/hf_sum/README.md b/spaces/RamziRebai/hf_sum/README.md deleted file mode 100644 index a276060a429aa53e7f7ce56adb36e4448992ca49..0000000000000000000000000000000000000000 --- a/spaces/RamziRebai/hf_sum/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: HF Summarization -emoji: 📊 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/locations/base.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/locations/base.py deleted file mode 100644 index 3f7de0061f188de568180fe6ab075a21890201df..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/locations/base.py +++ /dev/null @@ -1,81 +0,0 @@ -import functools -import os -import site -import sys -import sysconfig -import typing - -from pip._internal.exceptions import InstallationError -from pip._internal.utils import appdirs -from pip._internal.utils.virtualenv import running_under_virtualenv - -# Application Directories -USER_CACHE_DIR = appdirs.user_cache_dir("pip") - -# FIXME doesn't account for venv linked to global site-packages -site_packages: typing.Optional[str] = sysconfig.get_path("purelib") - - -def get_major_minor_version() -> str: - """ - Return the major-minor version of the current Python as a string, e.g. - "3.7" or "3.10". - """ - return "{}.{}".format(*sys.version_info) - - -def change_root(new_root: str, pathname: str) -> str: - """Return 'pathname' with 'new_root' prepended. - - If 'pathname' is relative, this is equivalent to os.path.join(new_root, pathname). - Otherwise, it requires making 'pathname' relative and then joining the - two, which is tricky on DOS/Windows and Mac OS. - - This is borrowed from Python's standard library's distutils module. 
- """ - if os.name == "posix": - if not os.path.isabs(pathname): - return os.path.join(new_root, pathname) - else: - return os.path.join(new_root, pathname[1:]) - - elif os.name == "nt": - (drive, path) = os.path.splitdrive(pathname) - if path[0] == "\\": - path = path[1:] - return os.path.join(new_root, path) - - else: - raise InstallationError( - f"Unknown platform: {os.name}\n" - "Can not change root path prefix on unknown platform." - ) - - -def get_src_prefix() -> str: - if running_under_virtualenv(): - src_prefix = os.path.join(sys.prefix, "src") - else: - # FIXME: keep src in cwd for now (it is not a temporary folder) - try: - src_prefix = os.path.join(os.getcwd(), "src") - except OSError: - # In case the current working directory has been renamed or deleted - sys.exit("The folder you are executing pip from can no longer be found.") - - # under macOS + virtualenv sys.prefix is not properly resolved - # it is something like /path/to/python/bin/.. - return os.path.abspath(src_prefix) - - -try: - # Use getusersitepackages if this is present, as it ensures that the - # value is initialised properly. - user_site: typing.Optional[str] = site.getusersitepackages() -except AttributeError: - user_site = site.USER_SITE - - -@functools.lru_cache(maxsize=None) -def is_osx_framework() -> bool: - return bool(sysconfig.get_config_var("PYTHONFRAMEWORK")) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py deleted file mode 100644 index a4c24b52a1bf4fd055f4a130e80f4401fe06ea6b..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py +++ /dev/null @@ -1,731 +0,0 @@ -import contextlib -import functools -import logging -from typing import ( - TYPE_CHECKING, - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Mapping, - NamedTuple, - Optional, - Sequence, - Set, - Tuple, - TypeVar, - cast, -) - -from pip._vendor.packaging.requirements import InvalidRequirement -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.resolvelib import ResolutionImpossible - -from pip._internal.cache import CacheEntry, WheelCache -from pip._internal.exceptions import ( - DistributionNotFound, - InstallationError, - MetadataInconsistent, - UnsupportedPythonVersion, - UnsupportedWheel, -) -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution, get_default_environment -from pip._internal.models.link import Link -from pip._internal.models.wheel import Wheel -from pip._internal.operations.prepare import RequirementPreparer -from pip._internal.req.constructors import install_req_from_link_and_ireq -from pip._internal.req.req_install import ( - InstallRequirement, - check_invalid_constraint_type, -) -from pip._internal.resolution.base import InstallRequirementProvider -from pip._internal.utils.compatibility_tags import get_supported -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.packaging import get_requirement -from pip._internal.utils.virtualenv import running_under_virtualenv - -from .base import Candidate, CandidateVersion, Constraint, Requirement -from .candidates import ( - AlreadyInstalledCandidate, - BaseCandidate, - EditableCandidate, - ExtrasCandidate, - LinkCandidate, - 
RequiresPythonCandidate, - as_base_candidate, -) -from .found_candidates import FoundCandidates, IndexCandidateInfo -from .requirements import ( - ExplicitRequirement, - RequiresPythonRequirement, - SpecifierRequirement, - UnsatisfiableRequirement, -) - -if TYPE_CHECKING: - from typing import Protocol - - class ConflictCause(Protocol): - requirement: RequiresPythonRequirement - parent: Candidate - - -logger = logging.getLogger(__name__) - -C = TypeVar("C") -Cache = Dict[Link, C] - - -class CollectedRootRequirements(NamedTuple): - requirements: List[Requirement] - constraints: Dict[str, Constraint] - user_requested: Dict[str, int] - - -class Factory: - def __init__( - self, - finder: PackageFinder, - preparer: RequirementPreparer, - make_install_req: InstallRequirementProvider, - wheel_cache: Optional[WheelCache], - use_user_site: bool, - force_reinstall: bool, - ignore_installed: bool, - ignore_requires_python: bool, - py_version_info: Optional[Tuple[int, ...]] = None, - ) -> None: - self._finder = finder - self.preparer = preparer - self._wheel_cache = wheel_cache - self._python_candidate = RequiresPythonCandidate(py_version_info) - self._make_install_req_from_spec = make_install_req - self._use_user_site = use_user_site - self._force_reinstall = force_reinstall - self._ignore_requires_python = ignore_requires_python - - self._build_failures: Cache[InstallationError] = {} - self._link_candidate_cache: Cache[LinkCandidate] = {} - self._editable_candidate_cache: Cache[EditableCandidate] = {} - self._installed_candidate_cache: Dict[str, AlreadyInstalledCandidate] = {} - self._extras_candidate_cache: Dict[ - Tuple[int, FrozenSet[str]], ExtrasCandidate - ] = {} - - if not ignore_installed: - env = get_default_environment() - self._installed_dists = { - dist.canonical_name: dist - for dist in env.iter_installed_distributions(local_only=False) - } - else: - self._installed_dists = {} - - @property - def force_reinstall(self) -> bool: - return self._force_reinstall - - def _fail_if_link_is_unsupported_wheel(self, link: Link) -> None: - if not link.is_wheel: - return - wheel = Wheel(link.filename) - if wheel.supported(self._finder.target_python.get_tags()): - return - msg = f"{link.filename} is not a supported wheel on this platform." - raise UnsupportedWheel(msg) - - def _make_extras_candidate( - self, base: BaseCandidate, extras: FrozenSet[str] - ) -> ExtrasCandidate: - cache_key = (id(base), extras) - try: - candidate = self._extras_candidate_cache[cache_key] - except KeyError: - candidate = ExtrasCandidate(base, extras) - self._extras_candidate_cache[cache_key] = candidate - return candidate - - def _make_candidate_from_dist( - self, - dist: BaseDistribution, - extras: FrozenSet[str], - template: InstallRequirement, - ) -> Candidate: - try: - base = self._installed_candidate_cache[dist.canonical_name] - except KeyError: - base = AlreadyInstalledCandidate(dist, template, factory=self) - self._installed_candidate_cache[dist.canonical_name] = base - if not extras: - return base - return self._make_extras_candidate(base, extras) - - def _make_candidate_from_link( - self, - link: Link, - extras: FrozenSet[str], - template: InstallRequirement, - name: Optional[NormalizedName], - version: Optional[CandidateVersion], - ) -> Optional[Candidate]: - # TODO: Check already installed candidate, and use it if the link and - # editable flag match. - - if link in self._build_failures: - # We already tried this candidate before, and it does not build. - # Don't bother trying again. 
- return None - - if template.editable: - if link not in self._editable_candidate_cache: - try: - self._editable_candidate_cache[link] = EditableCandidate( - link, - template, - factory=self, - name=name, - version=version, - ) - except MetadataInconsistent as e: - logger.info( - "Discarding [blue underline]%s[/]: [yellow]%s[reset]", - link, - e, - extra={"markup": True}, - ) - self._build_failures[link] = e - return None - - base: BaseCandidate = self._editable_candidate_cache[link] - else: - if link not in self._link_candidate_cache: - try: - self._link_candidate_cache[link] = LinkCandidate( - link, - template, - factory=self, - name=name, - version=version, - ) - except MetadataInconsistent as e: - logger.info( - "Discarding [blue underline]%s[/]: [yellow]%s[reset]", - link, - e, - extra={"markup": True}, - ) - self._build_failures[link] = e - return None - base = self._link_candidate_cache[link] - - if not extras: - return base - return self._make_extras_candidate(base, extras) - - def _iter_found_candidates( - self, - ireqs: Sequence[InstallRequirement], - specifier: SpecifierSet, - hashes: Hashes, - prefers_installed: bool, - incompatible_ids: Set[int], - ) -> Iterable[Candidate]: - if not ireqs: - return () - - # The InstallRequirement implementation requires us to give it a - # "template". Here we just choose the first requirement to represent - # all of them. - # Hopefully the Project model can correct this mismatch in the future. - template = ireqs[0] - assert template.req, "Candidates found on index must be PEP 508" - name = canonicalize_name(template.req.name) - - extras: FrozenSet[str] = frozenset() - for ireq in ireqs: - assert ireq.req, "Candidates found on index must be PEP 508" - specifier &= ireq.req.specifier - hashes &= ireq.hashes(trust_internet=False) - extras |= frozenset(ireq.extras) - - def _get_installed_candidate() -> Optional[Candidate]: - """Get the candidate for the currently-installed version.""" - # If --force-reinstall is set, we want the version from the index - # instead, so we "pretend" there is nothing installed. - if self._force_reinstall: - return None - try: - installed_dist = self._installed_dists[name] - except KeyError: - return None - # Don't use the installed distribution if its version does not fit - # the current dependency graph. - if not specifier.contains(installed_dist.version, prereleases=True): - return None - candidate = self._make_candidate_from_dist( - dist=installed_dist, - extras=extras, - template=template, - ) - # The candidate is a known incompatibility. Don't use it. - if id(candidate) in incompatible_ids: - return None - return candidate - - def iter_index_candidate_infos() -> Iterator[IndexCandidateInfo]: - result = self._finder.find_best_candidate( - project_name=name, - specifier=specifier, - hashes=hashes, - ) - icans = list(result.iter_applicable()) - - # PEP 592: Yanked releases are ignored unless the specifier - # explicitly pins a version (via '==' or '===') that can be - # solely satisfied by a yanked release. - all_yanked = all(ican.link.is_yanked for ican in icans) - - def is_pinned(specifier: SpecifierSet) -> bool: - for sp in specifier: - if sp.operator == "===": - return True - if sp.operator != "==": - continue - if sp.version.endswith(".*"): - continue - return True - return False - - pinned = is_pinned(specifier) - - # PackageFinder returns earlier versions first, so we reverse. 
- for ican in reversed(icans): - if not (all_yanked and pinned) and ican.link.is_yanked: - continue - func = functools.partial( - self._make_candidate_from_link, - link=ican.link, - extras=extras, - template=template, - name=name, - version=ican.version, - ) - yield ican.version, func - - return FoundCandidates( - iter_index_candidate_infos, - _get_installed_candidate(), - prefers_installed, - incompatible_ids, - ) - - def _iter_explicit_candidates_from_base( - self, - base_requirements: Iterable[Requirement], - extras: FrozenSet[str], - ) -> Iterator[Candidate]: - """Produce explicit candidates from the base given an extra-ed package. - - :param base_requirements: Requirements known to the resolver. The - requirements are guaranteed to not have extras. - :param extras: The extras to inject into the explicit requirements' - candidates. - """ - for req in base_requirements: - lookup_cand, _ = req.get_candidate_lookup() - if lookup_cand is None: # Not explicit. - continue - # We've stripped extras from the identifier, and should always - # get a BaseCandidate here, unless there's a bug elsewhere. - base_cand = as_base_candidate(lookup_cand) - assert base_cand is not None, "no extras here" - yield self._make_extras_candidate(base_cand, extras) - - def _iter_candidates_from_constraints( - self, - identifier: str, - constraint: Constraint, - template: InstallRequirement, - ) -> Iterator[Candidate]: - """Produce explicit candidates from constraints. - - This creates "fake" InstallRequirement objects that are basically clones - of what "should" be the template, but with original_link set to link. - """ - for link in constraint.links: - self._fail_if_link_is_unsupported_wheel(link) - candidate = self._make_candidate_from_link( - link, - extras=frozenset(), - template=install_req_from_link_and_ireq(link, template), - name=canonicalize_name(identifier), - version=None, - ) - if candidate: - yield candidate - - def find_candidates( - self, - identifier: str, - requirements: Mapping[str, Iterable[Requirement]], - incompatibilities: Mapping[str, Iterator[Candidate]], - constraint: Constraint, - prefers_installed: bool, - ) -> Iterable[Candidate]: - # Collect basic lookup information from the requirements. - explicit_candidates: Set[Candidate] = set() - ireqs: List[InstallRequirement] = [] - for req in requirements[identifier]: - cand, ireq = req.get_candidate_lookup() - if cand is not None: - explicit_candidates.add(cand) - if ireq is not None: - ireqs.append(ireq) - - # If the current identifier contains extras, add explicit candidates - # from entries from extra-less identifier. - with contextlib.suppress(InvalidRequirement): - parsed_requirement = get_requirement(identifier) - explicit_candidates.update( - self._iter_explicit_candidates_from_base( - requirements.get(parsed_requirement.name, ()), - frozenset(parsed_requirement.extras), - ), - ) - - # Add explicit candidates from constraints. We only do this if there are - # known ireqs, which represent requirements not already explicit. If - # there are no ireqs, we're constraining already-explicit requirements, - # which is handled later when we return the explicit candidates. - if ireqs: - try: - explicit_candidates.update( - self._iter_candidates_from_constraints( - identifier, - constraint, - template=ireqs[0], - ), - ) - except UnsupportedWheel: - # If we're constrained to install a wheel incompatible with the - # target architecture, no candidates will ever be valid. 
- return () - - # Since we cache all the candidates, incompatibility identification - # can be made quicker by comparing only the id() values. - incompat_ids = {id(c) for c in incompatibilities.get(identifier, ())} - - # If none of the requirements want an explicit candidate, we can ask - # the finder for candidates. - if not explicit_candidates: - return self._iter_found_candidates( - ireqs, - constraint.specifier, - constraint.hashes, - prefers_installed, - incompat_ids, - ) - - return ( - c - for c in explicit_candidates - if id(c) not in incompat_ids - and constraint.is_satisfied_by(c) - and all(req.is_satisfied_by(c) for req in requirements[identifier]) - ) - - def _make_requirement_from_install_req( - self, ireq: InstallRequirement, requested_extras: Iterable[str] - ) -> Optional[Requirement]: - if not ireq.match_markers(requested_extras): - logger.info( - "Ignoring %s: markers '%s' don't match your environment", - ireq.name, - ireq.markers, - ) - return None - if not ireq.link: - return SpecifierRequirement(ireq) - self._fail_if_link_is_unsupported_wheel(ireq.link) - cand = self._make_candidate_from_link( - ireq.link, - extras=frozenset(ireq.extras), - template=ireq, - name=canonicalize_name(ireq.name) if ireq.name else None, - version=None, - ) - if cand is None: - # There's no way we can satisfy a URL requirement if the underlying - # candidate fails to build. An unnamed URL must be user-supplied, so - # we fail eagerly. If the URL is named, an unsatisfiable requirement - # can make the resolver do the right thing, either backtrack (and - # maybe find some other requirement that's buildable) or raise a - # ResolutionImpossible eventually. - if not ireq.name: - raise self._build_failures[ireq.link] - return UnsatisfiableRequirement(canonicalize_name(ireq.name)) - return self.make_requirement_from_candidate(cand) - - def collect_root_requirements( - self, root_ireqs: List[InstallRequirement] - ) -> CollectedRootRequirements: - collected = CollectedRootRequirements([], {}, {}) - for i, ireq in enumerate(root_ireqs): - if ireq.constraint: - # Ensure we only accept valid constraints - problem = check_invalid_constraint_type(ireq) - if problem: - raise InstallationError(problem) - if not ireq.match_markers(): - continue - assert ireq.name, "Constraint must be named" - name = canonicalize_name(ireq.name) - if name in collected.constraints: - collected.constraints[name] &= ireq - else: - collected.constraints[name] = Constraint.from_ireq(ireq) - else: - req = self._make_requirement_from_install_req( - ireq, - requested_extras=(), - ) - if req is None: - continue - if ireq.user_supplied and req.name not in collected.user_requested: - collected.user_requested[req.name] = i - collected.requirements.append(req) - return collected - - def make_requirement_from_candidate( - self, candidate: Candidate - ) -> ExplicitRequirement: - return ExplicitRequirement(candidate) - - def make_requirement_from_spec( - self, - specifier: str, - comes_from: Optional[InstallRequirement], - requested_extras: Iterable[str] = (), - ) -> Optional[Requirement]: - ireq = self._make_install_req_from_spec(specifier, comes_from) - return self._make_requirement_from_install_req(ireq, requested_extras) - - def make_requires_python_requirement( - self, - specifier: SpecifierSet, - ) -> Optional[Requirement]: - if self._ignore_requires_python: - return None - # Don't bother creating a dependency for an empty Requires-Python. 
- if not str(specifier): - return None - return RequiresPythonRequirement(specifier, self._python_candidate) - - def get_wheel_cache_entry( - self, link: Link, name: Optional[str] - ) -> Optional[CacheEntry]: - """Look up the link in the wheel cache. - - If ``preparer.require_hashes`` is True, don't use the wheel cache, - because cached wheels, always built locally, have different hashes - than the files downloaded from the index server and thus throw false - hash mismatches. Furthermore, cached wheels at present have - nondeterministic contents due to file modification times. - """ - if self._wheel_cache is None or self.preparer.require_hashes: - return None - return self._wheel_cache.get_cache_entry( - link=link, - package_name=name, - supported_tags=get_supported(), - ) - - def get_dist_to_uninstall(self, candidate: Candidate) -> Optional[BaseDistribution]: - # TODO: Are there more cases this needs to return True? Editable? - dist = self._installed_dists.get(candidate.project_name) - if dist is None: # Not installed, no uninstallation required. - return None - - # We're installing into global site. The current installation must - # be uninstalled, no matter it's in global or user site, because the - # user site installation has precedence over global. - if not self._use_user_site: - return dist - - # We're installing into user site. Remove the user site installation. - if dist.in_usersite: - return dist - - # We're installing into user site, but the installed incompatible - # package is in global site. We can't uninstall that, and would let - # the new user installation to "shadow" it. But shadowing won't work - # in virtual environments, so we error out. - if running_under_virtualenv() and dist.in_site_packages: - message = ( - f"Will not install to the user site because it will lack " - f"sys.path precedence to {dist.raw_name} in {dist.location}" - ) - raise InstallationError(message) - return None - - def _report_requires_python_error( - self, causes: Sequence["ConflictCause"] - ) -> UnsupportedPythonVersion: - assert causes, "Requires-Python error reported with no cause" - - version = self._python_candidate.version - - if len(causes) == 1: - specifier = str(causes[0].requirement.specifier) - message = ( - f"Package {causes[0].parent.name!r} requires a different " - f"Python: {version} not in {specifier!r}" - ) - return UnsupportedPythonVersion(message) - - message = f"Packages require a different Python. 
{version} not in:" - for cause in causes: - package = cause.parent.format_for_error() - specifier = str(cause.requirement.specifier) - message += f"\n{specifier!r} (required by {package})" - return UnsupportedPythonVersion(message) - - def _report_single_requirement_conflict( - self, req: Requirement, parent: Optional[Candidate] - ) -> DistributionNotFound: - if parent is None: - req_disp = str(req) - else: - req_disp = f"{req} (from {parent.name})" - - cands = self._finder.find_all_candidates(req.project_name) - skipped_by_requires_python = self._finder.requires_python_skipped_reasons() - versions = [str(v) for v in sorted({c.version for c in cands})] - - if skipped_by_requires_python: - logger.critical( - "Ignored the following versions that require a different python " - "version: %s", - "; ".join(skipped_by_requires_python) or "none", - ) - logger.critical( - "Could not find a version that satisfies the requirement %s " - "(from versions: %s)", - req_disp, - ", ".join(versions) or "none", - ) - if str(req) == "requirements.txt": - logger.info( - "HINT: You are attempting to install a package literally " - 'named "requirements.txt" (which cannot exist). Consider ' - "using the '-r' flag to install the packages listed in " - "requirements.txt" - ) - - return DistributionNotFound(f"No matching distribution found for {req}") - - def get_installation_error( - self, - e: "ResolutionImpossible[Requirement, Candidate]", - constraints: Dict[str, Constraint], - ) -> InstallationError: - - assert e.causes, "Installation error reported with no cause" - - # If one of the things we can't solve is "we need Python X.Y", - # that is what we report. - requires_python_causes = [ - cause - for cause in e.causes - if isinstance(cause.requirement, RequiresPythonRequirement) - and not cause.requirement.is_satisfied_by(self._python_candidate) - ] - if requires_python_causes: - # The comprehension above makes sure all Requirement instances are - # RequiresPythonRequirement, so let's cast for convenience. - return self._report_requires_python_error( - cast("Sequence[ConflictCause]", requires_python_causes), - ) - - # Otherwise, we have a set of causes which can't all be satisfied - # at once. - - # The simplest case is when we have *one* cause that can't be - # satisfied. We just report that case. - if len(e.causes) == 1: - req, parent = e.causes[0] - if req.name not in constraints: - return self._report_single_requirement_conflict(req, parent) - - # OK, we now have a list of requirements that can't all be - # satisfied at once. 
- - # A couple of formatting helpers - def text_join(parts: List[str]) -> str: - if len(parts) == 1: - return parts[0] - - return ", ".join(parts[:-1]) + " and " + parts[-1] - - def describe_trigger(parent: Candidate) -> str: - ireq = parent.get_install_requirement() - if not ireq or not ireq.comes_from: - return f"{parent.name}=={parent.version}" - if isinstance(ireq.comes_from, InstallRequirement): - return str(ireq.comes_from.name) - return str(ireq.comes_from) - - triggers = set() - for req, parent in e.causes: - if parent is None: - # This is a root requirement, so we can report it directly - trigger = req.format_for_error() - else: - trigger = describe_trigger(parent) - triggers.add(trigger) - - if triggers: - info = text_join(sorted(triggers)) - else: - info = "the requested packages" - - msg = ( - "Cannot install {} because these package versions " - "have conflicting dependencies.".format(info) - ) - logger.critical(msg) - msg = "\nThe conflict is caused by:" - - relevant_constraints = set() - for req, parent in e.causes: - if req.name in constraints: - relevant_constraints.add(req.name) - msg = msg + "\n " - if parent: - msg = msg + f"{parent.name} {parent.version} depends on " - else: - msg = msg + "The user requested " - msg = msg + req.format_for_error() - for key in relevant_constraints: - spec = constraints[key].specifier - msg += f"\n The user requested (constraint) {key}{spec}" - - msg = ( - msg - + "\n\n" - + "To fix this you could try to:\n" - + "1. loosen the range of package versions you've specified\n" - + "2. remove package versions to allow pip attempt to solve " - + "the dependency conflict\n" - ) - - logger.info(msg) - - return DistributionNotFound( - "ResolutionImpossible: for help visit " - "https://pip.pypa.io/en/latest/topics/dependency-resolution/" - "#dealing-with-dependency-conflicts" - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_export_format.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_export_format.py deleted file mode 100644 index b79c13069b9f5a7d7fc1d8c3364d3cd66c80c60f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_export_format.py +++ /dev/null @@ -1,78 +0,0 @@ -CONSOLE_HTML_FORMAT = """\ - - - - - - - - -
    {code}
    -
    - - -""" - -CONSOLE_SVG_FORMAT = """\ - - - - - - - - - {lines} - - - {chrome} - - {backgrounds} - - {matrix} - - - -""" - -_SVG_FONT_FAMILY = "Rich Fira Code" -_SVG_CLASSES_PREFIX = "rich-svg" diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Rayzggz/illi-Bert-VITS2/text/chinese.py b/spaces/Rayzggz/illi-Bert-VITS2/text/chinese.py deleted file mode 100644 index 51acb3ec401d7647278a25537576a0fb1775d827..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,198 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = { - line.split("\t")[0]: line.strip().split("\t")[1] - for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines() -} - -import jieba.posseg as psg - - -rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "$": ".", - "“": "'", - "”": "'", - "‘": "'", - "’": "'", - "(": "'", - ")": "'", - "(": "'", - ")": "'", - "《": "'", - "》": "'", - "【": "'", - "】": "'", - "[": "'", - "]": "'", - "—": "-", - "~": "-", - "~": "-", - "「": "'", - "」": "'", -} - -tone_modifier = ToneSandhi() - - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣", "母") - pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub( - r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text - ) - - return replaced_text - - -def g2p(text): - pattern = r"(?<=[{0}])\s*".format("".join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip() != ""] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) # Sometimes it will crash,you can add a try-catch. 
- phones = ["_"] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3 - ) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - # Replace all English words in the sentence - seg = re.sub("[a-zA-Z]+", "", seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == "eng": - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c + v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = "0" - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c + v_without_tone - assert tone in "12345" - - if c: - # 多音节 - v_rep_map = { - "uei": "ui", - "iou": "iu", - "uen": "un", - } - if v_without_tone in v_rep_map.keys(): - pinyin = c + v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - "ing": "ying", - "i": "yi", - "in": "yin", - "u": "wu", - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - "v": "yu", - "e": "e", - "i": "y", - "u": "w", - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]] + pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(" ") - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - -def text_normalize(text): - numbers = re.findall(r"\d+(?:\.?\d+)?", text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - - -def get_bert_feature(text, word2ph): - from text import chinese_bert - - return chinese_bert.get_bert_feature(text, word2ph) - - -if __name__ == "__main__": - from text.chinese_bert import get_bert_feature - - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/indoor.sh b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/indoor.sh deleted file mode 100644 index 705723bf14a6e6fbe949df64bbc3a68a9159e659..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/indoor.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash -l - -SCRIPTPATH=$(dirname $(readlink -f "$0")) -PROJECT_DIR="${SCRIPTPATH}/../../" - -# conda activate loftr -export PYTHONPATH=$PROJECT_DIR:$PYTHONPATH -cd $PROJECT_DIR - -data_cfg_path="configs/data/scannet_trainval.py" -main_cfg_path="configs/aspan/indoor/aspan_train.py" - -n_nodes=1 -n_gpus_per_node=8 -torch_num_workers=36 -batch_size=3 -pin_memory=true -exp_name="indoor-ds-bs-aspan-bs=$(($n_gpus_per_node * $batch_size))" - -CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' python -u ./train.py \ - ${data_cfg_path} \ - ${main_cfg_path} \ - --exp_name=${exp_name} \ - --gpus=${n_gpus_per_node} --num_nodes=${n_nodes} --accelerator="ddp" \ - --batch_size=${batch_size} --num_workers=${torch_num_workers} --pin_memory=${pin_memory} \ - --check_val_every_n_epoch=1 \ - --log_every_n_steps=100 \ - --flush_logs_every_n_steps=100 \ - --limit_val_batches=1. \ - --num_sanity_val_steps=10 \ - --benchmark=True \ - --max_epochs=30 \ - --parallel_load_data \ - --mode integrated \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/data/data_preprocess.sh b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/data/data_preprocess.sh deleted file mode 100644 index 17dae1fa90b6b3a21fc1fb91b0c63eb6f54ffeba..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/data/data_preprocess.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash -dir_nikon="./NIKON_D700/DNG/" -dir_canon="./Canon_EOS_5D/DNG/" -if [ ! -d "$dir_nikon" ];then -mkdir $dir_nikon -fi -if [ ! 
-d "$dir_canon" ];then -mkdir $dir_canon -fi -wget -P./NIKON_D700/DNG -i NIKON_D700.txt -wget -P./Canon_EOS_5D/DNG -i Canon_EOS_5D.txt -python data_preprocess.py -python data_preprocess.py --camera="Canon_EOS_5D" - diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/__init__.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/__init__.py deleted file mode 100644 index c00abd633fd3df598d74a8c0bb2db0343906be72..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model_zoo import dedode_detector_B, dedode_detector_L, dedode_descriptor_B diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataset.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataset.py deleted file mode 100644 index f26722dddcc15516b1986182a246b0cdb52c347a..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataset.py +++ /dev/null @@ -1,209 +0,0 @@ -import io -from loguru import logger - -import cv2 -import numpy as np -import h5py -import torch -from numpy.linalg import inv - - -MEGADEPTH_CLIENT = SCANNET_CLIENT = None - -# --- DATA IO --- - - -def load_array_from_s3( - path, - client, - cv_type, - use_h5py=False, -): - byte_str = client.Get(path) - try: - if not use_h5py: - raw_array = np.fromstring(byte_str, np.uint8) - data = cv2.imdecode(raw_array, cv_type) - else: - f = io.BytesIO(byte_str) - data = np.array(h5py.File(f, "r")["/depth"]) - except Exception as ex: - print(f"==> Data loading failure: {path}") - raise ex - - assert data is not None - return data - - -def imread_gray(path, augment_fn=None, client=SCANNET_CLIENT): - cv_type = cv2.IMREAD_GRAYSCALE if augment_fn is None else cv2.IMREAD_COLOR - if str(path).startswith("s3://"): - image = load_array_from_s3(str(path), client, cv_type) - else: - image = cv2.imread(str(path), cv_type) - - if augment_fn is not None: - image = cv2.imread(str(path), cv2.IMREAD_COLOR) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - image = augment_fn(image) - image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - return image # (h, w) - - -def get_resized_wh(w, h, resize=None): - if (resize is not None) and (max(h, w) > resize): # resize the longer edge - scale = resize / max(h, w) - w_new, h_new = int(round(w * scale)), int(round(h * scale)) - else: - w_new, h_new = w, h - return w_new, h_new - - -def get_divisible_wh(w, h, df=None): - if df is not None: - w_new, h_new = map(lambda x: int(x // df * df), [w, h]) - else: - w_new, h_new = w, h - return w_new, h_new - - -def pad_bottom_right(inp, pad_size, ret_mask=False): - assert isinstance(pad_size, int) and pad_size >= max( - inp.shape[-2:] - ), f"{pad_size} < {max(inp.shape[-2:])}" - mask = None - if inp.ndim == 2: - padded = np.zeros((pad_size, pad_size), dtype=inp.dtype) - padded[: inp.shape[0], : inp.shape[1]] = inp - if ret_mask: - mask = np.zeros((pad_size, pad_size), dtype=bool) - mask[: inp.shape[0], : inp.shape[1]] = True - elif inp.ndim == 3: - padded = np.zeros((inp.shape[0], pad_size, pad_size), dtype=inp.dtype) - padded[:, : inp.shape[1], : inp.shape[2]] = inp - if ret_mask: - mask = np.zeros((inp.shape[0], pad_size, pad_size), dtype=bool) - mask[:, : inp.shape[1], : inp.shape[2]] = True - else: - raise NotImplementedError() - return padded, mask - - -# --- MEGADEPTH --- - - -def read_megadepth_gray(path, resize=None, df=None, padding=False, augment_fn=None): - """ - Args: 
- resize (int, optional): the longer edge of resized images. None for no resize. - padding (bool): If set to 'True', zero-pad resized images to squared size. - augment_fn (callable, optional): augments images with pre-defined visual effects - Returns: - image (torch.tensor): (1, h, w) - mask (torch.tensor): (h, w) - scale (torch.tensor): [w/w_new, h/h_new] - """ - # read image - image = imread_gray(path, augment_fn, client=MEGADEPTH_CLIENT) - - # resize image - w, h = image.shape[1], image.shape[0] - w_new, h_new = get_resized_wh(w, h, resize) - w_new, h_new = get_divisible_wh(w_new, h_new, df) - - image = cv2.resize(image, (w_new, h_new)) - scale = torch.tensor([w / w_new, h / h_new], dtype=torch.float) - - if padding: # padding - pad_to = resize # max(h_new, w_new) - image, mask = pad_bottom_right(image, pad_to, ret_mask=True) - else: - mask = None - - image = ( - torch.from_numpy(image).float()[None] / 255 - ) # (h, w) -> (1, h, w) and normalized - mask = torch.from_numpy(mask) if mask is not None else None - - return image, mask, scale - - -def read_megadepth_depth(path, pad_to=None): - if str(path).startswith("s3://"): - depth = load_array_from_s3(path, MEGADEPTH_CLIENT, None, use_h5py=True) - else: - depth = np.array(h5py.File(path, "r")["depth"]) - if pad_to is not None: - depth, _ = pad_bottom_right(depth, pad_to, ret_mask=False) - depth = torch.from_numpy(depth).float() # (h, w) - return depth - - -# --- ScanNet --- - - -def read_scannet_gray(path, resize=(640, 480), augment_fn=None): - """ - Args: - resize (tuple): align image to depthmap, in (w, h). - augment_fn (callable, optional): augments images with pre-defined visual effects - Returns: - image (torch.tensor): (1, h, w) - mask (torch.tensor): (h, w) - scale (torch.tensor): [w/w_new, h/h_new] - """ - # read and resize image - image = imread_gray(path, augment_fn) - image = cv2.resize(image, resize) - - # (h, w) -> (1, h, w) and normalized - image = torch.from_numpy(image).float()[None] / 255 - return image - - -# ---- evaluation datasets: HLoc, Aachen, InLoc - - -def read_img_gray(path, resize=None, down_factor=16): - # read and resize image - image = imread_gray(path, None) - w, h = image.shape[1], image.shape[0] - if (resize is not None) and (max(h, w) > resize): - scale = float(resize / max(h, w)) - w_new, h_new = int(round(w * scale)), int(round(h * scale)) - else: - w_new, h_new = w, h - w_new, h_new = get_divisible_wh(w_new, h_new, down_factor) - image = cv2.resize(image, (w_new, h_new)) - - # (h, w) -> (1, h, w) and normalized - image = torch.from_numpy(image).float()[None] / 255 - scale = torch.tensor([w / w_new, h / h_new], dtype=torch.float) - return image, scale - - -def read_scannet_depth(path): - if str(path).startswith("s3://"): - depth = load_array_from_s3(str(path), SCANNET_CLIENT, cv2.IMREAD_UNCHANGED) - else: - depth = cv2.imread(str(path), cv2.IMREAD_UNCHANGED) - depth = depth / 1000 - depth = torch.from_numpy(depth).float() # (h, w) - return depth - - -def read_scannet_pose(path): - """Read ScanNet's Camera2World pose and transform it to World2Camera. 
- - Returns: - pose_w2c (np.ndarray): (4, 4) - """ - cam2world = np.loadtxt(path, delimiter=" ") - world2cam = inv(cam2world) - return world2cam - - -def read_scannet_intrinsic(path): - """Read ScanNet's intrinsic matrix and return the 3x3 matrix.""" - intrinsic = np.loadtxt(path, delimiter=" ") - return intrinsic[:-1, :-1] diff --git a/spaces/Redgon/bingo/src/components/ui/input.tsx b/spaces/Redgon/bingo/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/lovasz_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/lovasz_loss.py deleted file mode 100644 index 6badb67f6d987b59fb07aa97caaaf89896e27a8d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/lovasz_loss.py +++ /dev/null @@ -1,303 +0,0 @@ -"""Modified from https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytor -ch/lovasz_losses.py Lovasz-Softmax and Jaccard hinge loss in PyTorch Maxim -Berman 2018 ESAT-PSI KU Leuven (MIT License)""" - -import annotator.uniformer.mmcv as mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def lovasz_grad(gt_sorted): - """Computes gradient of the Lovasz extension w.r.t sorted errors. - - See Alg. 1 in paper. - """ - p = len(gt_sorted) - gts = gt_sorted.sum() - intersection = gts - gt_sorted.float().cumsum(0) - union = gts + (1 - gt_sorted).float().cumsum(0) - jaccard = 1. - intersection / union - if p > 1: # cover 1-pixel case - jaccard[1:p] = jaccard[1:p] - jaccard[0:-1] - return jaccard - - -def flatten_binary_logits(logits, labels, ignore_index=None): - """Flattens predictions in the batch (binary case) Remove labels equal to - 'ignore_index'.""" - logits = logits.view(-1) - labels = labels.view(-1) - if ignore_index is None: - return logits, labels - valid = (labels != ignore_index) - vlogits = logits[valid] - vlabels = labels[valid] - return vlogits, vlabels - - -def flatten_probs(probs, labels, ignore_index=None): - """Flattens predictions in the batch.""" - if probs.dim() == 3: - # assumes output of a sigmoid layer - B, H, W = probs.size() - probs = probs.view(B, 1, H, W) - B, C, H, W = probs.size() - probs = probs.permute(0, 2, 3, 1).contiguous().view(-1, C) # B*H*W, C=P,C - labels = labels.view(-1) - if ignore_index is None: - return probs, labels - valid = (labels != ignore_index) - vprobs = probs[valid.nonzero().squeeze()] - vlabels = labels[valid] - return vprobs, vlabels - - -def lovasz_hinge_flat(logits, labels): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [P], logits at each prediction - (between -infty and +infty). - labels (torch.Tensor): [P], binary ground truth labels (0 or 1). - - Returns: - torch.Tensor: The calculated loss. - """ - if len(labels) == 0: - # only void pixels, the gradients should be 0 - return logits.sum() * 0. - signs = 2. * labels.float() - 1. - errors = (1. 
- logits * signs) - errors_sorted, perm = torch.sort(errors, dim=0, descending=True) - perm = perm.data - gt_sorted = labels[perm] - grad = lovasz_grad(gt_sorted) - loss = torch.dot(F.relu(errors_sorted), grad) - return loss - - -def lovasz_hinge(logits, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [B, H, W], logits at each pixel - (between -infty and +infty). - labels (torch.Tensor): [B, H, W], binary ground truth masks (0 or 1). - classes (str | list[int], optional): Placeholder, to be consistent with - other loss. Default: None. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): Placeholder, to be consistent - with other loss. Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - if per_image: - loss = [ - lovasz_hinge_flat(*flatten_binary_logits( - logit.unsqueeze(0), label.unsqueeze(0), ignore_index)) - for logit, label in zip(logits, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_hinge_flat( - *flatten_binary_logits(logits, labels, ignore_index)) - return loss - - -def lovasz_softmax_flat(probs, labels, classes='present', class_weight=None): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [P, C], class probabilities at each prediction - (between 0 and 1). - labels (torch.Tensor): [P], ground truth labels (between 0 and C - 1). - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - class_weight (list[float], optional): The weight for each class. - Default: None. - - Returns: - torch.Tensor: The calculated loss. - """ - if probs.numel() == 0: - # only void pixels, the gradients should be 0 - return probs * 0. - C = probs.size(1) - losses = [] - class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes - for c in class_to_sum: - fg = (labels == c).float() # foreground for class c - if (classes == 'present' and fg.sum() == 0): - continue - if C == 1: - if len(classes) > 1: - raise ValueError('Sigmoid output possible only with 1 class') - class_pred = probs[:, 0] - else: - class_pred = probs[:, c] - errors = (fg - class_pred).abs() - errors_sorted, perm = torch.sort(errors, 0, descending=True) - perm = perm.data - fg_sorted = fg[perm] - loss = torch.dot(errors_sorted, lovasz_grad(fg_sorted)) - if class_weight is not None: - loss *= class_weight[c] - losses.append(loss) - return torch.stack(losses).mean() - - -def lovasz_softmax(probs, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [B, C, H, W], class probabilities at each - prediction (between 0 and 1). 
- labels (torch.Tensor): [B, H, W], ground truth labels (between 0 and - C - 1). - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): The weight for each class. - Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - - if per_image: - loss = [ - lovasz_softmax_flat( - *flatten_probs( - prob.unsqueeze(0), label.unsqueeze(0), ignore_index), - classes=classes, - class_weight=class_weight) - for prob, label in zip(probs, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_softmax_flat( - *flatten_probs(probs, labels, ignore_index), - classes=classes, - class_weight=class_weight) - return loss - - -@LOSSES.register_module() -class LovaszLoss(nn.Module): - """LovaszLoss. - - This loss is proposed in `The Lovasz-Softmax loss: A tractable surrogate - for the optimization of the intersection-over-union measure in neural - networks `_. - - Args: - loss_type (str, optional): Binary or multi-class loss. - Default: 'multi_class'. Options are "binary" and "multi_class". - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - """ - - def __init__(self, - loss_type='multi_class', - classes='present', - per_image=False, - reduction='mean', - class_weight=None, - loss_weight=1.0): - super(LovaszLoss, self).__init__() - assert loss_type in ('binary', 'multi_class'), "loss_type should be \ - 'binary' or 'multi_class'." - - if loss_type == 'binary': - self.cls_criterion = lovasz_hinge - else: - self.cls_criterion = lovasz_softmax - assert classes in ('all', 'present') or mmcv.is_list_of(classes, int) - if not per_image: - assert reduction == 'none', "reduction should be 'none' when \ - per_image is False." 
- - self.classes = classes - self.per_image = per_image - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - - # if multi-class loss, transform logits to probs - if self.cls_criterion == lovasz_softmax: - cls_score = F.softmax(cls_score, dim=1) - - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - self.classes, - self.per_image, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls diff --git a/spaces/Rongjiehuang/ProDiff/utils/trainer.py b/spaces/Rongjiehuang/ProDiff/utils/trainer.py deleted file mode 100644 index 6821fee1a4a08174bd3f3916dbc368fe89f1ba5b..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/utils/trainer.py +++ /dev/null @@ -1,518 +0,0 @@ -import random -from torch.cuda.amp import GradScaler, autocast -from utils import move_to_cuda -import subprocess -import numpy as np -import torch.optim -import torch.utils.data -import copy -import logging -import os -import re -import sys -import torch -import torch.distributed as dist -import torch.multiprocessing as mp -import tqdm - -from utils.ckpt_utils import get_last_checkpoint, get_all_ckpts -from utils.ddp_utils import DDP -from utils.hparams import hparams - - -class Trainer: - def __init__( - self, - work_dir, - default_save_path=None, - accumulate_grad_batches=1, - max_updates=160000, - print_nan_grads=False, - val_check_interval=2000, - num_sanity_val_steps=5, - amp=False, - # tb logger - log_save_interval=100, - tb_log_interval=10, - # checkpoint - monitor_key='val_loss', - monitor_mode='min', - num_ckpt_keep=5, - save_best=True, - resume_from_checkpoint=0, - seed=1234, - debug=False, - ): - os.makedirs(work_dir, exist_ok=True) - self.work_dir = work_dir - self.accumulate_grad_batches = accumulate_grad_batches - self.max_updates = max_updates - self.num_sanity_val_steps = num_sanity_val_steps - self.print_nan_grads = print_nan_grads - self.default_save_path = default_save_path - self.resume_from_checkpoint = resume_from_checkpoint if resume_from_checkpoint > 0 else None - self.seed = seed - self.debug = debug - # model and optm - self.task = None - self.optimizers = [] - - # trainer state - self.testing = False - self.global_step = 0 - self.current_epoch = 0 - self.total_batches = 0 - - # configure checkpoint - self.monitor_key = monitor_key - self.num_ckpt_keep = num_ckpt_keep - self.save_best = save_best - self.monitor_op = np.less if monitor_mode == 'min' else np.greater - self.best_val_results = np.Inf if monitor_mode == 'min' else -np.Inf - self.mode = 'min' - - # allow int, string and gpu list - self.all_gpu_ids = [ - int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != ''] - self.num_gpus = len(self.all_gpu_ids) - self.on_gpu = self.num_gpus > 0 - self.root_gpu = 0 - logging.info(f'GPU available: {torch.cuda.is_available()}, GPU used: {self.all_gpu_ids}') - self.use_ddp = self.num_gpus > 1 - self.proc_rank = 0 - # Tensorboard logging - self.log_save_interval = log_save_interval - self.val_check_interval = val_check_interval - 
self.tb_log_interval = tb_log_interval - self.amp = amp - self.amp_scalar = GradScaler() - - def test(self, task_cls): - self.testing = True - self.fit(task_cls) - - def fit(self, task_cls): - if len(self.all_gpu_ids) > 1: - mp.spawn(self.ddp_run, nprocs=self.num_gpus, args=(task_cls, copy.deepcopy(hparams))) - else: - self.task = task_cls() - self.task.trainer = self - self.run_single_process(self.task) - return 1 - - def ddp_run(self, gpu_idx, task_cls, hparams_): - hparams.update(hparams_) - task = task_cls() - self.ddp_init(gpu_idx, task) - self.run_single_process(task) - - def run_single_process(self, task): - """Sanity check a few things before starting actual training. - - :param task: - """ - # build model, optm and load checkpoint - model = task.build_model() - if model is not None: - task.model = model - checkpoint, _ = get_last_checkpoint(self.work_dir, self.resume_from_checkpoint) - if checkpoint is not None: - self.restore_weights(checkpoint) - elif self.on_gpu: - task.cuda(self.root_gpu) - if not self.testing: - self.optimizers = task.configure_optimizers() - self.fisrt_epoch = True - if checkpoint is not None: - self.restore_opt_state(checkpoint) - del checkpoint - # clear cache after restore - if self.on_gpu: - torch.cuda.empty_cache() - - if self.use_ddp: - self.task = self.configure_ddp(self.task) - dist.barrier() - - task_ref = self.get_task_ref() - task_ref.trainer = self - task_ref.testing = self.testing - # link up experiment object - if self.proc_rank == 0: - task_ref.build_tensorboard(save_dir=self.work_dir, name='lightning_logs', version='lastest') - else: - os.makedirs('tmp', exist_ok=True) - task_ref.build_tensorboard(save_dir='tmp', name='tb_tmp', version='lastest') - self.logger = task_ref.logger - try: - if self.testing: - self.run_evaluation(test=True) - else: - self.train() - except KeyboardInterrupt as e: - task_ref.on_keyboard_interrupt() - - #################### - # valid and test - #################### - def run_evaluation(self, test=False): - eval_results = self.evaluate(self.task, test, tqdm_desc='Valid' if not test else 'test') - if eval_results is not None and 'tb_log' in eval_results: - tb_log_output = eval_results['tb_log'] - self.log_metrics_to_tb(tb_log_output) - if self.proc_rank == 0 and not test: - self.save_checkpoint(epoch=self.current_epoch, logs=eval_results) - - def evaluate(self, task, test=False, tqdm_desc='Valid', max_batches=None): - # enable eval mode - task.zero_grad() - task.eval() - torch.set_grad_enabled(False) - - task_ref = self.get_task_ref() - if test: - ret = task_ref.test_start() - if ret == 'EXIT': - return - - outputs = [] - dataloader = task_ref.test_dataloader() if test else task_ref.val_dataloader() - pbar = tqdm.tqdm(dataloader, desc=tqdm_desc, total=max_batches, dynamic_ncols=True, unit='step', - disable=self.root_gpu > 0) - for batch_idx, batch in enumerate(pbar): - if batch is None: # pragma: no cover - continue - # stop short when on fast_dev_run (sets max_batch=1) - if max_batches is not None and batch_idx >= max_batches: - break - - # make dataloader_idx arg in validation_step optional - if self.on_gpu: - batch = move_to_cuda(batch, self.root_gpu) - args = [batch, batch_idx] - if self.use_ddp: - output = task(*args) - else: - if test: - output = task_ref.test_step(*args) - else: - output = task_ref.validation_step(*args) - # track outputs for collation - outputs.append(output) - # give model a chance to do something with the outputs (and method defined) - if test: - eval_results = task_ref.test_end(outputs) - 
else: - eval_results = task_ref.validation_end(outputs) - # enable train mode again - task.train() - torch.set_grad_enabled(True) - return eval_results - - #################### - # train - #################### - def train(self): - task_ref = self.get_task_ref() - task_ref.on_train_start() - if self.num_sanity_val_steps > 0: - # run tiny validation (if validation defined) to make sure program won't crash during val - self.evaluate(self.task, False, 'Sanity Val', max_batches=self.num_sanity_val_steps) - # clear cache before training - if self.on_gpu: - torch.cuda.empty_cache() - dataloader = task_ref.train_dataloader() - epoch = self.current_epoch - # run all epochs - while True: - # set seed for distributed sampler (enables shuffling for each epoch) - if self.use_ddp and hasattr(dataloader.sampler, 'set_epoch'): - dataloader.sampler.set_epoch(epoch) - # update training progress in trainer and model - task_ref.current_epoch = epoch - self.current_epoch = epoch - # total batches includes multiple val checks - self.batch_loss_value = 0 # accumulated grads - # before epoch hook - task_ref.on_epoch_start() - - # run epoch - train_pbar = tqdm.tqdm(dataloader, initial=self.global_step, total=float('inf'), - dynamic_ncols=True, unit='step', disable=self.root_gpu > 0) - for batch_idx, batch in enumerate(train_pbar): - pbar_metrics, tb_metrics = self.run_training_batch(batch_idx, batch) - train_pbar.set_postfix(**pbar_metrics) - should_check_val = (self.global_step % self.val_check_interval == 0 - and not self.fisrt_epoch) - if should_check_val: - self.run_evaluation() - self.fisrt_epoch = False - # when metrics should be logged - if (self.global_step + 1) % self.tb_log_interval == 0: - # logs user requested information to logger - self.log_metrics_to_tb(tb_metrics) - - self.global_step += 1 - task_ref.global_step = self.global_step - if self.global_step > self.max_updates: - print("| Training end..") - break - # epoch end hook - task_ref.on_epoch_end() - epoch += 1 - if self.global_step > self.max_updates: - break - task_ref.on_train_end() - - def run_training_batch(self, batch_idx, batch): - if batch is None: - return {} - all_progress_bar_metrics = [] - all_log_metrics = [] - task_ref = self.get_task_ref() - for opt_idx, optimizer in enumerate(self.optimizers): - if optimizer is None: - continue - # make sure only the gradients of the current optimizer's paramaters are calculated - # in the training step to prevent dangling gradients in multiple-optimizer setup. 
- if len(self.optimizers) > 1: - for param in task_ref.parameters(): - param.requires_grad = False - for group in optimizer.param_groups: - for param in group['params']: - param.requires_grad = True - - # forward pass - with autocast(enabled=self.amp): - if self.on_gpu: - batch = move_to_cuda(copy.copy(batch), self.root_gpu) - args = [batch, batch_idx, opt_idx] - if self.use_ddp: - output = self.task(*args) - else: - output = task_ref.training_step(*args) - loss = output['loss'] - if loss is None: - continue - progress_bar_metrics = output['progress_bar'] - log_metrics = output['tb_log'] - # accumulate loss - loss = loss / self.accumulate_grad_batches - - # backward pass - if loss.requires_grad: - if self.amp: - self.amp_scalar.scale(loss).backward() - else: - loss.backward() - - # track progress bar metrics - all_log_metrics.append(log_metrics) - all_progress_bar_metrics.append(progress_bar_metrics) - - if loss is None: - continue - - # nan grads - if self.print_nan_grads: - has_nan_grad = False - for name, param in task_ref.named_parameters(): - if (param.grad is not None) and torch.isnan(param.grad.float()).any(): - print("| NaN params: ", name, param, param.grad) - has_nan_grad = True - if has_nan_grad: - exit(0) - - # gradient update with accumulated gradients - if (self.global_step + 1) % self.accumulate_grad_batches == 0: - task_ref.on_before_optimization(opt_idx) - if self.amp: - self.amp_scalar.step(optimizer) - self.amp_scalar.update() - else: - optimizer.step() - optimizer.zero_grad() - task_ref.on_after_optimization(self.current_epoch, batch_idx, optimizer, opt_idx) - - # collapse all metrics into one dict - all_progress_bar_metrics = {k: v for d in all_progress_bar_metrics for k, v in d.items()} - all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()} - return all_progress_bar_metrics, all_log_metrics - - #################### - # load and save checkpoint - #################### - def restore_weights(self, checkpoint): - # load model state - task_ref = self.get_task_ref() - - if len([k for k in checkpoint['state_dict'].keys() if '.' 
in k]) > 0: - task_ref.load_state_dict(checkpoint['state_dict']) - else: - for k, v in checkpoint['state_dict'].items(): - getattr(task_ref, k).load_state_dict(v) - - if self.on_gpu: - task_ref.cuda(self.root_gpu) - # load training state (affects trainer only) - self.best_val_results = checkpoint['checkpoint_callback_best'] - self.global_step = checkpoint['global_step'] - self.current_epoch = checkpoint['epoch'] - task_ref.global_step = self.global_step - - # wait for all models to restore weights - if self.use_ddp: - # wait for all processes to catch up - dist.barrier() - - def restore_opt_state(self, checkpoint): - if self.testing: - return - # restore the optimizers - optimizer_states = checkpoint['optimizer_states'] - for optimizer, opt_state in zip(self.optimizers, optimizer_states): - if optimizer is None: - return - try: - optimizer.load_state_dict(opt_state) - # move optimizer to GPU 1 weight at a time - if self.on_gpu: - for state in optimizer.state.values(): - for k, v in state.items(): - if isinstance(v, torch.Tensor): - state[k] = v.cuda(self.root_gpu) - except ValueError: - print("| WARMING: optimizer parameters not match !!!") - try: - if dist.is_initialized() and dist.get_rank() > 0: - return - except Exception as e: - print(e) - return - did_restore = True - return did_restore - - def save_checkpoint(self, epoch, logs=None): - monitor_op = np.less - ckpt_path = f'{self.work_dir}/model_ckpt_steps_{self.global_step}.ckpt' - logging.info(f'Epoch {epoch:05d}@{self.global_step}: saving model to {ckpt_path}') - self._atomic_save(ckpt_path) - for old_ckpt in get_all_ckpts(self.work_dir)[self.num_ckpt_keep:]: - subprocess.check_call(f'rm -rf "{old_ckpt}"', shell=True) - logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}') - current = None - if logs is not None and self.monitor_key in logs: - current = logs[self.monitor_key] - if current is not None and self.save_best: - if monitor_op(current, self.best_val_results): - best_filepath = f'{self.work_dir}/model_ckpt_best.pt' - self.best_val_results = current - logging.info( - f'Epoch {epoch:05d}@{self.global_step}: {self.monitor_key} reached {current:0.5f}. 
' - f'Saving model to {best_filepath}') - self._atomic_save(best_filepath) - - def _atomic_save(self, filepath): - checkpoint = self.dump_checkpoint() - tmp_path = str(filepath) + ".part" - torch.save(checkpoint, tmp_path, _use_new_zipfile_serialization=False) - os.replace(tmp_path, filepath) - - def dump_checkpoint(self): - checkpoint = {'epoch': self.current_epoch, 'global_step': self.global_step, - 'checkpoint_callback_best': self.best_val_results} - # save optimizers - optimizer_states = [] - for i, optimizer in enumerate(self.optimizers): - if optimizer is not None: - optimizer_states.append(optimizer.state_dict()) - - checkpoint['optimizer_states'] = optimizer_states - task_ref = self.get_task_ref() - checkpoint['state_dict'] = { - k: v.state_dict() for k, v in task_ref.named_children() if len(list(v.parameters())) > 0} - return checkpoint - - #################### - # DDP - #################### - def ddp_init(self, gpu_idx, task): - # determine which process we are and world size - self.proc_rank = gpu_idx - task.trainer = self - self.init_ddp_connection(self.proc_rank, self.num_gpus) - - # copy model to each gpu - torch.cuda.set_device(gpu_idx) - # override root GPU - self.root_gpu = gpu_idx - self.task = task - - def configure_ddp(self, task): - task = DDP(task, device_ids=[self.root_gpu], find_unused_parameters=True) - if dist.get_rank() != 0 and not self.debug: - sys.stdout = open(os.devnull, "w") - sys.stderr = open(os.devnull, "w") - random.seed(self.seed) - np.random.seed(self.seed) - return task - - def init_ddp_connection(self, proc_rank, world_size): - root_node = '127.0.0.1' - root_node = self.resolve_root_node_address(root_node) - os.environ['MASTER_ADDR'] = root_node - dist.init_process_group('nccl', rank=proc_rank, world_size=world_size) - - def resolve_root_node_address(self, root_node): - if '[' in root_node: - name = root_node.split('[')[0] - number = root_node.split(',')[0] - if '-' in number: - number = number.split('-')[0] - number = re.sub('[^0-9]', '', number) - root_node = name + number - return root_node - - #################### - # utils - #################### - def get_task_ref(self): - from tasks.base_task import BaseTask - task: BaseTask = self.task.module if isinstance(self.task, DDP) else self.task - return task - - def log_metrics_to_tb(self, metrics, step=None): - """Logs the metric dict passed in. 
- - :param metrics: - """ - # added metrics by Lightning for convenience - metrics['epoch'] = self.current_epoch - - # turn all tensors to scalars - scalar_metrics = self.metrics_to_scalars(metrics) - - step = step if step is not None else self.global_step - # log actual metrics - if self.proc_rank == 0: - self.log_metrics(self.logger, scalar_metrics, step=step) - - @staticmethod - def log_metrics(logger, metrics, step=None): - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - logger.add_scalar(k, v, step) - - def metrics_to_scalars(self, metrics): - new_metrics = {} - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - - if type(v) is dict: - v = self.metrics_to_scalars(v) - - new_metrics[k] = v - - return new_metrics diff --git a/spaces/SaffalPoosh/faceRecognition/README.md b/spaces/SaffalPoosh/faceRecognition/README.md deleted file mode 100644 index 191dda3655f5bbe2946aef74357055d5b91c6478..0000000000000000000000000000000000000000 --- a/spaces/SaffalPoosh/faceRecognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FaceRecognition -emoji: 🦀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sakil/Humanoid_robot/README.md b/spaces/Sakil/Humanoid_robot/README.md deleted file mode 100644 index 0ee6e329d006a1b265119965893e9d15ec07588e..0000000000000000000000000000000000000000 --- a/spaces/Sakil/Humanoid_robot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Humanoid_robot -emoji: 🚀 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/modules/presets.py b/spaces/SamerKharboush/chatGPT-Sam-Turbo/modules/presets.py deleted file mode 100644 index 918c2380bd8a63b00e565fcd4149bd7419c71539..0000000000000000000000000000000000000000 --- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/modules/presets.py +++ /dev/null @@ -1,196 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 -no_input_msg = "请输入对话内容。" # 未输入对话内容 - -timeout_streaming = 10 # 流式对话时的超时时间 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

    川虎ChatGPT 🚀

    """ -description = """\ -
    - -由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发 - -访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本 - -此App使用 `gpt-3.5-turbo` 大语言模型 -
    -""" - -footer = """\ -
    {versions}
    -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - -MODEL_SOFT_TOKEN_LIMIT = { - "gpt-3.5-turbo": { - "streaming": 3500, - "all": 3500 - }, - "gpt-3.5-turbo-0301": { - "streaming": 3500, - "all": 3500 - }, - "gpt-4": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-0314": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-32k": { - "streaming": 31000, - "all": 31000 - }, - "gpt-4-32k-0314": { - "streaming": 31000, - "all": 31000 - } -} - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/SanchezVFX/dis/README.md b/spaces/SanchezVFX/dis/README.md deleted file mode 100644 index 72b7c3f161a801956879acee54685db041448c2d..0000000000000000000000000000000000000000 --- a/spaces/SanchezVFX/dis/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: DIS Background Removal -emoji: 🔥 🌠 🏰 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: doevent/dis-background-removal ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/README.md b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/README.md deleted file mode 100644 index ed47930a3c5eca8f22ca974e94d8f7b1d18a45da..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# A PyTorch implementation of a YOLO v3 Object Detector - -Forked from https://github.com/ayooshkathuria/pytorch-yolo-v3 diff --git a/spaces/Sapphire-356/Video2MC/videopose_PSTMO.py b/spaces/Sapphire-356/Video2MC/videopose_PSTMO.py deleted file mode 100644 index fe3795ac944dd7d2bf1c603e93f32d769175df89..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/videopose_PSTMO.py +++ /dev/null @@ -1,242 +0,0 @@ -import os -import time - -from common.arguments import parse_args -from common.camera import * -from common.generators import * -from common.loss import * -from common.model import * -from common.utils import Timer, evaluate, add_path -from common.inference_3d import * - -from model.block.refine import refine -from model.stmo import Model - -import HPE2keyframes as Hk - -from datetime import datetime -import pytz - - -# from joints_detectors.openpose.main import generate_kpts as open_pose - - 
-os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152 -os.environ["CUDA_VISIBLE_DEVICES"] = "0" - -metadata = {'layout_name': 'coco', 'num_joints': 17, 'keypoints_symmetry': [[1, 3, 5, 7, 9, 11, 13, 15], [2, 4, 6, 8, 10, 12, 14, 16]]} - -add_path() - - -# record time -def ckpt_time(ckpt=None): - if not ckpt: - return time.time() - else: - return time.time() - float(ckpt), time.time() - - -time0 = ckpt_time() - - -def get_detector_2d(detector_name): - def get_alpha_pose(): - from joints_detectors.Alphapose.gene_npz import generate_kpts as alpha_pose - return alpha_pose - - detector_map = { - 'alpha_pose': get_alpha_pose, - # 'hr_pose': get_hr_pose, - # 'open_pose': open_pose - } - - assert detector_name in detector_map, f'2D detector: {detector_name} not implemented yet!' - - return detector_map[detector_name]() - - -class Skeleton: - def parents(self): - return np.array([-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]) - - def joints_right(self): - return [1, 2, 3, 14, 15, 16] - - -def main(args, progress): - detector_2d = get_detector_2d(args.detector_2d) - - assert detector_2d, 'detector_2d should be in ({alpha, hr, open}_pose)' - - # 2D kpts loads or generate - #args.input_npz = './outputs/alpha_pose_skiing_cut/skiing_cut.npz' - if not args.input_npz: - video_name = args.viz_video - keypoints = detector_2d(video_name, progress) - else: - npz = np.load(args.input_npz) - keypoints = npz['kpts'] # (N, 17, 2) - - keypoints_symmetry = metadata['keypoints_symmetry'] - kps_left, kps_right = list(keypoints_symmetry[0]), list(keypoints_symmetry[1]) - joints_left, joints_right = list([4, 5, 6, 11, 12, 13]), list([1, 2, 3, 14, 15, 16]) - - # normlization keypoints Suppose using the camera parameter - keypoints = normalize_screen_coordinates(keypoints[..., :2], w=1000, h=1002) - - # model_pos = TemporalModel(17, 2, 17, filter_widths=[3, 3, 3, 3, 3], causal=args.causal, dropout=args.dropout, channels=args.channels, - # dense=args.dense) - - model = {} - model['trans'] = Model(args) - - - # if torch.cuda.is_available(): - # model_pos = model_pos - - ckpt, time1 = ckpt_time(time0) - print('-------------- load data spends {:.2f} seconds'.format(ckpt)) - - # load trained model - # chk_filename = os.path.join(args.checkpoint, args.resume if args.resume else args.evaluate) - # print('Loading checkpoint', chk_filename) - # checkpoint = torch.load(chk_filename, map_location=lambda storage, loc: storage) # 把loc映射到storage - # model_pos.load_state_dict(checkpoint['model_pos']) - - model_dict = model['trans'].state_dict() - - no_refine_path = "checkpoint/PSTMOS_no_refine_48_5137_in_the_wild.pth" - pre_dict = torch.load(no_refine_path, map_location=torch.device('cpu')) - for key, value in pre_dict.items(): - name = key[7:] - model_dict[name] = pre_dict[key] - model['trans'].load_state_dict(model_dict) - - - ckpt, time2 = ckpt_time(time1) - print('-------------- load 3D model spends {:.2f} seconds'.format(ckpt)) - - # Receptive field: 243 frames for args.arc [3, 3, 3, 3, 3] - receptive_field = args.frames - pad = (receptive_field - 1) // 2 # Padding on each side - causal_shift = 0 - - print('Rendering...') - input_keypoints = keypoints.copy() - print(input_keypoints.shape) - # gen = UnchunkedGenerator(None, None, [input_keypoints], - # pad=pad, causal_shift=causal_shift, augment=args.test_time_augmentation, - # kps_left=kps_left, kps_right=kps_right, joints_left=joints_left, joints_right=joints_right) - # test_data = Fusion(opt=args, train=False, dataset=dataset, root_path=root_path, 
MAE=opt.MAE) - # test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=1, - # shuffle=False, num_workers=0, pin_memory=True) - #prediction = evaluate(gen, model_pos, return_predictions=True) - - gen = Evaluate_Generator(128, None, None, [input_keypoints], args.stride, - pad=pad, causal_shift=causal_shift, augment=args.test_time_augmentation, shuffle=False, - kps_left=kps_left, kps_right=kps_right, joints_left=joints_left, joints_right=joints_right) - - prediction = val(args, gen, model, progress) - - # save 3D joint points - # np.save(f'outputs/test_3d_{args.video_name}_output.npy', prediction, allow_pickle=True) - - rot = np.array([0.14070565, -0.15007018, -0.7552408, 0.62232804], dtype=np.float32) - prediction = camera_to_world(prediction, R=rot, t=0) - - # We don't have the trajectory, but at least we can rebase the height - prediction[:, :, 2] -= np.min(prediction[:, :, 2]) - - output_dir_dict = {} - npy_filename = f'output_3Dpose_npy/{args.video_name}.npy' - output_dir_dict['npy'] = npy_filename - np.save(npy_filename, prediction, allow_pickle=True) - - anim_output = {'Skeleton': prediction} - input_keypoints = image_coordinates(input_keypoints[..., :2], w=1000, h=1002) - - ckpt, time3 = ckpt_time(time2) - print('-------------- generate reconstruction 3D data spends {:.2f} seconds'.format(ckpt)) - - if not args.viz_output: - args.viz_output = 'outputs/alpha_result.mp4' - - from common.visualization import render_animation - render_animation(input_keypoints, anim_output, - Skeleton(), 25, args.viz_bitrate, np.array(70., dtype=np.float32), args.viz_output, progress, - limit=args.viz_limit, downsample=args.viz_downsample, size=args.viz_size, - input_video_path=args.viz_video, viewport=(1000, 1002), - input_video_skip=args.viz_skip) - - ckpt, time4 = ckpt_time(time3) - print('total spend {:2f} second'.format(ckpt)) - - return output_dir_dict - - -def inference_video(video_path, detector_2d, progress): - """ - Do image -> 2d points -> 3d points to video. - :param detector_2d: used 2d joints detector. 
Can be {alpha_pose, hr_pose} - :param video_path: relative to outputs - :return: None - """ - args = parse_args() - - args.detector_2d = detector_2d - dir_name = os.path.dirname(video_path) - basename = os.path.basename(video_path) - args.video_name = basename[:basename.rfind('.')] - args.viz_video = video_path - args.viz_output = f'output_videos/{args.video_name}.mp4' - args.evaluate = 'pretrained_h36m_detectron_coco.bin' - - with Timer(video_path): - output_dir_dict = main(args, progress) - - output_dir_dict["output_videos"] = args.viz_output - output_dir_dict["video_name"] = args.video_name - return output_dir_dict - - -def gr_video2mc(video_path, progress): - - print("\n>>>>> One video uploaded <<<<<\n") - china_tz = pytz.timezone('Asia/Shanghai') - current_time = datetime.now(china_tz) - formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S') - print(f"Start Time: {formatted_time}\n") - - if not os.path.exists('output_3Dpose_npy'): - os.makedirs('output_3Dpose_npy') - if not os.path.exists('output_alphapose'): - os.makedirs('output_alphapose') - if not os.path.exists('output_miframes'): - os.makedirs('output_miframes') - if not os.path.exists('output_videos'): - os.makedirs('output_videos') - - FPS_mine_imator = 30 - output_dir_dict = inference_video(video_path, 'alpha_pose', progress) - Hk.hpe2keyframes(output_dir_dict['npy'], FPS_mine_imator, f"output_miframes/{output_dir_dict['video_name']}.miframes") - path1 = os.path.abspath(f"output_miframes/{output_dir_dict['video_name']}.miframes") - path2 = os.path.abspath(f"output_videos/{output_dir_dict['video_name']}.mp4") - - print("\n----- One video processed -----\n") - china_tz = pytz.timezone('Asia/Shanghai') - current_time = datetime.now(china_tz) - formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S') - print(f"Finished Time: {formatted_time}\n") - - - return path1, path2 - - -if __name__ == '__main__': - - files = os.listdir('./input_videos') - FPS_mine_imator = 30 - for file in files: - output_dir_dict = inference_video(os.path.join('input_videos', file), 'alpha_pose') - Hk.hpe2keyframes(output_dir_dict['npy'], FPS_mine_imator, f"output_miframes/{output_dir_dict['video_name']}.miframes") diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h deleted file mode 100644 index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h +++ /dev/null @@ -1,33 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/SoUmNerd/RemoteMojo/regex.py b/spaces/SoUmNerd/RemoteMojo/regex.py deleted file mode 100644 index fa1e4e2c85750f82c545f497c7f271d90d473f18..0000000000000000000000000000000000000000 --- a/spaces/SoUmNerd/RemoteMojo/regex.py +++ /dev/null @@ -1,19 +0,0 @@ -import re - -def find_imports(code): - pattern = r'Python\.import_module\("([^"]*)"\)' - matches = re.findall(pattern, code) - return matches - -if __name__ == "__main__": - code = ''' - from python import Python - Python.import_module("numpy") - Python.import_module("pandas") - Python.import_module("matplotlib") - - def main(): - print("hello world") - ''' - - print(find_imports(code)) # Output: ['numpy', 'pandas', 'matplotlib'] \ No newline at end of file diff --git a/spaces/Solis/Solis/llm_src/utils/solis/solis_solver.py b/spaces/Solis/Solis/llm_src/utils/solis/solis_solver.py deleted file mode 100644 index 03e54a6073895db9104140b326c37b8c41cfe724..0000000000000000000000000000000000000000 --- a/spaces/Solis/Solis/llm_src/utils/solis/solis_solver.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -from llm_src.utils.solis.helper import generate_func_set, generate_func_set_all, is_func - -def try_search(args, orig_nums, fp_results, func_set=None): - - if func_set is None: - if "multiarith" in args.dataset: - op_set, x_set, func_set = generate_func_set( - num_x=3, - num_op=2, - show_funx=False, - op_sets=['+', '-', '*', '/'], - rep=True, - ) - elif "addsub" in args.dataset: - op_set, x_set, func_set_3 = generate_func_set_all( - num_x=3, - num_op=2, - show_funx=False, - op_sets=['+', '-'], - rep=True, - ) - op_set, x_set, func_set_2 = generate_func_set( - num_x=2, - num_op=1, - show_funx=False, - op_sets=['+', '-'], - rep=True, - ) - if len(orig_nums) == 3: - func_set = func_set_3 - elif len(orig_nums) == 2: - func_set = func_set_2 - else: - return None, None - - errors_cnt = [0] * len(func_set) - losses_cnt = [0] * len(func_set) - for k, expr in enumerate(func_set): - - for fp_result in fp_results: - - pred = fp_result["fp_z"] - repl_numbers = fp_result["fp_nums"] - - flag_, loss_ = is_func(expr, [str(pred)], [repl_numbers], return_loss=True) - errors_cnt[k] += (flag_ == False) - losses_cnt[k] += abs(loss_) - - tmp_min = 10000000000000 - tmp_error = 10000000000000 - thresh_k = int(len(errors_cnt)) - expr_filter = "" - for k, cnt in enumerate(errors_cnt): - if cnt <= thresh_k and errors_cnt[k] < tmp_min: - expr_filter = func_set[k] - tmp_min = errors_cnt[k] - - tmp_min = losses_cnt[0] - if expr_filter == "": - for k, loss in 
enumerate(losses_cnt): - if loss < tmp_min: - expr_filter = func_set[k] - tmp_min = loss - - # try calibration - cali_pred = "" - if expr_filter != "": - var_dict = {} - for i_, var in enumerate(orig_nums): - var_dict.update({ - f"x{i_}": var, - }) - try: - cali_pred = eval(expr_filter, var_dict) - if "multiarith" in args.dataset: - cali_pred = round(cali_pred, 5) - if int(cali_pred * 10 // 10) == cali_pred: - cali_pred = int(cali_pred) - else: - cali_pred = math.ceil(cali_pred) - elif "addsub" in args.dataset: - bit_max = 0 - for number in orig_nums: - bit = str(number).split('.') - if len(bit) == 1: - bit = 0 - else: - bit = len(bit[-1]) - bit_max = max(bit, bit_max) - cali_pred = round(cali_pred, bit_max) - except: - return None, None - - return expr_filter, cali_pred - else: - return None, None \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/msd.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/msd.py deleted file mode 100644 index c4e67e29b46ab22f6ffeec85ffc64d8b99800b1b..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/msd.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch -import torch.nn as nn - -from ...modules import NormConv1d -from .base import MultiDiscriminator, MultiDiscriminatorOutputType - - -class ScaleDiscriminator(nn.Module): - """Waveform sub-discriminator. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (Sequence[int]): Kernel sizes for first and last convolutions. - filters (int): Number of initial filters for convolutions. - max_filters (int): Maximum number of filters. - downsample_scales (Sequence[int]): Scale for downsampling implemented as strided convolutions. - inner_kernel_sizes (Sequence[int] or None): Kernel sizes for inner convolutions. - groups (Sequence[int] or None): Groups for inner convolutions. - strides (Sequence[int] or None): Strides for inner convolutions. - paddings (Sequence[int] or None): Paddings for inner convolutions. - norm (str): Normalization method. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - pad (str): Padding for initial convolution. - pad_params (dict): Parameters to provide to the padding module. 
- """ - def __init__(self, in_channels=1, out_channels=1, kernel_sizes: tp.Sequence[int] = [5, 3], - filters: int = 16, max_filters: int = 1024, downsample_scales: tp.Sequence[int] = [4, 4, 4, 4], - inner_kernel_sizes: tp.Optional[tp.Sequence[int]] = None, groups: tp.Optional[tp.Sequence[int]] = None, - strides: tp.Optional[tp.Sequence[int]] = None, paddings: tp.Optional[tp.Sequence[int]] = None, - norm: str = 'weight_norm', activation: str = 'LeakyReLU', - activation_params: dict = {'negative_slope': 0.2}, pad: str = 'ReflectionPad1d', - pad_params: dict = {}): - super().__init__() - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - assert (inner_kernel_sizes is None or len(inner_kernel_sizes) == len(downsample_scales)) - assert (groups is None or len(groups) == len(downsample_scales)) - assert (strides is None or len(strides) == len(downsample_scales)) - assert (paddings is None or len(paddings) == len(downsample_scales)) - self.activation = getattr(torch.nn, activation)(**activation_params) - self.convs = nn.ModuleList() - self.convs.append( - nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - NormConv1d(in_channels, filters, kernel_size=np.prod(kernel_sizes), stride=1, norm=norm) - ) - ) - - in_chs = filters - for i, downsample_scale in enumerate(downsample_scales): - out_chs = min(in_chs * downsample_scale, max_filters) - default_kernel_size = downsample_scale * 10 + 1 - default_stride = downsample_scale - default_padding = (default_kernel_size - 1) // 2 - default_groups = in_chs // 4 - self.convs.append( - NormConv1d(in_chs, out_chs, - kernel_size=inner_kernel_sizes[i] if inner_kernel_sizes else default_kernel_size, - stride=strides[i] if strides else default_stride, - groups=groups[i] if groups else default_groups, - padding=paddings[i] if paddings else default_padding, - norm=norm)) - in_chs = out_chs - - out_chs = min(in_chs * 2, max_filters) - self.convs.append(NormConv1d(in_chs, out_chs, kernel_size=kernel_sizes[0], stride=1, - padding=(kernel_sizes[0] - 1) // 2, norm=norm)) - self.conv_post = NormConv1d(out_chs, out_channels, kernel_size=kernel_sizes[1], stride=1, - padding=(kernel_sizes[1] - 1) // 2, norm=norm) - - def forward(self, x: torch.Tensor): - fmap = [] - for layer in self.convs: - x = layer(x) - x = self.activation(x) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - # x = torch.flatten(x, 1, -1) - return x, fmap - - -class MultiScaleDiscriminator(MultiDiscriminator): - """Multi-Scale (MSD) Discriminator, - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_factor (int): Downsampling factor between the different scales. - scale_norms (Sequence[str]): Normalization for each sub-discriminator. - **kwargs: Additional args for ScaleDiscriminator. 
- """ - def __init__(self, in_channels: int = 1, out_channels: int = 1, downsample_factor: int = 2, - scale_norms: tp.Sequence[str] = ['weight_norm', 'weight_norm', 'weight_norm'], **kwargs): - super().__init__() - self.discriminators = nn.ModuleList([ - ScaleDiscriminator(in_channels, out_channels, norm=norm, **kwargs) for norm in scale_norms - ]) - self.downsample = nn.AvgPool1d(downsample_factor * 2, downsample_factor, padding=downsample_factor) - - @property - def num_discriminators(self): - return len(self.discriminators) - - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - logits = [] - fmaps = [] - for i, disc in enumerate(self.discriminators): - if i != 0: - self.downsample(x) - logit, fmap = disc(x) - logits.append(logit) - fmaps.append(fmap) - return logits, fmaps diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_conv.py b/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) 
- - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert 
list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/shell32.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/shell32.py deleted file mode 100644 index 5c945db74cbe8eb763492b3edccaeda27278e3d1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/shell32.py +++ /dev/null @@ -1,382 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -# Copyright (c) 2009-2014, Mario Vilas -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice,this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the copyright holder nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
- -""" -Wrapper for shell32.dll in ctypes. -""" - -# TODO -# * Add a class wrapper to SHELLEXECUTEINFO -# * More logic into ShellExecuteEx - -__revision__ = "$Id$" - -from winappdbg.win32.defines import * -from winappdbg.win32.kernel32 import LocalFree - -#============================================================================== -# This is used later on to calculate the list of exported symbols. -_all = None -_all = set(vars().keys()) -#============================================================================== - -#--- Constants ---------------------------------------------------------------- - -SEE_MASK_DEFAULT = 0x00000000 -SEE_MASK_CLASSNAME = 0x00000001 -SEE_MASK_CLASSKEY = 0x00000003 -SEE_MASK_IDLIST = 0x00000004 -SEE_MASK_INVOKEIDLIST = 0x0000000C -SEE_MASK_ICON = 0x00000010 -SEE_MASK_HOTKEY = 0x00000020 -SEE_MASK_NOCLOSEPROCESS = 0x00000040 -SEE_MASK_CONNECTNETDRV = 0x00000080 -SEE_MASK_NOASYNC = 0x00000100 -SEE_MASK_DOENVSUBST = 0x00000200 -SEE_MASK_FLAG_NO_UI = 0x00000400 -SEE_MASK_UNICODE = 0x00004000 -SEE_MASK_NO_CONSOLE = 0x00008000 -SEE_MASK_ASYNCOK = 0x00100000 -SEE_MASK_HMONITOR = 0x00200000 -SEE_MASK_NOZONECHECKS = 0x00800000 -SEE_MASK_WAITFORINPUTIDLE = 0x02000000 -SEE_MASK_FLAG_LOG_USAGE = 0x04000000 - -SE_ERR_FNF = 2 -SE_ERR_PNF = 3 -SE_ERR_ACCESSDENIED = 5 -SE_ERR_OOM = 8 -SE_ERR_DLLNOTFOUND = 32 -SE_ERR_SHARE = 26 -SE_ERR_ASSOCINCOMPLETE = 27 -SE_ERR_DDETIMEOUT = 28 -SE_ERR_DDEFAIL = 29 -SE_ERR_DDEBUSY = 30 -SE_ERR_NOASSOC = 31 - -SHGFP_TYPE_CURRENT = 0 -SHGFP_TYPE_DEFAULT = 1 - -CSIDL_DESKTOP = 0x0000 -CSIDL_INTERNET = 0x0001 -CSIDL_PROGRAMS = 0x0002 -CSIDL_CONTROLS = 0x0003 -CSIDL_PRINTERS = 0x0004 -CSIDL_PERSONAL = 0x0005 -CSIDL_FAVORITES = 0x0006 -CSIDL_STARTUP = 0x0007 -CSIDL_RECENT = 0x0008 -CSIDL_SENDTO = 0x0009 -CSIDL_BITBUCKET = 0x000a -CSIDL_STARTMENU = 0x000b -CSIDL_MYDOCUMENTS = CSIDL_PERSONAL -CSIDL_MYMUSIC = 0x000d -CSIDL_MYVIDEO = 0x000e -CSIDL_DESKTOPDIRECTORY = 0x0010 -CSIDL_DRIVES = 0x0011 -CSIDL_NETWORK = 0x0012 -CSIDL_NETHOOD = 0x0013 -CSIDL_FONTS = 0x0014 -CSIDL_TEMPLATES = 0x0015 -CSIDL_COMMON_STARTMENU = 0x0016 -CSIDL_COMMON_PROGRAMS = 0x0017 -CSIDL_COMMON_STARTUP = 0x0018 -CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019 -CSIDL_APPDATA = 0x001a -CSIDL_PRINTHOOD = 0x001b -CSIDL_LOCAL_APPDATA = 0x001c -CSIDL_ALTSTARTUP = 0x001d -CSIDL_COMMON_ALTSTARTUP = 0x001e -CSIDL_COMMON_FAVORITES = 0x001f -CSIDL_INTERNET_CACHE = 0x0020 -CSIDL_COOKIES = 0x0021 -CSIDL_HISTORY = 0x0022 -CSIDL_COMMON_APPDATA = 0x0023 -CSIDL_WINDOWS = 0x0024 -CSIDL_SYSTEM = 0x0025 -CSIDL_PROGRAM_FILES = 0x0026 -CSIDL_MYPICTURES = 0x0027 -CSIDL_PROFILE = 0x0028 -CSIDL_SYSTEMX86 = 0x0029 -CSIDL_PROGRAM_FILESX86 = 0x002a -CSIDL_PROGRAM_FILES_COMMON = 0x002b -CSIDL_PROGRAM_FILES_COMMONX86 = 0x002c -CSIDL_COMMON_TEMPLATES = 0x002d -CSIDL_COMMON_DOCUMENTS = 0x002e -CSIDL_COMMON_ADMINTOOLS = 0x002f -CSIDL_ADMINTOOLS = 0x0030 -CSIDL_CONNECTIONS = 0x0031 -CSIDL_COMMON_MUSIC = 0x0035 -CSIDL_COMMON_PICTURES = 0x0036 -CSIDL_COMMON_VIDEO = 0x0037 -CSIDL_RESOURCES = 0x0038 -CSIDL_RESOURCES_LOCALIZED = 0x0039 -CSIDL_COMMON_OEM_LINKS = 0x003a -CSIDL_CDBURN_AREA = 0x003b -CSIDL_COMPUTERSNEARME = 0x003d -CSIDL_PROFILES = 0x003e - -CSIDL_FOLDER_MASK = 0x00ff - -CSIDL_FLAG_PER_USER_INIT = 0x0800 -CSIDL_FLAG_NO_ALIAS = 0x1000 -CSIDL_FLAG_DONT_VERIFY = 0x4000 -CSIDL_FLAG_CREATE = 0x8000 - -CSIDL_FLAG_MASK = 0xff00 - -#--- Structures --------------------------------------------------------------- - -# typedef struct _SHELLEXECUTEINFO { -# DWORD cbSize; -# ULONG fMask; -# HWND hwnd; -# LPCTSTR lpVerb; -# 
LPCTSTR lpFile; -# LPCTSTR lpParameters; -# LPCTSTR lpDirectory; -# int nShow; -# HINSTANCE hInstApp; -# LPVOID lpIDList; -# LPCTSTR lpClass; -# HKEY hkeyClass; -# DWORD dwHotKey; -# union { -# HANDLE hIcon; -# HANDLE hMonitor; -# } DUMMYUNIONNAME; -# HANDLE hProcess; -# } SHELLEXECUTEINFO, *LPSHELLEXECUTEINFO; - -class SHELLEXECUTEINFO(Structure): - _fields_ = [ - ("cbSize", DWORD), - ("fMask", ULONG), - ("hwnd", HWND), - ("lpVerb", LPSTR), - ("lpFile", LPSTR), - ("lpParameters", LPSTR), - ("lpDirectory", LPSTR), - ("nShow", ctypes.c_int), - ("hInstApp", HINSTANCE), - ("lpIDList", LPVOID), - ("lpClass", LPSTR), - ("hkeyClass", HKEY), - ("dwHotKey", DWORD), - ("hIcon", HANDLE), - ("hProcess", HANDLE), - ] - - def __get_hMonitor(self): - return self.hIcon - def __set_hMonitor(self, hMonitor): - self.hIcon = hMonitor - hMonitor = property(__get_hMonitor, __set_hMonitor) - -LPSHELLEXECUTEINFO = POINTER(SHELLEXECUTEINFO) - -#--- shell32.dll -------------------------------------------------------------- - -# LPWSTR *CommandLineToArgvW( -# LPCWSTR lpCmdLine, -# int *pNumArgs -# ); -def CommandLineToArgvW(lpCmdLine): - _CommandLineToArgvW = windll.shell32.CommandLineToArgvW - _CommandLineToArgvW.argtypes = [LPVOID, POINTER(ctypes.c_int)] - _CommandLineToArgvW.restype = LPVOID - - if not lpCmdLine: - lpCmdLine = None - argc = ctypes.c_int(0) - vptr = ctypes.windll.shell32.CommandLineToArgvW(lpCmdLine, byref(argc)) - if vptr == NULL: - raise ctypes.WinError() - argv = vptr - try: - argc = argc.value - if argc <= 0: - raise ctypes.WinError() - argv = ctypes.cast(argv, ctypes.POINTER(LPWSTR * argc) ) - argv = [ argv.contents[i] for i in compat.xrange(0, argc) ] - finally: - if vptr is not None: - LocalFree(vptr) - return argv - -def CommandLineToArgvA(lpCmdLine): - t_ansi = GuessStringType.t_ansi - t_unicode = GuessStringType.t_unicode - if isinstance(lpCmdLine, t_ansi): - cmdline = t_unicode(lpCmdLine) - else: - cmdline = lpCmdLine - return [t_ansi(x) for x in CommandLineToArgvW(cmdline)] - -CommandLineToArgv = GuessStringType(CommandLineToArgvA, CommandLineToArgvW) - -# HINSTANCE ShellExecute( -# HWND hwnd, -# LPCTSTR lpOperation, -# LPCTSTR lpFile, -# LPCTSTR lpParameters, -# LPCTSTR lpDirectory, -# INT nShowCmd -# ); -def ShellExecuteA(hwnd = None, lpOperation = None, lpFile = None, lpParameters = None, lpDirectory = None, nShowCmd = None): - _ShellExecuteA = windll.shell32.ShellExecuteA - _ShellExecuteA.argtypes = [HWND, LPSTR, LPSTR, LPSTR, LPSTR, INT] - _ShellExecuteA.restype = HINSTANCE - - if not nShowCmd: - nShowCmd = 0 - success = _ShellExecuteA(hwnd, lpOperation, lpFile, lpParameters, lpDirectory, nShowCmd) - success = ctypes.cast(success, c_int) - success = success.value - if not success > 32: # weird! isn't it? - raise ctypes.WinError(success) - -def ShellExecuteW(hwnd = None, lpOperation = None, lpFile = None, lpParameters = None, lpDirectory = None, nShowCmd = None): - _ShellExecuteW = windll.shell32.ShellExecuteW - _ShellExecuteW.argtypes = [HWND, LPWSTR, LPWSTR, LPWSTR, LPWSTR, INT] - _ShellExecuteW.restype = HINSTANCE - - if not nShowCmd: - nShowCmd = 0 - success = _ShellExecuteW(hwnd, lpOperation, lpFile, lpParameters, lpDirectory, nShowCmd) - success = ctypes.cast(success, c_int) - success = success.value - if not success > 32: # weird! isn't it? 
- raise ctypes.WinError(success) - -ShellExecute = GuessStringType(ShellExecuteA, ShellExecuteW) - -# BOOL ShellExecuteEx( -# __inout LPSHELLEXECUTEINFO lpExecInfo -# ); -def ShellExecuteEx(lpExecInfo): - if isinstance(lpExecInfo, SHELLEXECUTEINFOA): - ShellExecuteExA(lpExecInfo) - elif isinstance(lpExecInfo, SHELLEXECUTEINFOW): - ShellExecuteExW(lpExecInfo) - else: - raise TypeError("Expected SHELLEXECUTEINFOA or SHELLEXECUTEINFOW, got %s instead" % type(lpExecInfo)) - -def ShellExecuteExA(lpExecInfo): - _ShellExecuteExA = windll.shell32.ShellExecuteExA - _ShellExecuteExA.argtypes = [LPSHELLEXECUTEINFOA] - _ShellExecuteExA.restype = BOOL - _ShellExecuteExA.errcheck = RaiseIfZero - _ShellExecuteExA(byref(lpExecInfo)) - -def ShellExecuteExW(lpExecInfo): - _ShellExecuteExW = windll.shell32.ShellExecuteExW - _ShellExecuteExW.argtypes = [LPSHELLEXECUTEINFOW] - _ShellExecuteExW.restype = BOOL - _ShellExecuteExW.errcheck = RaiseIfZero - _ShellExecuteExW(byref(lpExecInfo)) - -# HINSTANCE FindExecutable( -# __in LPCTSTR lpFile, -# __in_opt LPCTSTR lpDirectory, -# __out LPTSTR lpResult -# ); -def FindExecutableA(lpFile, lpDirectory = None): - _FindExecutableA = windll.shell32.FindExecutableA - _FindExecutableA.argtypes = [LPSTR, LPSTR, LPSTR] - _FindExecutableA.restype = HINSTANCE - - lpResult = ctypes.create_string_buffer(MAX_PATH) - success = _FindExecutableA(lpFile, lpDirectory, lpResult) - success = ctypes.cast(success, ctypes.c_void_p) - success = success.value - if not success > 32: # weird! isn't it? - raise ctypes.WinError(success) - return lpResult.value - -def FindExecutableW(lpFile, lpDirectory = None): - _FindExecutableW = windll.shell32.FindExecutableW - _FindExecutableW.argtypes = [LPWSTR, LPWSTR, LPWSTR] - _FindExecutableW.restype = HINSTANCE - - lpResult = ctypes.create_unicode_buffer(MAX_PATH) - success = _FindExecutableW(lpFile, lpDirectory, lpResult) - success = ctypes.cast(success, ctypes.c_void_p) - success = success.value - if not success > 32: # weird! isn't it? - raise ctypes.WinError(success) - return lpResult.value - -FindExecutable = GuessStringType(FindExecutableA, FindExecutableW) - -# HRESULT SHGetFolderPath( -# __in HWND hwndOwner, -# __in int nFolder, -# __in HANDLE hToken, -# __in DWORD dwFlags, -# __out LPTSTR pszPath -# ); -def SHGetFolderPathA(nFolder, hToken = None, dwFlags = SHGFP_TYPE_CURRENT): - _SHGetFolderPathA = windll.shell32.SHGetFolderPathA # shfolder.dll in older win versions - _SHGetFolderPathA.argtypes = [HWND, ctypes.c_int, HANDLE, DWORD, LPSTR] - _SHGetFolderPathA.restype = HRESULT - _SHGetFolderPathA.errcheck = RaiseIfNotZero # S_OK == 0 - - pszPath = ctypes.create_string_buffer(MAX_PATH + 1) - _SHGetFolderPathA(None, nFolder, hToken, dwFlags, pszPath) - return pszPath.value - -def SHGetFolderPathW(nFolder, hToken = None, dwFlags = SHGFP_TYPE_CURRENT): - _SHGetFolderPathW = windll.shell32.SHGetFolderPathW # shfolder.dll in older win versions - _SHGetFolderPathW.argtypes = [HWND, ctypes.c_int, HANDLE, DWORD, LPWSTR] - _SHGetFolderPathW.restype = HRESULT - _SHGetFolderPathW.errcheck = RaiseIfNotZero # S_OK == 0 - - pszPath = ctypes.create_unicode_buffer(MAX_PATH + 1) - _SHGetFolderPathW(None, nFolder, hToken, dwFlags, pszPath) - return pszPath.value - -SHGetFolderPath = DefaultStringType(SHGetFolderPathA, SHGetFolderPathW) - -# BOOL IsUserAnAdmin(void); -def IsUserAnAdmin(): - # Supposedly, IsUserAnAdmin() is deprecated in Vista. - # But I tried it on Windows 7 and it works just fine. 
- _IsUserAnAdmin = windll.shell32.IsUserAnAdmin - _IsUserAnAdmin.argtypes = [] - _IsUserAnAdmin.restype = bool - return _IsUserAnAdmin() - -#============================================================================== -# This calculates the list of exported symbols. -_all = set(vars().keys()).difference(_all) -__all__ = [_x for _x in _all if not _x.startswith('_')] -__all__.sort() -#============================================================================== diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/masked_conv.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/masked_conv.py deleted file mode 100644 index cd514cc204c1d571ea5dc7e74b038c0f477a008b..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/masked_conv.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['masked_im2col_forward', 'masked_col2im_forward']) - - -class MaskedConv2dFunction(Function): - - @staticmethod - def symbolic(g, features, mask, weight, bias, padding, stride): - return g.op( - 'mmcv::MMCVMaskedConv2d', - features, - mask, - weight, - bias, - padding_i=padding, - stride_i=stride) - - @staticmethod - def forward(ctx, features, mask, weight, bias, padding=0, stride=1): - assert mask.dim() == 3 and mask.size(0) == 1 - assert features.dim() == 4 and features.size(0) == 1 - assert features.size()[2:] == mask.size()[1:] - pad_h, pad_w = _pair(padding) - stride_h, stride_w = _pair(stride) - if stride_h != 1 or stride_w != 1: - raise ValueError( - 'Stride could not only be 1 in masked_conv2d currently.') - out_channel, in_channel, kernel_h, kernel_w = weight.size() - - batch_size = features.size(0) - out_h = int( - math.floor((features.size(2) + 2 * pad_h - - (kernel_h - 1) - 1) / stride_h + 1)) - out_w = int( - math.floor((features.size(3) + 2 * pad_w - - (kernel_h - 1) - 1) / stride_w + 1)) - mask_inds = torch.nonzero(mask[0] > 0, as_tuple=False) - output = features.new_zeros(batch_size, out_channel, out_h, out_w) - if mask_inds.numel() > 0: - mask_h_idx = mask_inds[:, 0].contiguous() - mask_w_idx = mask_inds[:, 1].contiguous() - data_col = features.new_zeros(in_channel * kernel_h * kernel_w, - mask_inds.size(0)) - ext_module.masked_im2col_forward( - features, - mask_h_idx, - mask_w_idx, - data_col, - kernel_h=kernel_h, - kernel_w=kernel_w, - pad_h=pad_h, - pad_w=pad_w) - - masked_output = torch.addmm(1, bias[:, None], 1, - weight.view(out_channel, -1), data_col) - ext_module.masked_col2im_forward( - masked_output, - mask_h_idx, - mask_w_idx, - output, - height=out_h, - width=out_w, - channels=out_channel) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - return (None, ) * 5 - - -masked_conv2d = MaskedConv2dFunction.apply - - -class MaskedConv2d(nn.Conv2d): - """A MaskedConv2d which inherits the official Conv2d. - - The masked forward doesn't implement the backward function and only - supports the stride parameter to be 1 currently. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super(MaskedConv2d, - self).__init__(in_channels, out_channels, kernel_size, stride, - padding, dilation, groups, bias) - - def forward(self, input, mask=None): - if mask is None: # fallback to the normal Conv2d - return super(MaskedConv2d, self).forward(input) - else: - return masked_conv2d(input, mask, self.weight, self.bias, - self.padding) diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py deleted file mode 100644 index e46254f7b37f72e7d87672d70fd4b2f393ad7658..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py +++ /dev/null @@ -1,135 +0,0 @@ -"""Compute depth maps for images in the input folder. -""" -import os -import glob -import utils -import cv2 -import argparse - -import tensorflow as tf - -from transforms import Resize, NormalizeImage, PrepareForNet - -def run(input_path, output_path, model_path, model_type="large"): - """Run MonoDepthNN to compute depth maps. - - Args: - input_path (str): path to input folder - output_path (str): path to output folder - model_path (str): path to saved model - """ - print("initialize") - - # the runtime initialization will not allocate all memory on the device to avoid out of GPU memory - gpus = tf.config.experimental.list_physical_devices('GPU') - if gpus: - try: - for gpu in gpus: - #tf.config.experimental.set_memory_growth(gpu, True) - tf.config.experimental.set_virtual_device_configuration(gpu, - [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4000)]) - except RuntimeError as e: - print(e) - - # network resolution - if model_type == "large": - net_w, net_h = 384, 384 - elif model_type == "small": - net_w, net_h = 256, 256 - else: - print(f"model_type '{model_type}' not implemented, use: --model_type large") - assert False - - # load network - graph_def = tf.compat.v1.GraphDef() - with tf.io.gfile.GFile(model_path, 'rb') as f: - graph_def.ParseFromString(f.read()) - tf.import_graph_def(graph_def, name='') - - - model_operations = tf.compat.v1.get_default_graph().get_operations() - input_node = '0:0' - output_layer = model_operations[len(model_operations) - 1].name + ':0' - print("Last layer name: ", output_layer) - - resize_image = Resize( - net_w, - net_h, - resize_target=None, - keep_aspect_ratio=False, - ensure_multiple_of=32, - resize_method="upper_bound", - image_interpolation_method=cv2.INTER_CUBIC, - ) - - def compose2(f1, f2): - return lambda x: f2(f1(x)) - - transform = compose2(resize_image, PrepareForNet()) - - # get input - img_names = glob.glob(os.path.join(input_path, "*")) - num_images = len(img_names) - - # create output folder - os.makedirs(output_path, exist_ok=True) - - print("start processing") - - with tf.compat.v1.Session() as sess: - try: - # load images - for ind, img_name in enumerate(img_names): - - print(" processing {} ({}/{})".format(img_name, ind + 1, num_images)) - - # input - img = utils.read_image(img_name) - img_input = transform({"image": img})["image"] - - # compute - prob_tensor = sess.graph.get_tensor_by_name(output_layer) - prediction, = sess.run(prob_tensor, {input_node: [img_input] }) - prediction = prediction.reshape(net_h, net_w) - prediction = cv2.resize(prediction, (img.shape[1], img.shape[0]), 
interpolation=cv2.INTER_CUBIC) - - # output - filename = os.path.join( - output_path, os.path.splitext(os.path.basename(img_name))[0] - ) - utils.write_depth(filename, prediction, bits=2) - - except KeyError: - print ("Couldn't find input node: ' + input_node + ' or output layer: " + output_layer + ".") - exit(-1) - - print("finished") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument('-i', '--input_path', - default='input', - help='folder with input images' - ) - - parser.add_argument('-o', '--output_path', - default='output', - help='folder for output images' - ) - - parser.add_argument('-m', '--model_weights', - default='model-f6b98070.pb', - help='path to the trained weights of model' - ) - - parser.add_argument('-t', '--model_type', - default='large', - help='model type: large or small' - ) - - args = parser.parse_args() - - # compute depth maps - run(args.input_path, args.output_path, args.model_weights, args.model_type) diff --git a/spaces/TH5314/newbing/src/components/ui/textarea.tsx b/spaces/TH5314/newbing/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( - "; - support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue; - - // Support: IE <=9 only - // IE <=9 replaces "; - support.option = !!div.lastChild; -} )(); - - -// We have to close these tags to support XHTML (trac-13200) -var wrapMap = { - - // XHTML parsers do not magically insert elements in the - // same way that tag soup parsers do. So we cannot shorten - // this by omitting or other required elements. - thead: [ 1, "", "
    " ], - col: [ 2, "", "
    " ], - tr: [ 2, "", "
    " ], - td: [ 3, "", "
    " ], - - _default: [ 0, "", "" ] -}; - -wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; -wrapMap.th = wrapMap.td; - -// Support: IE <=9 only -if ( !support.option ) { - wrapMap.optgroup = wrapMap.option = [ 1, "" ]; -} - - -function getAll( context, tag ) { - - // Support: IE <=9 - 11 only - // Use typeof to avoid zero-argument method invocation on host objects (trac-15151) - var ret; - - if ( typeof context.getElementsByTagName !== "undefined" ) { - ret = context.getElementsByTagName( tag || "*" ); - - } else if ( typeof context.querySelectorAll !== "undefined" ) { - ret = context.querySelectorAll( tag || "*" ); - - } else { - ret = []; - } - - if ( tag === undefined || tag && nodeName( context, tag ) ) { - return jQuery.merge( [ context ], ret ); - } - - return ret; -} - - -// Mark scripts as having already been evaluated -function setGlobalEval( elems, refElements ) { - var i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - dataPriv.set( - elems[ i ], - "globalEval", - !refElements || dataPriv.get( refElements[ i ], "globalEval" ) - ); - } -} - - -var rhtml = /<|&#?\w+;/; - -function buildFragment( elems, context, scripts, selection, ignored ) { - var elem, tmp, tag, wrap, attached, j, - fragment = context.createDocumentFragment(), - nodes = [], - i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - elem = elems[ i ]; - - if ( elem || elem === 0 ) { - - // Add nodes directly - if ( toType( elem ) === "object" ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem ); - - // Convert non-html into a text node - } else if ( !rhtml.test( elem ) ) { - nodes.push( context.createTextNode( elem ) ); - - // Convert html into DOM nodes - } else { - tmp = tmp || fragment.appendChild( context.createElement( "div" ) ); - - // Deserialize a standard representation - tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase(); - wrap = wrapMap[ tag ] || wrapMap._default; - tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ]; - - // Descend through wrappers to the right content - j = wrap[ 0 ]; - while ( j-- ) { - tmp = tmp.lastChild; - } - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, tmp.childNodes ); - - // Remember the top-level container - tmp = fragment.firstChild; - - // Ensure the created nodes are orphaned (trac-12392) - tmp.textContent = ""; - } - } - } - - // Remove wrapper from fragment - fragment.textContent = ""; - - i = 0; - while ( ( elem = nodes[ i++ ] ) ) { - - // Skip elements already in the context collection (trac-4087) - if ( selection && jQuery.inArray( elem, selection ) > -1 ) { - if ( ignored ) { - ignored.push( elem ); - } - continue; - } - - attached = isAttached( elem ); - - // Append to fragment - tmp = getAll( fragment.appendChild( elem ), "script" ); - - // Preserve script evaluation history - if ( attached ) { - setGlobalEval( tmp ); - } - - // Capture executables - if ( scripts ) { - j = 0; - while ( ( elem = tmp[ j++ ] ) ) { - if ( rscriptType.test( elem.type || "" ) ) { - scripts.push( elem ); - } - } - } - } - - return fragment; -} - - -var rtypenamespace = /^([^.]*)(?:\.(.+)|)/; - -function returnTrue() { - return true; -} - -function returnFalse() { - return false; -} - -function on( elem, types, selector, data, fn, one ) { - var origFn, type; - - // Types can be a map of types/handlers - if ( typeof types 
=== "object" ) { - - // ( types-Object, selector, data ) - if ( typeof selector !== "string" ) { - - // ( types-Object, data ) - data = data || selector; - selector = undefined; - } - for ( type in types ) { - on( elem, type, selector, data, types[ type ], one ); - } - return elem; - } - - if ( data == null && fn == null ) { - - // ( types, fn ) - fn = selector; - data = selector = undefined; - } else if ( fn == null ) { - if ( typeof selector === "string" ) { - - // ( types, selector, fn ) - fn = data; - data = undefined; - } else { - - // ( types, data, fn ) - fn = data; - data = selector; - selector = undefined; - } - } - if ( fn === false ) { - fn = returnFalse; - } else if ( !fn ) { - return elem; - } - - if ( one === 1 ) { - origFn = fn; - fn = function( event ) { - - // Can use an empty set, since event contains the info - jQuery().off( event ); - return origFn.apply( this, arguments ); - }; - - // Use same guid so caller can remove using origFn - fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); - } - return elem.each( function() { - jQuery.event.add( this, types, fn, data, selector ); - } ); -} - -/* - * Helper functions for managing events -- not part of the public interface. - * Props to Dean Edwards' addEvent library for many of the ideas. - */ -jQuery.event = { - - global: {}, - - add: function( elem, types, handler, data, selector ) { - - var handleObjIn, eventHandle, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.get( elem ); - - // Only attach events to objects that accept data - if ( !acceptData( elem ) ) { - return; - } - - // Caller can pass in an object of custom data in lieu of the handler - if ( handler.handler ) { - handleObjIn = handler; - handler = handleObjIn.handler; - selector = handleObjIn.selector; - } - - // Ensure that invalid selectors throw exceptions at attach time - // Evaluate against documentElement in case elem is a non-element node (e.g., document) - if ( selector ) { - jQuery.find.matchesSelector( documentElement, selector ); - } - - // Make sure that the handler has a unique ID, used to find/remove it later - if ( !handler.guid ) { - handler.guid = jQuery.guid++; - } - - // Init the element's event structure and main handler, if this is the first - if ( !( events = elemData.events ) ) { - events = elemData.events = Object.create( null ); - } - if ( !( eventHandle = elemData.handle ) ) { - eventHandle = elemData.handle = function( e ) { - - // Discard the second event of a jQuery.event.trigger() and - // when an event is called after a page has unloaded - return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? - jQuery.event.dispatch.apply( elem, arguments ) : undefined; - }; - } - - // Handle multiple events separated by a space - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // There *must* be a type, no attaching namespace-only handlers - if ( !type ) { - continue; - } - - // If event changes its type, use the special event handlers for the changed type - special = jQuery.event.special[ type ] || {}; - - // If selector defined, determine special event api type, otherwise given type - type = ( selector ? 
special.delegateType : special.bindType ) || type; - - // Update special based on newly reset type - special = jQuery.event.special[ type ] || {}; - - // handleObj is passed to all event handlers - handleObj = jQuery.extend( { - type: type, - origType: origType, - data: data, - handler: handler, - guid: handler.guid, - selector: selector, - needsContext: selector && jQuery.expr.match.needsContext.test( selector ), - namespace: namespaces.join( "." ) - }, handleObjIn ); - - // Init the event handler queue if we're the first - if ( !( handlers = events[ type ] ) ) { - handlers = events[ type ] = []; - handlers.delegateCount = 0; - - // Only use addEventListener if the special events handler returns false - if ( !special.setup || - special.setup.call( elem, data, namespaces, eventHandle ) === false ) { - - if ( elem.addEventListener ) { - elem.addEventListener( type, eventHandle ); - } - } - } - - if ( special.add ) { - special.add.call( elem, handleObj ); - - if ( !handleObj.handler.guid ) { - handleObj.handler.guid = handler.guid; - } - } - - // Add to the element's handler list, delegates in front - if ( selector ) { - handlers.splice( handlers.delegateCount++, 0, handleObj ); - } else { - handlers.push( handleObj ); - } - - // Keep track of which events have ever been used, for event optimization - jQuery.event.global[ type ] = true; - } - - }, - - // Detach an event or set of events from an element - remove: function( elem, types, handler, selector, mappedTypes ) { - - var j, origCount, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); - - if ( !elemData || !( events = elemData.events ) ) { - return; - } - - // Once for each type.namespace in types; type may be omitted - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // Unbind all events (on this namespace, if provided) for the element - if ( !type ) { - for ( type in events ) { - jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); - } - continue; - } - - special = jQuery.event.special[ type ] || {}; - type = ( selector ? 
special.delegateType : special.bindType ) || type; - handlers = events[ type ] || []; - tmp = tmp[ 2 ] && - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); - - // Remove matching events - origCount = j = handlers.length; - while ( j-- ) { - handleObj = handlers[ j ]; - - if ( ( mappedTypes || origType === handleObj.origType ) && - ( !handler || handler.guid === handleObj.guid ) && - ( !tmp || tmp.test( handleObj.namespace ) ) && - ( !selector || selector === handleObj.selector || - selector === "**" && handleObj.selector ) ) { - handlers.splice( j, 1 ); - - if ( handleObj.selector ) { - handlers.delegateCount--; - } - if ( special.remove ) { - special.remove.call( elem, handleObj ); - } - } - } - - // Remove generic event handler if we removed something and no more handlers exist - // (avoids potential for endless recursion during removal of special event handlers) - if ( origCount && !handlers.length ) { - if ( !special.teardown || - special.teardown.call( elem, namespaces, elemData.handle ) === false ) { - - jQuery.removeEvent( elem, type, elemData.handle ); - } - - delete events[ type ]; - } - } - - // Remove data and the expando if it's no longer used - if ( jQuery.isEmptyObject( events ) ) { - dataPriv.remove( elem, "handle events" ); - } - }, - - dispatch: function( nativeEvent ) { - - var i, j, ret, matched, handleObj, handlerQueue, - args = new Array( arguments.length ), - - // Make a writable jQuery.Event from the native event object - event = jQuery.event.fix( nativeEvent ), - - handlers = ( - dataPriv.get( this, "events" ) || Object.create( null ) - )[ event.type ] || [], - special = jQuery.event.special[ event.type ] || {}; - - // Use the fix-ed jQuery.Event rather than the (read-only) native event - args[ 0 ] = event; - - for ( i = 1; i < arguments.length; i++ ) { - args[ i ] = arguments[ i ]; - } - - event.delegateTarget = this; - - // Call the preDispatch hook for the mapped type, and let it bail if desired - if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { - return; - } - - // Determine handlers - handlerQueue = jQuery.event.handlers.call( this, event, handlers ); - - // Run delegates first; they may want to stop propagation beneath us - i = 0; - while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { - event.currentTarget = matched.elem; - - j = 0; - while ( ( handleObj = matched.handlers[ j++ ] ) && - !event.isImmediatePropagationStopped() ) { - - // If the event is namespaced, then each handler is only invoked if it is - // specially universal or its namespaces are a superset of the event's. 
- if ( !event.rnamespace || handleObj.namespace === false || - event.rnamespace.test( handleObj.namespace ) ) { - - event.handleObj = handleObj; - event.data = handleObj.data; - - ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || - handleObj.handler ).apply( matched.elem, args ); - - if ( ret !== undefined ) { - if ( ( event.result = ret ) === false ) { - event.preventDefault(); - event.stopPropagation(); - } - } - } - } - } - - // Call the postDispatch hook for the mapped type - if ( special.postDispatch ) { - special.postDispatch.call( this, event ); - } - - return event.result; - }, - - handlers: function( event, handlers ) { - var i, handleObj, sel, matchedHandlers, matchedSelectors, - handlerQueue = [], - delegateCount = handlers.delegateCount, - cur = event.target; - - // Find delegate handlers - if ( delegateCount && - - // Support: IE <=9 - // Black-hole SVG instance trees (trac-13180) - cur.nodeType && - - // Support: Firefox <=42 - // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) - // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click - // Support: IE 11 only - // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) - !( event.type === "click" && event.button >= 1 ) ) { - - for ( ; cur !== this; cur = cur.parentNode || this ) { - - // Don't check non-elements (trac-13208) - // Don't process clicks on disabled elements (trac-6911, trac-8165, trac-11382, trac-11764) - if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { - matchedHandlers = []; - matchedSelectors = {}; - for ( i = 0; i < delegateCount; i++ ) { - handleObj = handlers[ i ]; - - // Don't conflict with Object.prototype properties (trac-13203) - sel = handleObj.selector + " "; - - if ( matchedSelectors[ sel ] === undefined ) { - matchedSelectors[ sel ] = handleObj.needsContext ? - jQuery( sel, this ).index( cur ) > -1 : - jQuery.find( sel, this, null, [ cur ] ).length; - } - if ( matchedSelectors[ sel ] ) { - matchedHandlers.push( handleObj ); - } - } - if ( matchedHandlers.length ) { - handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); - } - } - } - } - - // Add the remaining (directly-bound) handlers - cur = this; - if ( delegateCount < handlers.length ) { - handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); - } - - return handlerQueue; - }, - - addProp: function( name, hook ) { - Object.defineProperty( jQuery.Event.prototype, name, { - enumerable: true, - configurable: true, - - get: isFunction( hook ) ? - function() { - if ( this.originalEvent ) { - return hook( this.originalEvent ); - } - } : - function() { - if ( this.originalEvent ) { - return this.originalEvent[ name ]; - } - }, - - set: function( value ) { - Object.defineProperty( this, name, { - enumerable: true, - configurable: true, - writable: true, - value: value - } ); - } - } ); - }, - - fix: function( originalEvent ) { - return originalEvent[ jQuery.expando ] ? - originalEvent : - new jQuery.Event( originalEvent ); - }, - - special: { - load: { - - // Prevent triggered image.load events from bubbling to window.load - noBubble: true - }, - click: { - - // Utilize native event to ensure correct state for checkable inputs - setup: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. 
- var el = this || data; - - // Claim the first handler - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - // dataPriv.set( el, "click", ... ) - leverageNative( el, "click", true ); - } - - // Return false to allow normal processing in the caller - return false; - }, - trigger: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. - var el = this || data; - - // Force setup before triggering a click - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - leverageNative( el, "click" ); - } - - // Return non-false to allow normal event-path propagation - return true; - }, - - // For cross-browser consistency, suppress native .click() on links - // Also prevent it if we're currently inside a leveraged native-event stack - _default: function( event ) { - var target = event.target; - return rcheckableType.test( target.type ) && - target.click && nodeName( target, "input" ) && - dataPriv.get( target, "click" ) || - nodeName( target, "a" ); - } - }, - - beforeunload: { - postDispatch: function( event ) { - - // Support: Firefox 20+ - // Firefox doesn't alert if the returnValue field is not set. - if ( event.result !== undefined && event.originalEvent ) { - event.originalEvent.returnValue = event.result; - } - } - } - } -}; - -// Ensure the presence of an event listener that handles manually-triggered -// synthetic events by interrupting progress until reinvoked in response to -// *native* events that it fires directly, ensuring that state changes have -// already occurred before other listeners are invoked. -function leverageNative( el, type, isSetup ) { - - // Missing `isSetup` indicates a trigger call, which must force setup through jQuery.event.add - if ( !isSetup ) { - if ( dataPriv.get( el, type ) === undefined ) { - jQuery.event.add( el, type, returnTrue ); - } - return; - } - - // Register the controller as a special universal handler for all event namespaces - dataPriv.set( el, type, false ); - jQuery.event.add( el, type, { - namespace: false, - handler: function( event ) { - var result, - saved = dataPriv.get( this, type ); - - if ( ( event.isTrigger & 1 ) && this[ type ] ) { - - // Interrupt processing of the outer synthetic .trigger()ed event - if ( !saved ) { - - // Store arguments for use when handling the inner native event - // There will always be at least one argument (an event object), so this array - // will not be confused with a leftover capture object. - saved = slice.call( arguments ); - dataPriv.set( this, type, saved ); - - // Trigger the native event and capture its result - this[ type ](); - result = dataPriv.get( this, type ); - dataPriv.set( this, type, false ); - - if ( saved !== result ) { - - // Cancel the outer synthetic event - event.stopImmediatePropagation(); - event.preventDefault(); - - return result; - } - - // If this is an inner synthetic event for an event with a bubbling surrogate - // (focus or blur), assume that the surrogate already propagated from triggering - // the native event and prevent that from happening again here. - // This technically gets the ordering wrong w.r.t. to `.trigger()` (in which the - // bubbling surrogate propagates *after* the non-bubbling base), but that seems - // less bad than duplication. 
- } else if ( ( jQuery.event.special[ type ] || {} ).delegateType ) { - event.stopPropagation(); - } - - // If this is a native event triggered above, everything is now in order - // Fire an inner synthetic event with the original arguments - } else if ( saved ) { - - // ...and capture the result - dataPriv.set( this, type, jQuery.event.trigger( - saved[ 0 ], - saved.slice( 1 ), - this - ) ); - - // Abort handling of the native event by all jQuery handlers while allowing - // native handlers on the same element to run. On target, this is achieved - // by stopping immediate propagation just on the jQuery event. However, - // the native event is re-wrapped by a jQuery one on each level of the - // propagation so the only way to stop it for jQuery is to stop it for - // everyone via native `stopPropagation()`. This is not a problem for - // focus/blur which don't bubble, but it does also stop click on checkboxes - // and radios. We accept this limitation. - event.stopPropagation(); - event.isImmediatePropagationStopped = returnTrue; - } - } - } ); -} - -jQuery.removeEvent = function( elem, type, handle ) { - - // This "if" is needed for plain objects - if ( elem.removeEventListener ) { - elem.removeEventListener( type, handle ); - } -}; - -jQuery.Event = function( src, props ) { - - // Allow instantiation without the 'new' keyword - if ( !( this instanceof jQuery.Event ) ) { - return new jQuery.Event( src, props ); - } - - // Event object - if ( src && src.type ) { - this.originalEvent = src; - this.type = src.type; - - // Events bubbling up the document may have been marked as prevented - // by a handler lower down the tree; reflect the correct value. - this.isDefaultPrevented = src.defaultPrevented || - src.defaultPrevented === undefined && - - // Support: Android <=2.3 only - src.returnValue === false ? - returnTrue : - returnFalse; - - // Create target properties - // Support: Safari <=6 - 7 only - // Target should not be a text node (trac-504, trac-13143) - this.target = ( src.target && src.target.nodeType === 3 ) ? 
- src.target.parentNode : - src.target; - - this.currentTarget = src.currentTarget; - this.relatedTarget = src.relatedTarget; - - // Event type - } else { - this.type = src; - } - - // Put explicitly provided properties onto the event object - if ( props ) { - jQuery.extend( this, props ); - } - - // Create a timestamp if incoming event doesn't have one - this.timeStamp = src && src.timeStamp || Date.now(); - - // Mark it as fixed - this[ jQuery.expando ] = true; -}; - -// jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding -// https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html -jQuery.Event.prototype = { - constructor: jQuery.Event, - isDefaultPrevented: returnFalse, - isPropagationStopped: returnFalse, - isImmediatePropagationStopped: returnFalse, - isSimulated: false, - - preventDefault: function() { - var e = this.originalEvent; - - this.isDefaultPrevented = returnTrue; - - if ( e && !this.isSimulated ) { - e.preventDefault(); - } - }, - stopPropagation: function() { - var e = this.originalEvent; - - this.isPropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopPropagation(); - } - }, - stopImmediatePropagation: function() { - var e = this.originalEvent; - - this.isImmediatePropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopImmediatePropagation(); - } - - this.stopPropagation(); - } -}; - -// Includes all common event props including KeyEvent and MouseEvent specific props -jQuery.each( { - altKey: true, - bubbles: true, - cancelable: true, - changedTouches: true, - ctrlKey: true, - detail: true, - eventPhase: true, - metaKey: true, - pageX: true, - pageY: true, - shiftKey: true, - view: true, - "char": true, - code: true, - charCode: true, - key: true, - keyCode: true, - button: true, - buttons: true, - clientX: true, - clientY: true, - offsetX: true, - offsetY: true, - pointerId: true, - pointerType: true, - screenX: true, - screenY: true, - targetTouches: true, - toElement: true, - touches: true, - which: true -}, jQuery.event.addProp ); - -jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { - - function focusMappedHandler( nativeEvent ) { - if ( document.documentMode ) { - - // Support: IE 11+ - // Attach a single focusin/focusout handler on the document while someone wants - // focus/blur. This is because the former are synchronous in IE while the latter - // are async. In other browsers, all those handlers are invoked synchronously. - - // `handle` from private data would already wrap the event, but we need - // to change the `type` here. - var handle = dataPriv.get( this, "handle" ), - event = jQuery.event.fix( nativeEvent ); - event.type = nativeEvent.type === "focusin" ? "focus" : "blur"; - event.isSimulated = true; - - // First, handle focusin/focusout - handle( nativeEvent ); - - // ...then, handle focus/blur - // - // focus/blur don't bubble while focusin/focusout do; simulate the former by only - // invoking the handler at the lower level. - if ( event.target === event.currentTarget ) { - - // The setup part calls `leverageNative`, which, in turn, calls - // `jQuery.event.add`, so event handle will already have been set - // by this point. - handle( event ); - } - } else { - - // For non-IE browsers, attach a single capturing handler on the document - // while someone wants focusin/focusout. 
- jQuery.event.simulate( delegateType, nativeEvent.target, - jQuery.event.fix( nativeEvent ) ); - } - } - - jQuery.event.special[ type ] = { - - // Utilize native event if possible so blur/focus sequence is correct - setup: function() { - - var attaches; - - // Claim the first handler - // dataPriv.set( this, "focus", ... ) - // dataPriv.set( this, "blur", ... ) - leverageNative( this, type, true ); - - if ( document.documentMode ) { - - // Support: IE 9 - 11+ - // We use the same native handler for focusin & focus (and focusout & blur) - // so we need to coordinate setup & teardown parts between those events. - // Use `delegateType` as the key as `type` is already used by `leverageNative`. - attaches = dataPriv.get( this, delegateType ); - if ( !attaches ) { - this.addEventListener( delegateType, focusMappedHandler ); - } - dataPriv.set( this, delegateType, ( attaches || 0 ) + 1 ); - } else { - - // Return false to allow normal processing in the caller - return false; - } - }, - trigger: function() { - - // Force setup before trigger - leverageNative( this, type ); - - // Return non-false to allow normal event-path propagation - return true; - }, - - teardown: function() { - var attaches; - - if ( document.documentMode ) { - attaches = dataPriv.get( this, delegateType ) - 1; - if ( !attaches ) { - this.removeEventListener( delegateType, focusMappedHandler ); - dataPriv.remove( this, delegateType ); - } else { - dataPriv.set( this, delegateType, attaches ); - } - } else { - - // Return false to indicate standard teardown should be applied - return false; - } - }, - - // Suppress native focus or blur if we're currently inside - // a leveraged native-event stack - _default: function( event ) { - return dataPriv.get( event.target, type ); - }, - - delegateType: delegateType - }; - - // Support: Firefox <=44 - // Firefox doesn't have focus(in | out) events - // Related ticket - https://bugzilla.mozilla.org/show_bug.cgi?id=687787 - // - // Support: Chrome <=48 - 49, Safari <=9.0 - 9.1 - // focus(in | out) events fire after focus & blur events, - // which is spec violation - http://www.w3.org/TR/DOM-Level-3-Events/#events-focusevent-event-order - // Related ticket - https://bugs.chromium.org/p/chromium/issues/detail?id=449857 - // - // Support: IE 9 - 11+ - // To preserve relative focusin/focus & focusout/blur event order guaranteed on the 3.x branch, - // attach a single handler for both events in IE. - jQuery.event.special[ delegateType ] = { - setup: function() { - - // Handle: regular nodes (via `this.ownerDocument`), window - // (via `this.document`) & document (via `this`). - var doc = this.ownerDocument || this.document || this, - dataHolder = document.documentMode ? this : doc, - attaches = dataPriv.get( dataHolder, delegateType ); - - // Support: IE 9 - 11+ - // We use the same native handler for focusin & focus (and focusout & blur) - // so we need to coordinate setup & teardown parts between those events. - // Use `delegateType` as the key as `type` is already used by `leverageNative`. - if ( !attaches ) { - if ( document.documentMode ) { - this.addEventListener( delegateType, focusMappedHandler ); - } else { - doc.addEventListener( type, focusMappedHandler, true ); - } - } - dataPriv.set( dataHolder, delegateType, ( attaches || 0 ) + 1 ); - }, - teardown: function() { - var doc = this.ownerDocument || this.document || this, - dataHolder = document.documentMode ? 
this : doc, - attaches = dataPriv.get( dataHolder, delegateType ) - 1; - - if ( !attaches ) { - if ( document.documentMode ) { - this.removeEventListener( delegateType, focusMappedHandler ); - } else { - doc.removeEventListener( type, focusMappedHandler, true ); - } - dataPriv.remove( dataHolder, delegateType ); - } else { - dataPriv.set( dataHolder, delegateType, attaches ); - } - } - }; -} ); - -// Create mouseenter/leave events using mouseover/out and event-time checks -// so that event delegation works in jQuery. -// Do the same for pointerenter/pointerleave and pointerover/pointerout -// -// Support: Safari 7 only -// Safari sends mouseenter too often; see: -// https://bugs.chromium.org/p/chromium/issues/detail?id=470258 -// for the description of the bug (it existed in older Chrome versions as well). -jQuery.each( { - mouseenter: "mouseover", - mouseleave: "mouseout", - pointerenter: "pointerover", - pointerleave: "pointerout" -}, function( orig, fix ) { - jQuery.event.special[ orig ] = { - delegateType: fix, - bindType: fix, - - handle: function( event ) { - var ret, - target = this, - related = event.relatedTarget, - handleObj = event.handleObj; - - // For mouseenter/leave call the handler if related is outside the target. - // NB: No relatedTarget if the mouse left/entered the browser window - if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) { - event.type = handleObj.origType; - ret = handleObj.handler.apply( this, arguments ); - event.type = fix; - } - return ret; - } - }; -} ); - -jQuery.fn.extend( { - - on: function( types, selector, data, fn ) { - return on( this, types, selector, data, fn ); - }, - one: function( types, selector, data, fn ) { - return on( this, types, selector, data, fn, 1 ); - }, - off: function( types, selector, fn ) { - var handleObj, type; - if ( types && types.preventDefault && types.handleObj ) { - - // ( event ) dispatched jQuery.Event - handleObj = types.handleObj; - jQuery( types.delegateTarget ).off( - handleObj.namespace ? - handleObj.origType + "." + handleObj.namespace : - handleObj.origType, - handleObj.selector, - handleObj.handler - ); - return this; - } - if ( typeof types === "object" ) { - - // ( types-object [, selector] ) - for ( type in types ) { - this.off( type, selector, types[ type ] ); - } - return this; - } - if ( selector === false || typeof selector === "function" ) { - - // ( types [, fn] ) - fn = selector; - selector = undefined; - } - if ( fn === false ) { - fn = returnFalse; - } - return this.each( function() { - jQuery.event.remove( this, types, fn, selector ); - } ); - } -} ); - - -var - - // Support: IE <=10 - 11, Edge 12 - 13 only - // In IE/Edge using regex groups here causes severe slowdowns. - // See https://connect.microsoft.com/IE/feedback/details/1736512/ - rnoInnerhtml = /<script|<style|<link/i, - - // checked="checked" or checked - rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i, - - rcleanScript = /^\s*<!\[CDATA\[|\]\]>\s*$/g; - -// Prefer a tbody over its parent table for containing new rows -function manipulationTarget( elem, content ) { - if ( nodeName( elem, "table" ) && - nodeName( content.nodeType !== 11 ? 
content : content.firstChild, "tr" ) ) { - - return jQuery( elem ).children( "tbody" )[ 0 ] || elem; - } - - return elem; -} - -// Replace/restore the type attribute of script elements for safe DOM manipulation -function disableScript( elem ) { - elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type; - return elem; -} -function restoreScript( elem ) { - if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) { - elem.type = elem.type.slice( 5 ); - } else { - elem.removeAttribute( "type" ); - } - - return elem; -} - -function cloneCopyEvent( src, dest ) { - var i, l, type, pdataOld, udataOld, udataCur, events; - - if ( dest.nodeType !== 1 ) { - return; - } - - // 1. Copy private data: events, handlers, etc. - if ( dataPriv.hasData( src ) ) { - pdataOld = dataPriv.get( src ); - events = pdataOld.events; - - if ( events ) { - dataPriv.remove( dest, "handle events" ); - - for ( type in events ) { - for ( i = 0, l = events[ type ].length; i < l; i++ ) { - jQuery.event.add( dest, type, events[ type ][ i ] ); - } - } - } - } - - // 2. Copy user data - if ( dataUser.hasData( src ) ) { - udataOld = dataUser.access( src ); - udataCur = jQuery.extend( {}, udataOld ); - - dataUser.set( dest, udataCur ); - } -} - -// Fix IE bugs, see support tests -function fixInput( src, dest ) { - var nodeName = dest.nodeName.toLowerCase(); - - // Fails to persist the checked state of a cloned checkbox or radio button. - if ( nodeName === "input" && rcheckableType.test( src.type ) ) { - dest.checked = src.checked; - - // Fails to return the selected option to the default selected state when cloning options - } else if ( nodeName === "input" || nodeName === "textarea" ) { - dest.defaultValue = src.defaultValue; - } -} - -function domManip( collection, args, callback, ignored ) { - - // Flatten any nested arrays - args = flat( args ); - - var fragment, first, scripts, hasScripts, node, doc, - i = 0, - l = collection.length, - iNoClone = l - 1, - value = args[ 0 ], - valueIsFunction = isFunction( value ); - - // We can't cloneNode fragments that contain checked, in WebKit - if ( valueIsFunction || - ( l > 1 && typeof value === "string" && - !support.checkClone && rchecked.test( value ) ) ) { - return collection.each( function( index ) { - var self = collection.eq( index ); - if ( valueIsFunction ) { - args[ 0 ] = value.call( this, index, self.html() ); - } - domManip( self, args, callback, ignored ); - } ); - } - - if ( l ) { - fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored ); - first = fragment.firstChild; - - if ( fragment.childNodes.length === 1 ) { - fragment = first; - } - - // Require either new content or an interest in ignored elements to invoke the callback - if ( first || ignored ) { - scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); - hasScripts = scripts.length; - - // Use the original fragment for the last item - // instead of the first because it can end up - // being emptied incorrectly in certain situations (trac-8070). 
- for ( ; i < l; i++ ) { - node = fragment; - - if ( i !== iNoClone ) { - node = jQuery.clone( node, true, true ); - - // Keep references to cloned scripts for later restoration - if ( hasScripts ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( scripts, getAll( node, "script" ) ); - } - } - - callback.call( collection[ i ], node, i ); - } - - if ( hasScripts ) { - doc = scripts[ scripts.length - 1 ].ownerDocument; - - // Reenable scripts - jQuery.map( scripts, restoreScript ); - - // Evaluate executable scripts on first document insertion - for ( i = 0; i < hasScripts; i++ ) { - node = scripts[ i ]; - if ( rscriptType.test( node.type || "" ) && - !dataPriv.access( node, "globalEval" ) && - jQuery.contains( doc, node ) ) { - - if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) { - - // Optional AJAX dependency, but won't run scripts if not present - if ( jQuery._evalUrl && !node.noModule ) { - jQuery._evalUrl( node.src, { - nonce: node.nonce || node.getAttribute( "nonce" ) - }, doc ); - } - } else { - - // Unwrap a CDATA section containing script contents. This shouldn't be - // needed as in XML documents they're already not visible when - // inspecting element contents and in HTML documents they have no - // meaning but we're preserving that logic for backwards compatibility. - // This will be removed completely in 4.0. See gh-4904. - DOMEval( node.textContent.replace( rcleanScript, "" ), node, doc ); - } - } - } - } - } - } - - return collection; -} - -function remove( elem, selector, keepData ) { - var node, - nodes = selector ? jQuery.filter( selector, elem ) : elem, - i = 0; - - for ( ; ( node = nodes[ i ] ) != null; i++ ) { - if ( !keepData && node.nodeType === 1 ) { - jQuery.cleanData( getAll( node ) ); - } - - if ( node.parentNode ) { - if ( keepData && isAttached( node ) ) { - setGlobalEval( getAll( node, "script" ) ); - } - node.parentNode.removeChild( node ); - } - } - - return elem; -} - -jQuery.extend( { - htmlPrefilter: function( html ) { - return html; - }, - - clone: function( elem, dataAndEvents, deepDataAndEvents ) { - var i, l, srcElements, destElements, - clone = elem.cloneNode( true ), - inPage = isAttached( elem ); - - // Fix IE cloning issues - if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) && - !jQuery.isXMLDoc( elem ) ) { - - // We eschew jQuery#find here for performance reasons: - // https://jsperf.com/getall-vs-sizzle/2 - destElements = getAll( clone ); - srcElements = getAll( elem ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - fixInput( srcElements[ i ], destElements[ i ] ); - } - } - - // Copy the events from the original to the clone - if ( dataAndEvents ) { - if ( deepDataAndEvents ) { - srcElements = srcElements || getAll( elem ); - destElements = destElements || getAll( clone ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - cloneCopyEvent( srcElements[ i ], destElements[ i ] ); - } - } else { - cloneCopyEvent( elem, clone ); - } - } - - // Preserve script evaluation history - destElements = getAll( clone, "script" ); - if ( destElements.length > 0 ) { - setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); - } - - // Return the cloned set - return clone; - }, - - cleanData: function( elems ) { - var data, elem, type, - special = jQuery.event.special, - i = 0; - - for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) { - if ( acceptData( elem ) ) { - if ( ( data = elem[ dataPriv.expando ] ) ) { - 
if ( data.events ) { - for ( type in data.events ) { - if ( special[ type ] ) { - jQuery.event.remove( elem, type ); - - // This is a shortcut to avoid jQuery.event.remove's overhead - } else { - jQuery.removeEvent( elem, type, data.handle ); - } - } - } - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove - elem[ dataPriv.expando ] = undefined; - } - if ( elem[ dataUser.expando ] ) { - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove - elem[ dataUser.expando ] = undefined; - } - } - } - } -} ); - -jQuery.fn.extend( { - detach: function( selector ) { - return remove( this, selector, true ); - }, - - remove: function( selector ) { - return remove( this, selector ); - }, - - text: function( value ) { - return access( this, function( value ) { - return value === undefined ? - jQuery.text( this ) : - this.empty().each( function() { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - this.textContent = value; - } - } ); - }, null, value, arguments.length ); - }, - - append: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.appendChild( elem ); - } - } ); - }, - - prepend: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.insertBefore( elem, target.firstChild ); - } - } ); - }, - - before: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this ); - } - } ); - }, - - after: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this.nextSibling ); - } - } ); - }, - - empty: function() { - var elem, - i = 0; - - for ( ; ( elem = this[ i ] ) != null; i++ ) { - if ( elem.nodeType === 1 ) { - - // Prevent memory leaks - jQuery.cleanData( getAll( elem, false ) ); - - // Remove any remaining nodes - elem.textContent = ""; - } - } - - return this; - }, - - clone: function( dataAndEvents, deepDataAndEvents ) { - dataAndEvents = dataAndEvents == null ? false : dataAndEvents; - deepDataAndEvents = deepDataAndEvents == null ? 
dataAndEvents : deepDataAndEvents; - - return this.map( function() { - return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); - } ); - }, - - html: function( value ) { - return access( this, function( value ) { - var elem = this[ 0 ] || {}, - i = 0, - l = this.length; - - if ( value === undefined && elem.nodeType === 1 ) { - return elem.innerHTML; - } - - // See if we can take a shortcut and just use innerHTML - if ( typeof value === "string" && !rnoInnerhtml.test( value ) && - !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { - - value = jQuery.htmlPrefilter( value ); - - try { - for ( ; i < l; i++ ) { - elem = this[ i ] || {}; - - // Remove element nodes and prevent memory leaks - if ( elem.nodeType === 1 ) { - jQuery.cleanData( getAll( elem, false ) ); - elem.innerHTML = value; - } - } - - elem = 0; - - // If using innerHTML throws an exception, use the fallback method - } catch ( e ) {} - } - - if ( elem ) { - this.empty().append( value ); - } - }, null, value, arguments.length ); - }, - - replaceWith: function() { - var ignored = []; - - // Make the changes, replacing each non-ignored context element with the new content - return domManip( this, arguments, function( elem ) { - var parent = this.parentNode; - - if ( jQuery.inArray( this, ignored ) < 0 ) { - jQuery.cleanData( getAll( this ) ); - if ( parent ) { - parent.replaceChild( elem, this ); - } - } - - // Force callback invocation - }, ignored ); - } -} ); - -jQuery.each( { - appendTo: "append", - prependTo: "prepend", - insertBefore: "before", - insertAfter: "after", - replaceAll: "replaceWith" -}, function( name, original ) { - jQuery.fn[ name ] = function( selector ) { - var elems, - ret = [], - insert = jQuery( selector ), - last = insert.length - 1, - i = 0; - - for ( ; i <= last; i++ ) { - elems = i === last ? this : this.clone( true ); - jQuery( insert[ i ] )[ original ]( elems ); - - // Support: Android <=4.0 only, PhantomJS 1 only - // .get() because push.apply(_, arraylike) throws on ancient WebKit - push.apply( ret, elems.get() ); - } - - return this.pushStack( ret ); - }; -} ); -var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); - -var rcustomProp = /^--/; - - -var getStyles = function( elem ) { - - // Support: IE <=11 only, Firefox <=30 (trac-15098, trac-14150) - // IE throws on elements created in popups - // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" - var view = elem.ownerDocument.defaultView; - - if ( !view || !view.opener ) { - view = window; - } - - return view.getComputedStyle( elem ); - }; - -var swap = function( elem, options, callback ) { - var ret, name, - old = {}; - - // Remember the old values, and insert the new ones - for ( name in options ) { - old[ name ] = elem.style[ name ]; - elem.style[ name ] = options[ name ]; - } - - ret = callback.call( elem ); - - // Revert the old values - for ( name in options ) { - elem.style[ name ] = old[ name ]; - } - - return ret; -}; - - -var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); - - - -( function() { - - // Executing both pixelPosition & boxSizingReliable tests require only one layout - // so they're executed at the same time to save the second computation. 
- function computeStyleTests() { - - // This is a singleton, we need to execute it only once - if ( !div ) { - return; - } - - container.style.cssText = "position:absolute;left:-11111px;width:60px;" + - "margin-top:1px;padding:0;border:0"; - div.style.cssText = - "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + - "margin:auto;border:1px;padding:1px;" + - "width:60%;top:1%"; - documentElement.appendChild( container ).appendChild( div ); - - var divStyle = window.getComputedStyle( div ); - pixelPositionVal = divStyle.top !== "1%"; - - // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 - reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; - - // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 - // Some styles come back with percentage values, even though they shouldn't - div.style.right = "60%"; - pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; - - // Support: IE 9 - 11 only - // Detect misreporting of content dimensions for box-sizing:border-box elements - boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; - - // Support: IE 9 only - // Detect overflow:scroll screwiness (gh-3699) - // Support: Chrome <=64 - // Don't get tricked when zoom affects offsetWidth (gh-4029) - div.style.position = "absolute"; - scrollboxSizeVal = roundPixelMeasures( div.offsetWidth / 3 ) === 12; - - documentElement.removeChild( container ); - - // Nullify the div so it wouldn't be stored in the memory and - // it will also be a sign that checks already performed - div = null; - } - - function roundPixelMeasures( measure ) { - return Math.round( parseFloat( measure ) ); - } - - var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, - reliableTrDimensionsVal, reliableMarginLeftVal, - container = document.createElement( "div" ), - div = document.createElement( "div" ); - - // Finish early in limited (non-browser) environments - if ( !div.style ) { - return; - } - - // Support: IE <=9 - 11 only - // Style of cloned element affects source element cloned (trac-8908) - div.style.backgroundClip = "content-box"; - div.cloneNode( true ).style.backgroundClip = ""; - support.clearCloneStyle = div.style.backgroundClip === "content-box"; - - jQuery.extend( support, { - boxSizingReliable: function() { - computeStyleTests(); - return boxSizingReliableVal; - }, - pixelBoxStyles: function() { - computeStyleTests(); - return pixelBoxStylesVal; - }, - pixelPosition: function() { - computeStyleTests(); - return pixelPositionVal; - }, - reliableMarginLeft: function() { - computeStyleTests(); - return reliableMarginLeftVal; - }, - scrollboxSize: function() { - computeStyleTests(); - return scrollboxSizeVal; - }, - - // Support: IE 9 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Behavior in IE 9 is more subtle than in newer versions & it passes - // some versions of this test; make sure not to make it pass there! - // - // Support: Firefox 70+ - // Only Firefox includes border widths - // in computed dimensions. 
(gh-4529) - reliableTrDimensions: function() { - var table, tr, trChild, trStyle; - if ( reliableTrDimensionsVal == null ) { - table = document.createElement( "table" ); - tr = document.createElement( "tr" ); - trChild = document.createElement( "div" ); - - table.style.cssText = "position:absolute;left:-11111px;border-collapse:separate"; - tr.style.cssText = "border:1px solid"; - - // Support: Chrome 86+ - // Height set through cssText does not get applied. - // Computed height then comes back as 0. - tr.style.height = "1px"; - trChild.style.height = "9px"; - - // Support: Android 8 Chrome 86+ - // In our bodyBackground.html iframe, - // display for all div elements is set to "inline", - // which causes a problem only in Android 8 Chrome 86. - // Ensuring the div is display: block - // gets around this issue. - trChild.style.display = "block"; - - documentElement - .appendChild( table ) - .appendChild( tr ) - .appendChild( trChild ); - - trStyle = window.getComputedStyle( tr ); - reliableTrDimensionsVal = ( parseInt( trStyle.height, 10 ) + - parseInt( trStyle.borderTopWidth, 10 ) + - parseInt( trStyle.borderBottomWidth, 10 ) ) === tr.offsetHeight; - - documentElement.removeChild( table ); - } - return reliableTrDimensionsVal; - } - } ); -} )(); - - -function curCSS( elem, name, computed ) { - var width, minWidth, maxWidth, ret, - isCustomProp = rcustomProp.test( name ), - - // Support: Firefox 51+ - // Retrieving style before computed somehow - // fixes an issue with getting wrong values - // on detached elements - style = elem.style; - - computed = computed || getStyles( elem ); - - // getPropertyValue is needed for: - // .css('filter') (IE 9 only, trac-12537) - // .css('--customProperty) (gh-3144) - if ( computed ) { - - // Support: IE <=9 - 11+ - // IE only supports `"float"` in `getPropertyValue`; in computed styles - // it's only available as `"cssFloat"`. We no longer modify properties - // sent to `.css()` apart from camelCasing, so we need to check both. - // Normally, this would create difference in behavior: if - // `getPropertyValue` returns an empty string, the value returned - // by `.css()` would be `undefined`. This is usually the case for - // disconnected elements. However, in IE even disconnected elements - // with no styles return `"none"` for `getPropertyValue( "float" )` - ret = computed.getPropertyValue( name ) || computed[ name ]; - - if ( isCustomProp && ret ) { - - // Support: Firefox 105+, Chrome <=105+ - // Spec requires trimming whitespace for custom properties (gh-4926). - // Firefox only trims leading whitespace. Chrome just collapses - // both leading & trailing whitespace to a single space. - // - // Fall back to `undefined` if empty string returned. - // This collapses a missing definition with property defined - // and set to an empty string but there's no standard API - // allowing us to differentiate them without a performance penalty - // and returning `undefined` aligns with older jQuery. 
- // - // rtrimCSS treats U+000D CARRIAGE RETURN and U+000C FORM FEED - // as whitespace while CSS does not, but this is not a problem - // because CSS preprocessing replaces them with U+000A LINE FEED - // (which *is* CSS whitespace) - // https://www.w3.org/TR/css-syntax-3/#input-preprocessing - ret = ret.replace( rtrimCSS, "$1" ) || undefined; - } - - if ( ret === "" && !isAttached( elem ) ) { - ret = jQuery.style( elem, name ); - } - - // A tribute to the "awesome hack by Dean Edwards" - // Android Browser returns percentage for some values, - // but width seems to be reliably pixels. - // This is against the CSSOM draft spec: - // https://drafts.csswg.org/cssom/#resolved-values - if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { - - // Remember the original values - width = style.width; - minWidth = style.minWidth; - maxWidth = style.maxWidth; - - // Put in the new values to get a computed value out - style.minWidth = style.maxWidth = style.width = ret; - ret = computed.width; - - // Revert the changed values - style.width = width; - style.minWidth = minWidth; - style.maxWidth = maxWidth; - } - } - - return ret !== undefined ? - - // Support: IE <=9 - 11 only - // IE returns zIndex value as an integer. - ret + "" : - ret; -} - - -function addGetHookIf( conditionFn, hookFn ) { - - // Define the hook, we'll check on the first run if it's really needed. - return { - get: function() { - if ( conditionFn() ) { - - // Hook not needed (or it's not possible to use it due - // to missing dependency), remove it. - delete this.get; - return; - } - - // Hook needed; redefine it so that the support test is not executed again. - return ( this.get = hookFn ).apply( this, arguments ); - } - }; -} - - -var cssPrefixes = [ "Webkit", "Moz", "ms" ], - emptyStyle = document.createElement( "div" ).style, - vendorProps = {}; - -// Return a vendor-prefixed property or undefined -function vendorPropName( name ) { - - // Check for vendor prefixed names - var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), - i = cssPrefixes.length; - - while ( i-- ) { - name = cssPrefixes[ i ] + capName; - if ( name in emptyStyle ) { - return name; - } - } -} - -// Return a potentially-mapped jQuery.cssProps or vendor prefixed property -function finalPropName( name ) { - var final = jQuery.cssProps[ name ] || vendorProps[ name ]; - - if ( final ) { - return final; - } - if ( name in emptyStyle ) { - return name; - } - return vendorProps[ name ] = vendorPropName( name ) || name; -} - - -var - - // Swappable if display is none or starts with table - // except "table", "table-cell", or "table-caption" - // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display - rdisplayswap = /^(none|table(?!-c[ea]).+)/, - cssShow = { position: "absolute", visibility: "hidden", display: "block" }, - cssNormalTransform = { - letterSpacing: "0", - fontWeight: "400" - }; - -function setPositiveNumber( _elem, value, subtract ) { - - // Any relative (+/-) values have already been - // normalized at this point - var matches = rcssNum.exec( value ); - return matches ? - - // Guard against undefined "subtract", e.g., when used as in cssHooks - Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : - value; -} - -function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { - var i = dimension === "width" ? 1 : 0, - extra = 0, - delta = 0, - marginDelta = 0; - - // Adjustment may not be necessary - if ( box === ( isBorderBox ? 
"border" : "content" ) ) { - return 0; - } - - for ( ; i < 4; i += 2 ) { - - // Both box models exclude margin - // Count margin delta separately to only add it after scroll gutter adjustment. - // This is needed to make negative margins work with `outerHeight( true )` (gh-3982). - if ( box === "margin" ) { - marginDelta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); - } - - // If we get here with a content-box, we're seeking "padding" or "border" or "margin" - if ( !isBorderBox ) { - - // Add padding - delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - - // For "border" or "margin", add border - if ( box !== "padding" ) { - delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - - // But still keep track of it otherwise - } else { - extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - - // If we get here with a border-box (content + padding + border), we're seeking "content" or - // "padding" or "margin" - } else { - - // For "content", subtract padding - if ( box === "content" ) { - delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - } - - // For "content" or "padding", subtract border - if ( box !== "margin" ) { - delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - } - } - - // Account for positive content-box scroll gutter when requested by providing computedVal - if ( !isBorderBox && computedVal >= 0 ) { - - // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border - // Assuming integer scroll gutter, subtract the rest and round down - delta += Math.max( 0, Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - computedVal - - delta - - extra - - 0.5 - - // If offsetWidth/offsetHeight is unknown, then we can't determine content-box scroll gutter - // Use an explicit zero to avoid NaN (gh-3964) - ) ) || 0; - } - - return delta + marginDelta; -} - -function getWidthOrHeight( elem, dimension, extra ) { - - // Start with computed style - var styles = getStyles( elem ), - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-4322). - // Fake content-box until we know it's needed to know the true value. - boxSizingNeeded = !support.boxSizingReliable() || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - valueIsBorderBox = isBorderBox, - - val = curCSS( elem, dimension, styles ), - offsetProp = "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ); - - // Support: Firefox <=54 - // Return a confounding non-pixel value or feign ignorance, as appropriate. - if ( rnumnonpx.test( val ) ) { - if ( !extra ) { - return val; - } - val = "auto"; - } - - - // Support: IE 9 - 11 only - // Use offsetWidth/offsetHeight for when box sizing is unreliable. - // In those cases, the computed value can be trusted to be border-box. - if ( ( !support.boxSizingReliable() && isBorderBox || - - // Support: IE 10 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Interestingly, in some cases IE 9 doesn't suffer from this issue. 
- !support.reliableTrDimensions() && nodeName( elem, "tr" ) || - - // Fall back to offsetWidth/offsetHeight when value is "auto" - // This happens for inline elements with no explicit setting (gh-3571) - val === "auto" || - - // Support: Android <=4.1 - 4.3 only - // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) - !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) && - - // Make sure the element is visible & connected - elem.getClientRects().length ) { - - isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box"; - - // Where available, offsetWidth/offsetHeight approximate border box dimensions. - // Where not available (e.g., SVG), assume unreliable box-sizing and interpret the - // retrieved value as a content box dimension. - valueIsBorderBox = offsetProp in elem; - if ( valueIsBorderBox ) { - val = elem[ offsetProp ]; - } - } - - // Normalize "" and auto - val = parseFloat( val ) || 0; - - // Adjust for the element's box model - return ( val + - boxModelAdjustment( - elem, - dimension, - extra || ( isBorderBox ? "border" : "content" ), - valueIsBorderBox, - styles, - - // Provide the current computed size to request scroll gutter calculation (gh-3589) - val - ) - ) + "px"; -} - -jQuery.extend( { - - // Add in style property hooks for overriding the default - // behavior of getting and setting a style property - cssHooks: { - opacity: { - get: function( elem, computed ) { - if ( computed ) { - - // We should always get a number back from opacity - var ret = curCSS( elem, "opacity" ); - return ret === "" ? "1" : ret; - } - } - } - }, - - // Don't automatically add "px" to these possibly-unitless properties - cssNumber: { - animationIterationCount: true, - aspectRatio: true, - borderImageSlice: true, - columnCount: true, - flexGrow: true, - flexShrink: true, - fontWeight: true, - gridArea: true, - gridColumn: true, - gridColumnEnd: true, - gridColumnStart: true, - gridRow: true, - gridRowEnd: true, - gridRowStart: true, - lineHeight: true, - opacity: true, - order: true, - orphans: true, - scale: true, - widows: true, - zIndex: true, - zoom: true, - - // SVG-related - fillOpacity: true, - floodOpacity: true, - stopOpacity: true, - strokeMiterlimit: true, - strokeOpacity: true - }, - - // Add in properties whose names you wish to fix before - // setting or getting the value - cssProps: {}, - - // Get and set the style property on a DOM Node - style: function( elem, name, value, extra ) { - - // Don't set styles on text and comment nodes - if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { - return; - } - - // Make sure that we're working with the right name - var ret, type, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ), - style = elem.style; - - // Make sure that we're working with the right name. We don't - // want to query the value if it is a CSS custom property - // since they are user-defined. 
- if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Gets hook for the prefixed version, then unprefixed version - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // Check if we're setting a value - if ( value !== undefined ) { - type = typeof value; - - // Convert "+=" or "-=" to relative numbers (trac-7345) - if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { - value = adjustCSS( elem, name, ret ); - - // Fixes bug trac-9237 - type = "number"; - } - - // Make sure that null and NaN values aren't set (trac-7116) - if ( value == null || value !== value ) { - return; - } - - // If a number was passed in, add the unit (except for certain CSS properties) - // The isCustomProp check can be removed in jQuery 4.0 when we only auto-append - // "px" to a few hardcoded values. - if ( type === "number" && !isCustomProp ) { - value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? "" : "px" ); - } - - // background-* props affect original clone's values - if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { - style[ name ] = "inherit"; - } - - // If a hook was provided, use that value, otherwise just set the specified value - if ( !hooks || !( "set" in hooks ) || - ( value = hooks.set( elem, value, extra ) ) !== undefined ) { - - if ( isCustomProp ) { - style.setProperty( name, value ); - } else { - style[ name ] = value; - } - } - - } else { - - // If a hook was provided get the non-computed value from there - if ( hooks && "get" in hooks && - ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { - - return ret; - } - - // Otherwise just get the value from the style object - return style[ name ]; - } - }, - - css: function( elem, name, extra, styles ) { - var val, num, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ); - - // Make sure that we're working with the right name. We don't - // want to modify the value if it is a CSS custom property - // since they are user-defined. - if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Try prefixed name followed by the unprefixed name - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // If a hook was provided get the computed value from there - if ( hooks && "get" in hooks ) { - val = hooks.get( elem, true, extra ); - } - - // Otherwise, if a way to get the computed value exists, use that - if ( val === undefined ) { - val = curCSS( elem, name, styles ); - } - - // Convert "normal" to computed value - if ( val === "normal" && name in cssNormalTransform ) { - val = cssNormalTransform[ name ]; - } - - // Make numeric if forced or a qualifier was provided and val looks numeric - if ( extra === "" || extra ) { - num = parseFloat( val ); - return extra === true || isFinite( num ) ? num || 0 : val; - } - - return val; - } -} ); - -jQuery.each( [ "height", "width" ], function( _i, dimension ) { - jQuery.cssHooks[ dimension ] = { - get: function( elem, computed, extra ) { - if ( computed ) { - - // Certain elements can have dimension info if we invisibly show them - // but it must have a current display style that would benefit - return rdisplayswap.test( jQuery.css( elem, "display" ) ) && - - // Support: Safari 8+ - // Table columns in Safari have non-zero offsetWidth & zero - // getBoundingClientRect().width unless display is changed. - // Support: IE <=11 only - // Running getBoundingClientRect on a disconnected node - // in IE throws an error. 
- ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? - swap( elem, cssShow, function() { - return getWidthOrHeight( elem, dimension, extra ); - } ) : - getWidthOrHeight( elem, dimension, extra ); - } - }, - - set: function( elem, value, extra ) { - var matches, - styles = getStyles( elem ), - - // Only read styles.position if the test has a chance to fail - // to avoid forcing a reflow. - scrollboxSizeBuggy = !support.scrollboxSize() && - styles.position === "absolute", - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-3991) - boxSizingNeeded = scrollboxSizeBuggy || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - subtract = extra ? - boxModelAdjustment( - elem, - dimension, - extra, - isBorderBox, - styles - ) : - 0; - - // Account for unreliable border-box dimensions by comparing offset* to computed and - // faking a content-box to get border and padding (gh-3699) - if ( isBorderBox && scrollboxSizeBuggy ) { - subtract -= Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - parseFloat( styles[ dimension ] ) - - boxModelAdjustment( elem, dimension, "border", false, styles ) - - 0.5 - ); - } - - // Convert to pixels if value adjustment is needed - if ( subtract && ( matches = rcssNum.exec( value ) ) && - ( matches[ 3 ] || "px" ) !== "px" ) { - - elem.style[ dimension ] = value; - value = jQuery.css( elem, dimension ); - } - - return setPositiveNumber( elem, value, subtract ); - } - }; -} ); - -jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, - function( elem, computed ) { - if ( computed ) { - return ( parseFloat( curCSS( elem, "marginLeft" ) ) || - elem.getBoundingClientRect().left - - swap( elem, { marginLeft: 0 }, function() { - return elem.getBoundingClientRect().left; - } ) - ) + "px"; - } - } -); - -// These hooks are used by animate to expand properties -jQuery.each( { - margin: "", - padding: "", - border: "Width" -}, function( prefix, suffix ) { - jQuery.cssHooks[ prefix + suffix ] = { - expand: function( value ) { - var i = 0, - expanded = {}, - - // Assumes a single number if not a string - parts = typeof value === "string" ? value.split( " " ) : [ value ]; - - for ( ; i < 4; i++ ) { - expanded[ prefix + cssExpand[ i ] + suffix ] = - parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; - } - - return expanded; - } - }; - - if ( prefix !== "margin" ) { - jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; - } -} ); - -jQuery.fn.extend( { - css: function( name, value ) { - return access( this, function( elem, name, value ) { - var styles, len, - map = {}, - i = 0; - - if ( Array.isArray( name ) ) { - styles = getStyles( elem ); - len = name.length; - - for ( ; i < len; i++ ) { - map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); - } - - return map; - } - - return value !== undefined ? - jQuery.style( elem, name, value ) : - jQuery.css( elem, name ); - }, name, value, arguments.length > 1 ); - } -} ); - - -function Tween( elem, options, prop, end, easing ) { - return new Tween.prototype.init( elem, options, prop, end, easing ); -} -jQuery.Tween = Tween; - -Tween.prototype = { - constructor: Tween, - init: function( elem, options, prop, end, easing, unit ) { - this.elem = elem; - this.prop = prop; - this.easing = easing || jQuery.easing._default; - this.options = options; - this.start = this.now = this.cur(); - this.end = end; - this.unit = unit || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ); - }, - cur: function() { - var hooks = Tween.propHooks[ this.prop ]; - - return hooks && hooks.get ? - hooks.get( this ) : - Tween.propHooks._default.get( this ); - }, - run: function( percent ) { - var eased, - hooks = Tween.propHooks[ this.prop ]; - - if ( this.options.duration ) { - this.pos = eased = jQuery.easing[ this.easing ]( - percent, this.options.duration * percent, 0, 1, this.options.duration - ); - } else { - this.pos = eased = percent; - } - this.now = ( this.end - this.start ) * eased + this.start; - - if ( this.options.step ) { - this.options.step.call( this.elem, this.now, this ); - } - - if ( hooks && hooks.set ) { - hooks.set( this ); - } else { - Tween.propHooks._default.set( this ); - } - return this; - } -}; - -Tween.prototype.init.prototype = Tween.prototype; - -Tween.propHooks = { - _default: { - get: function( tween ) { - var result; - - // Use a property on the element directly when it is not a DOM element, - // or when there is no matching style property that exists. - if ( tween.elem.nodeType !== 1 || - tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { - return tween.elem[ tween.prop ]; - } - - // Passing an empty string as a 3rd parameter to .css will automatically - // attempt a parseFloat and fallback to a string if the parse fails. - // Simple values such as "10px" are parsed to Float; - // complex values such as "rotate(1rad)" are returned as-is. - result = jQuery.css( tween.elem, tween.prop, "" ); - - // Empty strings, null, undefined and "auto" are converted to 0. - return !result || result === "auto" ? 0 : result; - }, - set: function( tween ) { - - // Use step hook for back compat. - // Use cssHook if its there. - // Use .style if available and use plain properties where available. - if ( jQuery.fx.step[ tween.prop ] ) { - jQuery.fx.step[ tween.prop ]( tween ); - } else if ( tween.elem.nodeType === 1 && ( - jQuery.cssHooks[ tween.prop ] || - tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { - jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); - } else { - tween.elem[ tween.prop ] = tween.now; - } - } - } -}; - -// Support: IE <=9 only -// Panic based approach to setting things on disconnected nodes -Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { - set: function( tween ) { - if ( tween.elem.nodeType && tween.elem.parentNode ) { - tween.elem[ tween.prop ] = tween.now; - } - } -}; - -jQuery.easing = { - linear: function( p ) { - return p; - }, - swing: function( p ) { - return 0.5 - Math.cos( p * Math.PI ) / 2; - }, - _default: "swing" -}; - -jQuery.fx = Tween.prototype.init; - -// Back compat <1.8 extension point -jQuery.fx.step = {}; - - - - -var - fxNow, inProgress, - rfxtypes = /^(?:toggle|show|hide)$/, - rrun = /queueHooks$/; - -function schedule() { - if ( inProgress ) { - if ( document.hidden === false && window.requestAnimationFrame ) { - window.requestAnimationFrame( schedule ); - } else { - window.setTimeout( schedule, jQuery.fx.interval ); - } - - jQuery.fx.tick(); - } -} - -// Animations created synchronously will run synchronously -function createFxNow() { - window.setTimeout( function() { - fxNow = undefined; - } ); - return ( fxNow = Date.now() ); -} - -// Generate parameters to create a standard animation -function genFx( type, includeWidth ) { - var which, - i = 0, - attrs = { height: type }; - - // If we include width, step value is 1 to do all cssExpand values, - // otherwise step value is 2 to skip over Left and Right - includeWidth = includeWidth ? 
1 : 0; - for ( ; i < 4; i += 2 - includeWidth ) { - which = cssExpand[ i ]; - attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; - } - - if ( includeWidth ) { - attrs.opacity = attrs.width = type; - } - - return attrs; -} - -function createTween( value, prop, animation ) { - var tween, - collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), - index = 0, - length = collection.length; - for ( ; index < length; index++ ) { - if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { - - // We're done with this property - return tween; - } - } -} - -function defaultPrefilter( elem, props, opts ) { - var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, - isBox = "width" in props || "height" in props, - anim = this, - orig = {}, - style = elem.style, - hidden = elem.nodeType && isHiddenWithinTree( elem ), - dataShow = dataPriv.get( elem, "fxshow" ); - - // Queue-skipping animations hijack the fx hooks - if ( !opts.queue ) { - hooks = jQuery._queueHooks( elem, "fx" ); - if ( hooks.unqueued == null ) { - hooks.unqueued = 0; - oldfire = hooks.empty.fire; - hooks.empty.fire = function() { - if ( !hooks.unqueued ) { - oldfire(); - } - }; - } - hooks.unqueued++; - - anim.always( function() { - - // Ensure the complete handler is called before this completes - anim.always( function() { - hooks.unqueued--; - if ( !jQuery.queue( elem, "fx" ).length ) { - hooks.empty.fire(); - } - } ); - } ); - } - - // Detect show/hide animations - for ( prop in props ) { - value = props[ prop ]; - if ( rfxtypes.test( value ) ) { - delete props[ prop ]; - toggle = toggle || value === "toggle"; - if ( value === ( hidden ? "hide" : "show" ) ) { - - // Pretend to be hidden if this is a "show" and - // there is still data from a stopped show/hide - if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { - hidden = true; - - // Ignore all other no-op show/hide data - } else { - continue; - } - } - orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); - } - } - - // Bail out if this is a no-op like .hide().hide() - propTween = !jQuery.isEmptyObject( props ); - if ( !propTween && jQuery.isEmptyObject( orig ) ) { - return; - } - - // Restrict "overflow" and "display" styles during box animations - if ( isBox && elem.nodeType === 1 ) { - - // Support: IE <=9 - 11, Edge 12 - 15 - // Record all 3 overflow attributes because IE does not infer the shorthand - // from identically-valued overflowX and overflowY and Edge just mirrors - // the overflowX value there. 
- opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; - - // Identify a display type, preferring old show/hide data over the CSS cascade - restoreDisplay = dataShow && dataShow.display; - if ( restoreDisplay == null ) { - restoreDisplay = dataPriv.get( elem, "display" ); - } - display = jQuery.css( elem, "display" ); - if ( display === "none" ) { - if ( restoreDisplay ) { - display = restoreDisplay; - } else { - - // Get nonempty value(s) by temporarily forcing visibility - showHide( [ elem ], true ); - restoreDisplay = elem.style.display || restoreDisplay; - display = jQuery.css( elem, "display" ); - showHide( [ elem ] ); - } - } - - // Animate inline elements as inline-block - if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { - if ( jQuery.css( elem, "float" ) === "none" ) { - - // Restore the original display value at the end of pure show/hide animations - if ( !propTween ) { - anim.done( function() { - style.display = restoreDisplay; - } ); - if ( restoreDisplay == null ) { - display = style.display; - restoreDisplay = display === "none" ? "" : display; - } - } - style.display = "inline-block"; - } - } - } - - if ( opts.overflow ) { - style.overflow = "hidden"; - anim.always( function() { - style.overflow = opts.overflow[ 0 ]; - style.overflowX = opts.overflow[ 1 ]; - style.overflowY = opts.overflow[ 2 ]; - } ); - } - - // Implement show/hide animations - propTween = false; - for ( prop in orig ) { - - // General show/hide setup for this element animation - if ( !propTween ) { - if ( dataShow ) { - if ( "hidden" in dataShow ) { - hidden = dataShow.hidden; - } - } else { - dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); - } - - // Store hidden/visible for toggle so `.stop().toggle()` "reverses" - if ( toggle ) { - dataShow.hidden = !hidden; - } - - // Show elements before animating them - if ( hidden ) { - showHide( [ elem ], true ); - } - - /* eslint-disable no-loop-func */ - - anim.done( function() { - - /* eslint-enable no-loop-func */ - - // The final step of a "hide" animation is actually hiding the element - if ( !hidden ) { - showHide( [ elem ] ); - } - dataPriv.remove( elem, "fxshow" ); - for ( prop in orig ) { - jQuery.style( elem, prop, orig[ prop ] ); - } - } ); - } - - // Per-property setup - propTween = createTween( hidden ? dataShow[ prop ] : 0, prop, anim ); - if ( !( prop in dataShow ) ) { - dataShow[ prop ] = propTween.start; - if ( hidden ) { - propTween.end = propTween.start; - propTween.start = 0; - } - } - } -} - -function propFilter( props, specialEasing ) { - var index, name, easing, value, hooks; - - // camelCase, specialEasing and expand cssHook pass - for ( index in props ) { - name = camelCase( index ); - easing = specialEasing[ name ]; - value = props[ index ]; - if ( Array.isArray( value ) ) { - easing = value[ 1 ]; - value = props[ index ] = value[ 0 ]; - } - - if ( index !== name ) { - props[ name ] = value; - delete props[ index ]; - } - - hooks = jQuery.cssHooks[ name ]; - if ( hooks && "expand" in hooks ) { - value = hooks.expand( value ); - delete props[ name ]; - - // Not quite $.extend, this won't overwrite existing keys. 
- // Reusing 'index' because we have the correct "name" - for ( index in value ) { - if ( !( index in props ) ) { - props[ index ] = value[ index ]; - specialEasing[ index ] = easing; - } - } - } else { - specialEasing[ name ] = easing; - } - } -} - -function Animation( elem, properties, options ) { - var result, - stopped, - index = 0, - length = Animation.prefilters.length, - deferred = jQuery.Deferred().always( function() { - - // Don't match elem in the :animated selector - delete tick.elem; - } ), - tick = function() { - if ( stopped ) { - return false; - } - var currentTime = fxNow || createFxNow(), - remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), - - // Support: Android 2.3 only - // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (trac-12497) - temp = remaining / animation.duration || 0, - percent = 1 - temp, - index = 0, - length = animation.tweens.length; - - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( percent ); - } - - deferred.notifyWith( elem, [ animation, percent, remaining ] ); - - // If there's more to do, yield - if ( percent < 1 && length ) { - return remaining; - } - - // If this was an empty animation, synthesize a final progress notification - if ( !length ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - } - - // Resolve the animation and report its conclusion - deferred.resolveWith( elem, [ animation ] ); - return false; - }, - animation = deferred.promise( { - elem: elem, - props: jQuery.extend( {}, properties ), - opts: jQuery.extend( true, { - specialEasing: {}, - easing: jQuery.easing._default - }, options ), - originalProperties: properties, - originalOptions: options, - startTime: fxNow || createFxNow(), - duration: options.duration, - tweens: [], - createTween: function( prop, end ) { - var tween = jQuery.Tween( elem, animation.opts, prop, end, - animation.opts.specialEasing[ prop ] || animation.opts.easing ); - animation.tweens.push( tween ); - return tween; - }, - stop: function( gotoEnd ) { - var index = 0, - - // If we are going to the end, we want to run all the tweens - // otherwise we skip this part - length = gotoEnd ? 
animation.tweens.length : 0; - if ( stopped ) { - return this; - } - stopped = true; - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( 1 ); - } - - // Resolve when we played the last frame; otherwise, reject - if ( gotoEnd ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - deferred.resolveWith( elem, [ animation, gotoEnd ] ); - } else { - deferred.rejectWith( elem, [ animation, gotoEnd ] ); - } - return this; - } - } ), - props = animation.props; - - propFilter( props, animation.opts.specialEasing ); - - for ( ; index < length; index++ ) { - result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); - if ( result ) { - if ( isFunction( result.stop ) ) { - jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = - result.stop.bind( result ); - } - return result; - } - } - - jQuery.map( props, createTween, animation ); - - if ( isFunction( animation.opts.start ) ) { - animation.opts.start.call( elem, animation ); - } - - // Attach callbacks from options - animation - .progress( animation.opts.progress ) - .done( animation.opts.done, animation.opts.complete ) - .fail( animation.opts.fail ) - .always( animation.opts.always ); - - jQuery.fx.timer( - jQuery.extend( tick, { - elem: elem, - anim: animation, - queue: animation.opts.queue - } ) - ); - - return animation; -} - -jQuery.Animation = jQuery.extend( Animation, { - - tweeners: { - "*": [ function( prop, value ) { - var tween = this.createTween( prop, value ); - adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); - return tween; - } ] - }, - - tweener: function( props, callback ) { - if ( isFunction( props ) ) { - callback = props; - props = [ "*" ]; - } else { - props = props.match( rnothtmlwhite ); - } - - var prop, - index = 0, - length = props.length; - - for ( ; index < length; index++ ) { - prop = props[ index ]; - Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; - Animation.tweeners[ prop ].unshift( callback ); - } - }, - - prefilters: [ defaultPrefilter ], - - prefilter: function( callback, prepend ) { - if ( prepend ) { - Animation.prefilters.unshift( callback ); - } else { - Animation.prefilters.push( callback ); - } - } -} ); - -jQuery.speed = function( speed, easing, fn ) { - var opt = speed && typeof speed === "object" ? 
jQuery.extend( {}, speed ) : { - complete: fn || !fn && easing || - isFunction( speed ) && speed, - duration: speed, - easing: fn && easing || easing && !isFunction( easing ) && easing - }; - - // Go to the end state if fx are off - if ( jQuery.fx.off ) { - opt.duration = 0; - - } else { - if ( typeof opt.duration !== "number" ) { - if ( opt.duration in jQuery.fx.speeds ) { - opt.duration = jQuery.fx.speeds[ opt.duration ]; - - } else { - opt.duration = jQuery.fx.speeds._default; - } - } - } - - // Normalize opt.queue - true/undefined/null -> "fx" - if ( opt.queue == null || opt.queue === true ) { - opt.queue = "fx"; - } - - // Queueing - opt.old = opt.complete; - - opt.complete = function() { - if ( isFunction( opt.old ) ) { - opt.old.call( this ); - } - - if ( opt.queue ) { - jQuery.dequeue( this, opt.queue ); - } - }; - - return opt; -}; - -jQuery.fn.extend( { - fadeTo: function( speed, to, easing, callback ) { - - // Show any hidden elements after setting opacity to 0 - return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() - - // Animate to the value specified - .end().animate( { opacity: to }, speed, easing, callback ); - }, - animate: function( prop, speed, easing, callback ) { - var empty = jQuery.isEmptyObject( prop ), - optall = jQuery.speed( speed, easing, callback ), - doAnimation = function() { - - // Operate on a copy of prop so per-property easing won't be lost - var anim = Animation( this, jQuery.extend( {}, prop ), optall ); - - // Empty animations, or finishing resolves immediately - if ( empty || dataPriv.get( this, "finish" ) ) { - anim.stop( true ); - } - }; - - doAnimation.finish = doAnimation; - - return empty || optall.queue === false ? - this.each( doAnimation ) : - this.queue( optall.queue, doAnimation ); - }, - stop: function( type, clearQueue, gotoEnd ) { - var stopQueue = function( hooks ) { - var stop = hooks.stop; - delete hooks.stop; - stop( gotoEnd ); - }; - - if ( typeof type !== "string" ) { - gotoEnd = clearQueue; - clearQueue = type; - type = undefined; - } - if ( clearQueue ) { - this.queue( type || "fx", [] ); - } - - return this.each( function() { - var dequeue = true, - index = type != null && type + "queueHooks", - timers = jQuery.timers, - data = dataPriv.get( this ); - - if ( index ) { - if ( data[ index ] && data[ index ].stop ) { - stopQueue( data[ index ] ); - } - } else { - for ( index in data ) { - if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { - stopQueue( data[ index ] ); - } - } - } - - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && - ( type == null || timers[ index ].queue === type ) ) { - - timers[ index ].anim.stop( gotoEnd ); - dequeue = false; - timers.splice( index, 1 ); - } - } - - // Start the next in the queue if the last step wasn't forced. - // Timers currently will call their complete callbacks, which - // will dequeue but only if they were gotoEnd. - if ( dequeue || !gotoEnd ) { - jQuery.dequeue( this, type ); - } - } ); - }, - finish: function( type ) { - if ( type !== false ) { - type = type || "fx"; - } - return this.each( function() { - var index, - data = dataPriv.get( this ), - queue = data[ type + "queue" ], - hooks = data[ type + "queueHooks" ], - timers = jQuery.timers, - length = queue ? 
queue.length : 0; - - // Enable finishing flag on private data - data.finish = true; - - // Empty the queue first - jQuery.queue( this, type, [] ); - - if ( hooks && hooks.stop ) { - hooks.stop.call( this, true ); - } - - // Look for any active animations, and finish them - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && timers[ index ].queue === type ) { - timers[ index ].anim.stop( true ); - timers.splice( index, 1 ); - } - } - - // Look for any animations in the old queue and finish them - for ( index = 0; index < length; index++ ) { - if ( queue[ index ] && queue[ index ].finish ) { - queue[ index ].finish.call( this ); - } - } - - // Turn off finishing flag - delete data.finish; - } ); - } -} ); - -jQuery.each( [ "toggle", "show", "hide" ], function( _i, name ) { - var cssFn = jQuery.fn[ name ]; - jQuery.fn[ name ] = function( speed, easing, callback ) { - return speed == null || typeof speed === "boolean" ? - cssFn.apply( this, arguments ) : - this.animate( genFx( name, true ), speed, easing, callback ); - }; -} ); - -// Generate shortcuts for custom animations -jQuery.each( { - slideDown: genFx( "show" ), - slideUp: genFx( "hide" ), - slideToggle: genFx( "toggle" ), - fadeIn: { opacity: "show" }, - fadeOut: { opacity: "hide" }, - fadeToggle: { opacity: "toggle" } -}, function( name, props ) { - jQuery.fn[ name ] = function( speed, easing, callback ) { - return this.animate( props, speed, easing, callback ); - }; -} ); - -jQuery.timers = []; -jQuery.fx.tick = function() { - var timer, - i = 0, - timers = jQuery.timers; - - fxNow = Date.now(); - - for ( ; i < timers.length; i++ ) { - timer = timers[ i ]; - - // Run the timer and safely remove it when done (allowing for external removal) - if ( !timer() && timers[ i ] === timer ) { - timers.splice( i--, 1 ); - } - } - - if ( !timers.length ) { - jQuery.fx.stop(); - } - fxNow = undefined; -}; - -jQuery.fx.timer = function( timer ) { - jQuery.timers.push( timer ); - jQuery.fx.start(); -}; - -jQuery.fx.interval = 13; -jQuery.fx.start = function() { - if ( inProgress ) { - return; - } - - inProgress = true; - schedule(); -}; - -jQuery.fx.stop = function() { - inProgress = null; -}; - -jQuery.fx.speeds = { - slow: 600, - fast: 200, - - // Default speed - _default: 400 -}; - - -// Based off of the plugin by Clint Helfers, with permission. -jQuery.fn.delay = function( time, type ) { - time = jQuery.fx ? 
jQuery.fx.speeds[ time ] || time : time; - type = type || "fx"; - - return this.queue( type, function( next, hooks ) { - var timeout = window.setTimeout( next, time ); - hooks.stop = function() { - window.clearTimeout( timeout ); - }; - } ); -}; - - -( function() { - var input = document.createElement( "input" ), - select = document.createElement( "select" ), - opt = select.appendChild( document.createElement( "option" ) ); - - input.type = "checkbox"; - - // Support: Android <=4.3 only - // Default value for a checkbox should be "on" - support.checkOn = input.value !== ""; - - // Support: IE <=11 only - // Must access selectedIndex to make default options select - support.optSelected = opt.selected; - - // Support: IE <=11 only - // An input loses its value after becoming a radio - input = document.createElement( "input" ); - input.value = "t"; - input.type = "radio"; - support.radioValue = input.value === "t"; -} )(); - - -var boolHook, - attrHandle = jQuery.expr.attrHandle; - -jQuery.fn.extend( { - attr: function( name, value ) { - return access( this, jQuery.attr, name, value, arguments.length > 1 ); - }, - - removeAttr: function( name ) { - return this.each( function() { - jQuery.removeAttr( this, name ); - } ); - } -} ); - -jQuery.extend( { - attr: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set attributes on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - // Fallback to prop when attributes are not supported - if ( typeof elem.getAttribute === "undefined" ) { - return jQuery.prop( elem, name, value ); - } - - // Attribute hooks are determined by the lowercase version - // Grab necessary hook if one is defined - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - hooks = jQuery.attrHooks[ name.toLowerCase() ] || - ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); - } - - if ( value !== undefined ) { - if ( value === null ) { - jQuery.removeAttr( elem, name ); - return; - } - - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - elem.setAttribute( name, value + "" ); - return value; - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - ret = jQuery.find.attr( elem, name ); - - // Non-existent attributes return null, we normalize to undefined - return ret == null ? 
undefined : ret; - }, - - attrHooks: { - type: { - set: function( elem, value ) { - if ( !support.radioValue && value === "radio" && - nodeName( elem, "input" ) ) { - var val = elem.value; - elem.setAttribute( "type", value ); - if ( val ) { - elem.value = val; - } - return value; - } - } - } - }, - - removeAttr: function( elem, value ) { - var name, - i = 0, - - // Attribute names can contain non-HTML whitespace characters - // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 - attrNames = value && value.match( rnothtmlwhite ); - - if ( attrNames && elem.nodeType === 1 ) { - while ( ( name = attrNames[ i++ ] ) ) { - elem.removeAttribute( name ); - } - } - } -} ); - -// Hooks for boolean attributes -boolHook = { - set: function( elem, value, name ) { - if ( value === false ) { - - // Remove boolean attributes when set to false - jQuery.removeAttr( elem, name ); - } else { - elem.setAttribute( name, name ); - } - return name; - } -}; - -jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( _i, name ) { - var getter = attrHandle[ name ] || jQuery.find.attr; - - attrHandle[ name ] = function( elem, name, isXML ) { - var ret, handle, - lowercaseName = name.toLowerCase(); - - if ( !isXML ) { - - // Avoid an infinite loop by temporarily removing this function from the getter - handle = attrHandle[ lowercaseName ]; - attrHandle[ lowercaseName ] = ret; - ret = getter( elem, name, isXML ) != null ? - lowercaseName : - null; - attrHandle[ lowercaseName ] = handle; - } - return ret; - }; -} ); - - - - -var rfocusable = /^(?:input|select|textarea|button)$/i, - rclickable = /^(?:a|area)$/i; - -jQuery.fn.extend( { - prop: function( name, value ) { - return access( this, jQuery.prop, name, value, arguments.length > 1 ); - }, - - removeProp: function( name ) { - return this.each( function() { - delete this[ jQuery.propFix[ name ] || name ]; - } ); - } -} ); - -jQuery.extend( { - prop: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set properties on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - - // Fix name and attach hooks - name = jQuery.propFix[ name ] || name; - hooks = jQuery.propHooks[ name ]; - } - - if ( value !== undefined ) { - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - return ( elem[ name ] = value ); - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - return elem[ name ]; - }, - - propHooks: { - tabIndex: { - get: function( elem ) { - - // Support: IE <=9 - 11 only - // elem.tabIndex doesn't always return the - // correct value when it hasn't been explicitly set - // Use proper attribute retrieval (trac-12072) - var tabindex = jQuery.find.attr( elem, "tabindex" ); - - if ( tabindex ) { - return parseInt( tabindex, 10 ); - } - - if ( - rfocusable.test( elem.nodeName ) || - rclickable.test( elem.nodeName ) && - elem.href - ) { - return 0; - } - - return -1; - } - } - }, - - propFix: { - "for": "htmlFor", - "class": "className" - } -} ); - -// Support: IE <=11 only -// Accessing the selectedIndex property -// forces the browser to respect setting selected -// on the option -// The getter ensures a default option is selected -// when in an optgroup -// eslint rule "no-unused-expressions" is disabled for this code -// since it considers such accessions noop -if ( !support.optSelected ) { - 
jQuery.propHooks.selected = { - get: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent && parent.parentNode ) { - parent.parentNode.selectedIndex; - } - return null; - }, - set: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent ) { - parent.selectedIndex; - - if ( parent.parentNode ) { - parent.parentNode.selectedIndex; - } - } - } - }; -} - -jQuery.each( [ - "tabIndex", - "readOnly", - "maxLength", - "cellSpacing", - "cellPadding", - "rowSpan", - "colSpan", - "useMap", - "frameBorder", - "contentEditable" -], function() { - jQuery.propFix[ this.toLowerCase() ] = this; -} ); - - - - - // Strip and collapse whitespace according to HTML spec - // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace - function stripAndCollapse( value ) { - var tokens = value.match( rnothtmlwhite ) || []; - return tokens.join( " " ); - } - - -function getClass( elem ) { - return elem.getAttribute && elem.getAttribute( "class" ) || ""; -} - -function classesToArray( value ) { - if ( Array.isArray( value ) ) { - return value; - } - if ( typeof value === "string" ) { - return value.match( rnothtmlwhite ) || []; - } - return []; -} - -jQuery.fn.extend( { - addClass: function( value ) { - var classNames, cur, curValue, className, i, finalValue; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - classNames = classesToArray( value ); - - if ( classNames.length ) { - return this.each( function() { - curValue = getClass( this ); - cur = this.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - for ( i = 0; i < classNames.length; i++ ) { - className = classNames[ i ]; - if ( cur.indexOf( " " + className + " " ) < 0 ) { - cur += className + " "; - } - } - - // Only assign if different to avoid unneeded rendering. - finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - this.setAttribute( "class", finalValue ); - } - } - } ); - } - - return this; - }, - - removeClass: function( value ) { - var classNames, cur, curValue, className, i, finalValue; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - if ( !arguments.length ) { - return this.attr( "class", "" ); - } - - classNames = classesToArray( value ); - - if ( classNames.length ) { - return this.each( function() { - curValue = getClass( this ); - - // This expression is here for better compressibility (see addClass) - cur = this.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - for ( i = 0; i < classNames.length; i++ ) { - className = classNames[ i ]; - - // Remove *all* instances - while ( cur.indexOf( " " + className + " " ) > -1 ) { - cur = cur.replace( " " + className + " ", " " ); - } - } - - // Only assign if different to avoid unneeded rendering. 
- finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - this.setAttribute( "class", finalValue ); - } - } - } ); - } - - return this; - }, - - toggleClass: function( value, stateVal ) { - var classNames, className, i, self, - type = typeof value, - isValidValue = type === "string" || Array.isArray( value ); - - if ( isFunction( value ) ) { - return this.each( function( i ) { - jQuery( this ).toggleClass( - value.call( this, i, getClass( this ), stateVal ), - stateVal - ); - } ); - } - - if ( typeof stateVal === "boolean" && isValidValue ) { - return stateVal ? this.addClass( value ) : this.removeClass( value ); - } - - classNames = classesToArray( value ); - - return this.each( function() { - if ( isValidValue ) { - - // Toggle individual class names - self = jQuery( this ); - - for ( i = 0; i < classNames.length; i++ ) { - className = classNames[ i ]; - - // Check each className given, space separated list - if ( self.hasClass( className ) ) { - self.removeClass( className ); - } else { - self.addClass( className ); - } - } - - // Toggle whole class name - } else if ( value === undefined || type === "boolean" ) { - className = getClass( this ); - if ( className ) { - - // Store className if set - dataPriv.set( this, "__className__", className ); - } - - // If the element has a class name or if we're passed `false`, - // then remove the whole classname (if there was one, the above saved it). - // Otherwise bring back whatever was previously saved (if anything), - // falling back to the empty string if nothing was stored. - if ( this.setAttribute ) { - this.setAttribute( "class", - className || value === false ? - "" : - dataPriv.get( this, "__className__" ) || "" - ); - } - } - } ); - }, - - hasClass: function( selector ) { - var className, elem, - i = 0; - - className = " " + selector + " "; - while ( ( elem = this[ i++ ] ) ) { - if ( elem.nodeType === 1 && - ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { - return true; - } - } - - return false; - } -} ); - - - - -var rreturn = /\r/g; - -jQuery.fn.extend( { - val: function( value ) { - var hooks, ret, valueIsFunction, - elem = this[ 0 ]; - - if ( !arguments.length ) { - if ( elem ) { - hooks = jQuery.valHooks[ elem.type ] || - jQuery.valHooks[ elem.nodeName.toLowerCase() ]; - - if ( hooks && - "get" in hooks && - ( ret = hooks.get( elem, "value" ) ) !== undefined - ) { - return ret; - } - - ret = elem.value; - - // Handle most common string cases - if ( typeof ret === "string" ) { - return ret.replace( rreturn, "" ); - } - - // Handle cases where value is null/undef or number - return ret == null ? "" : ret; - } - - return; - } - - valueIsFunction = isFunction( value ); - - return this.each( function( i ) { - var val; - - if ( this.nodeType !== 1 ) { - return; - } - - if ( valueIsFunction ) { - val = value.call( this, i, jQuery( this ).val() ); - } else { - val = value; - } - - // Treat null/undefined as ""; convert numbers to string - if ( val == null ) { - val = ""; - - } else if ( typeof val === "number" ) { - val += ""; - - } else if ( Array.isArray( val ) ) { - val = jQuery.map( val, function( value ) { - return value == null ? 
"" : value + ""; - } ); - } - - hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; - - // If set returns undefined, fall back to normal setting - if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { - this.value = val; - } - } ); - } -} ); - -jQuery.extend( { - valHooks: { - option: { - get: function( elem ) { - - var val = jQuery.find.attr( elem, "value" ); - return val != null ? - val : - - // Support: IE <=10 - 11 only - // option.text throws exceptions (trac-14686, trac-14858) - // Strip and collapse whitespace - // https://html.spec.whatwg.org/#strip-and-collapse-whitespace - stripAndCollapse( jQuery.text( elem ) ); - } - }, - select: { - get: function( elem ) { - var value, option, i, - options = elem.options, - index = elem.selectedIndex, - one = elem.type === "select-one", - values = one ? null : [], - max = one ? index + 1 : options.length; - - if ( index < 0 ) { - i = max; - - } else { - i = one ? index : 0; - } - - // Loop through all the selected options - for ( ; i < max; i++ ) { - option = options[ i ]; - - // Support: IE <=9 only - // IE8-9 doesn't update selected after form reset (trac-2551) - if ( ( option.selected || i === index ) && - - // Don't return options that are disabled or in a disabled optgroup - !option.disabled && - ( !option.parentNode.disabled || - !nodeName( option.parentNode, "optgroup" ) ) ) { - - // Get the specific value for the option - value = jQuery( option ).val(); - - // We don't need an array for one selects - if ( one ) { - return value; - } - - // Multi-Selects return an array - values.push( value ); - } - } - - return values; - }, - - set: function( elem, value ) { - var optionSet, option, - options = elem.options, - values = jQuery.makeArray( value ), - i = options.length; - - while ( i-- ) { - option = options[ i ]; - - /* eslint-disable no-cond-assign */ - - if ( option.selected = - jQuery.inArray( jQuery.valHooks.option.get( option ), values ) > -1 - ) { - optionSet = true; - } - - /* eslint-enable no-cond-assign */ - } - - // Force browsers to behave consistently when non-matching value is set - if ( !optionSet ) { - elem.selectedIndex = -1; - } - return values; - } - } - } -} ); - -// Radios and checkboxes getter/setter -jQuery.each( [ "radio", "checkbox" ], function() { - jQuery.valHooks[ this ] = { - set: function( elem, value ) { - if ( Array.isArray( value ) ) { - return ( elem.checked = jQuery.inArray( jQuery( elem ).val(), value ) > -1 ); - } - } - }; - if ( !support.checkOn ) { - jQuery.valHooks[ this ].get = function( elem ) { - return elem.getAttribute( "value" ) === null ? "on" : elem.value; - }; - } -} ); - - - - -// Return jQuery for attributes-only inclusion -var location = window.location; - -var nonce = { guid: Date.now() }; - -var rquery = ( /\?/ ); - - - -// Cross-browser xml parsing -jQuery.parseXML = function( data ) { - var xml, parserErrorElem; - if ( !data || typeof data !== "string" ) { - return null; - } - - // Support: IE 9 - 11 only - // IE throws on parseFromString with invalid input. - try { - xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); - } catch ( e ) {} - - parserErrorElem = xml && xml.getElementsByTagName( "parsererror" )[ 0 ]; - if ( !xml || parserErrorElem ) { - jQuery.error( "Invalid XML: " + ( - parserErrorElem ? 
- jQuery.map( parserErrorElem.childNodes, function( el ) { - return el.textContent; - } ).join( "\n" ) : - data - ) ); - } - return xml; -}; - - -var rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, - stopPropagationCallback = function( e ) { - e.stopPropagation(); - }; - -jQuery.extend( jQuery.event, { - - trigger: function( event, data, elem, onlyHandlers ) { - - var i, cur, tmp, bubbleType, ontype, handle, special, lastElement, - eventPath = [ elem || document ], - type = hasOwn.call( event, "type" ) ? event.type : event, - namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split( "." ) : []; - - cur = lastElement = tmp = elem = elem || document; - - // Don't do events on text and comment nodes - if ( elem.nodeType === 3 || elem.nodeType === 8 ) { - return; - } - - // focus/blur morphs to focusin/out; ensure we're not firing them right now - if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { - return; - } - - if ( type.indexOf( "." ) > -1 ) { - - // Namespaced trigger; create a regexp to match event type in handle() - namespaces = type.split( "." ); - type = namespaces.shift(); - namespaces.sort(); - } - ontype = type.indexOf( ":" ) < 0 && "on" + type; - - // Caller can pass in a jQuery.Event object, Object, or just an event type string - event = event[ jQuery.expando ] ? - event : - new jQuery.Event( type, typeof event === "object" && event ); - - // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) - event.isTrigger = onlyHandlers ? 2 : 3; - event.namespace = namespaces.join( "." ); - event.rnamespace = event.namespace ? - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ) : - null; - - // Clean up the event in case it is being reused - event.result = undefined; - if ( !event.target ) { - event.target = elem; - } - - // Clone any incoming data and prepend the event, creating the handler arg list - data = data == null ? - [ event ] : - jQuery.makeArray( data, [ event ] ); - - // Allow special events to draw outside the lines - special = jQuery.event.special[ type ] || {}; - if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) { - return; - } - - // Determine event propagation path in advance, per W3C events spec (trac-9951) - // Bubble up to document, then to window; watch for a global ownerDocument var (trac-9724) - if ( !onlyHandlers && !special.noBubble && !isWindow( elem ) ) { - - bubbleType = special.delegateType || type; - if ( !rfocusMorph.test( bubbleType + type ) ) { - cur = cur.parentNode; - } - for ( ; cur; cur = cur.parentNode ) { - eventPath.push( cur ); - tmp = cur; - } - - // Only add window if we got to document (e.g., not plain obj or detached DOM) - if ( tmp === ( elem.ownerDocument || document ) ) { - eventPath.push( tmp.defaultView || tmp.parentWindow || window ); - } - } - - // Fire handlers on the event path - i = 0; - while ( ( cur = eventPath[ i++ ] ) && !event.isPropagationStopped() ) { - lastElement = cur; - event.type = i > 1 ? 
- bubbleType : - special.bindType || type; - - // jQuery handler - handle = ( dataPriv.get( cur, "events" ) || Object.create( null ) )[ event.type ] && - dataPriv.get( cur, "handle" ); - if ( handle ) { - handle.apply( cur, data ); - } - - // Native handler - handle = ontype && cur[ ontype ]; - if ( handle && handle.apply && acceptData( cur ) ) { - event.result = handle.apply( cur, data ); - if ( event.result === false ) { - event.preventDefault(); - } - } - } - event.type = type; - - // If nobody prevented the default action, do it now - if ( !onlyHandlers && !event.isDefaultPrevented() ) { - - if ( ( !special._default || - special._default.apply( eventPath.pop(), data ) === false ) && - acceptData( elem ) ) { - - // Call a native DOM method on the target with the same name as the event. - // Don't do default actions on window, that's where global variables be (trac-6170) - if ( ontype && isFunction( elem[ type ] ) && !isWindow( elem ) ) { - - // Don't re-trigger an onFOO event when we call its FOO() method - tmp = elem[ ontype ]; - - if ( tmp ) { - elem[ ontype ] = null; - } - - // Prevent re-triggering of the same event, since we already bubbled it above - jQuery.event.triggered = type; - - if ( event.isPropagationStopped() ) { - lastElement.addEventListener( type, stopPropagationCallback ); - } - - elem[ type ](); - - if ( event.isPropagationStopped() ) { - lastElement.removeEventListener( type, stopPropagationCallback ); - } - - jQuery.event.triggered = undefined; - - if ( tmp ) { - elem[ ontype ] = tmp; - } - } - } - } - - return event.result; - }, - - // Piggyback on a donor event to simulate a different one - // Used only for `focus(in | out)` events - simulate: function( type, elem, event ) { - var e = jQuery.extend( - new jQuery.Event(), - event, - { - type: type, - isSimulated: true - } - ); - - jQuery.event.trigger( e, null, elem ); - } - -} ); - -jQuery.fn.extend( { - - trigger: function( type, data ) { - return this.each( function() { - jQuery.event.trigger( type, data, this ); - } ); - }, - triggerHandler: function( type, data ) { - var elem = this[ 0 ]; - if ( elem ) { - return jQuery.event.trigger( type, data, elem, true ); - } - } -} ); - - -var - rbracket = /\[\]$/, - rCRLF = /\r?\n/g, - rsubmitterTypes = /^(?:submit|button|image|reset|file)$/i, - rsubmittable = /^(?:input|select|textarea|keygen)/i; - -function buildParams( prefix, obj, traditional, add ) { - var name; - - if ( Array.isArray( obj ) ) { - - // Serialize array item. - jQuery.each( obj, function( i, v ) { - if ( traditional || rbracket.test( prefix ) ) { - - // Treat each array item as a scalar. - add( prefix, v ); - - } else { - - // Item is non-scalar (array or object), encode its numeric index. - buildParams( - prefix + "[" + ( typeof v === "object" && v != null ? i : "" ) + "]", - v, - traditional, - add - ); - } - } ); - - } else if ( !traditional && toType( obj ) === "object" ) { - - // Serialize object item. - for ( name in obj ) { - buildParams( prefix + "[" + name + "]", obj[ name ], traditional, add ); - } - - } else { - - // Serialize scalar item. - add( prefix, obj ); - } -} - -// Serialize an array of form elements or a set of -// key/values into a query string -jQuery.param = function( a, traditional ) { - var prefix, - s = [], - add = function( key, valueOrFunction ) { - - // If value is a function, invoke it and use its return value - var value = isFunction( valueOrFunction ) ? 
- valueOrFunction() : - valueOrFunction; - - s[ s.length ] = encodeURIComponent( key ) + "=" + - encodeURIComponent( value == null ? "" : value ); - }; - - if ( a == null ) { - return ""; - } - - // If an array was passed in, assume that it is an array of form elements. - if ( Array.isArray( a ) || ( a.jquery && !jQuery.isPlainObject( a ) ) ) { - - // Serialize the form elements - jQuery.each( a, function() { - add( this.name, this.value ); - } ); - - } else { - - // If traditional, encode the "old" way (the way 1.3.2 or older - // did it), otherwise encode params recursively. - for ( prefix in a ) { - buildParams( prefix, a[ prefix ], traditional, add ); - } - } - - // Return the resulting serialization - return s.join( "&" ); -}; - -jQuery.fn.extend( { - serialize: function() { - return jQuery.param( this.serializeArray() ); - }, - serializeArray: function() { - return this.map( function() { - - // Can add propHook for "elements" to filter or add form elements - var elements = jQuery.prop( this, "elements" ); - return elements ? jQuery.makeArray( elements ) : this; - } ).filter( function() { - var type = this.type; - - // Use .is( ":disabled" ) so that fieldset[disabled] works - return this.name && !jQuery( this ).is( ":disabled" ) && - rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && - ( this.checked || !rcheckableType.test( type ) ); - } ).map( function( _i, elem ) { - var val = jQuery( this ).val(); - - if ( val == null ) { - return null; - } - - if ( Array.isArray( val ) ) { - return jQuery.map( val, function( val ) { - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ); - } - - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ).get(); - } -} ); - - -var - r20 = /%20/g, - rhash = /#.*$/, - rantiCache = /([?&])_=[^&]*/, - rheaders = /^(.*?):[ \t]*([^\r\n]*)$/mg, - - // trac-7653, trac-8125, trac-8152: local protocol detection - rlocalProtocol = /^(?:about|app|app-storage|.+-extension|file|res|widget):$/, - rnoContent = /^(?:GET|HEAD)$/, - rprotocol = /^\/\//, - - /* Prefilters - * 1) They are useful to introduce custom dataTypes (see ajax/jsonp.js for an example) - * 2) These are called: - * - BEFORE asking for a transport - * - AFTER param serialization (s.data is a string if s.processData is true) - * 3) key is the dataType - * 4) the catchall symbol "*" can be used - * 5) execution will start with transport dataType and THEN continue down to "*" if needed - */ - prefilters = {}, - - /* Transports bindings - * 1) key is the dataType - * 2) the catchall symbol "*" can be used - * 3) selection will start with transport dataType and THEN go to "*" if needed - */ - transports = {}, - - // Avoid comment-prolog char sequence (trac-10098); must appease lint and evade compression - allTypes = "*/".concat( "*" ), - - // Anchor tag for parsing the document origin - originAnchor = document.createElement( "a" ); - -originAnchor.href = location.href; - -// Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport -function addToPrefiltersOrTransports( structure ) { - - // dataTypeExpression is optional and defaults to "*" - return function( dataTypeExpression, func ) { - - if ( typeof dataTypeExpression !== "string" ) { - func = dataTypeExpression; - dataTypeExpression = "*"; - } - - var dataType, - i = 0, - dataTypes = dataTypeExpression.toLowerCase().match( rnothtmlwhite ) || []; - - if ( isFunction( func ) ) { - - // For each dataType in the dataTypeExpression - while ( ( dataType = dataTypes[ i++ ] ) ) { - - // Prepend 
if requested - if ( dataType[ 0 ] === "+" ) { - dataType = dataType.slice( 1 ) || "*"; - ( structure[ dataType ] = structure[ dataType ] || [] ).unshift( func ); - - // Otherwise append - } else { - ( structure[ dataType ] = structure[ dataType ] || [] ).push( func ); - } - } - } - }; -} - -// Base inspection function for prefilters and transports -function inspectPrefiltersOrTransports( structure, options, originalOptions, jqXHR ) { - - var inspected = {}, - seekingTransport = ( structure === transports ); - - function inspect( dataType ) { - var selected; - inspected[ dataType ] = true; - jQuery.each( structure[ dataType ] || [], function( _, prefilterOrFactory ) { - var dataTypeOrTransport = prefilterOrFactory( options, originalOptions, jqXHR ); - if ( typeof dataTypeOrTransport === "string" && - !seekingTransport && !inspected[ dataTypeOrTransport ] ) { - - options.dataTypes.unshift( dataTypeOrTransport ); - inspect( dataTypeOrTransport ); - return false; - } else if ( seekingTransport ) { - return !( selected = dataTypeOrTransport ); - } - } ); - return selected; - } - - return inspect( options.dataTypes[ 0 ] ) || !inspected[ "*" ] && inspect( "*" ); -} - -// A special extend for ajax options -// that takes "flat" options (not to be deep extended) -// Fixes trac-9887 -function ajaxExtend( target, src ) { - var key, deep, - flatOptions = jQuery.ajaxSettings.flatOptions || {}; - - for ( key in src ) { - if ( src[ key ] !== undefined ) { - ( flatOptions[ key ] ? target : ( deep || ( deep = {} ) ) )[ key ] = src[ key ]; - } - } - if ( deep ) { - jQuery.extend( true, target, deep ); - } - - return target; -} - -/* Handles responses to an ajax request: - * - finds the right dataType (mediates between content-type and expected dataType) - * - returns the corresponding response - */ -function ajaxHandleResponses( s, jqXHR, responses ) { - - var ct, type, finalDataType, firstDataType, - contents = s.contents, - dataTypes = s.dataTypes; - - // Remove auto dataType and get content-type in the process - while ( dataTypes[ 0 ] === "*" ) { - dataTypes.shift(); - if ( ct === undefined ) { - ct = s.mimeType || jqXHR.getResponseHeader( "Content-Type" ); - } - } - - // Check if we're dealing with a known content-type - if ( ct ) { - for ( type in contents ) { - if ( contents[ type ] && contents[ type ].test( ct ) ) { - dataTypes.unshift( type ); - break; - } - } - } - - // Check to see if we have a response for the expected dataType - if ( dataTypes[ 0 ] in responses ) { - finalDataType = dataTypes[ 0 ]; - } else { - - // Try convertible dataTypes - for ( type in responses ) { - if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[ 0 ] ] ) { - finalDataType = type; - break; - } - if ( !firstDataType ) { - firstDataType = type; - } - } - - // Or just use first one - finalDataType = finalDataType || firstDataType; - } - - // If we found a dataType - // We add the dataType to the list if needed - // and return the corresponding response - if ( finalDataType ) { - if ( finalDataType !== dataTypes[ 0 ] ) { - dataTypes.unshift( finalDataType ); - } - return responses[ finalDataType ]; - } -} - -/* Chain conversions given the request and the original response - * Also sets the responseXXX fields on the jqXHR instance - */ -function ajaxConvert( s, response, jqXHR, isSuccess ) { - var conv2, current, conv, tmp, prev, - converters = {}, - - // Work with a copy of dataTypes in case we need to modify it for conversion - dataTypes = s.dataTypes.slice(); - - // Create converters map with lowercased keys - 
if ( dataTypes[ 1 ] ) { - for ( conv in s.converters ) { - converters[ conv.toLowerCase() ] = s.converters[ conv ]; - } - } - - current = dataTypes.shift(); - - // Convert to each sequential dataType - while ( current ) { - - if ( s.responseFields[ current ] ) { - jqXHR[ s.responseFields[ current ] ] = response; - } - - // Apply the dataFilter if provided - if ( !prev && isSuccess && s.dataFilter ) { - response = s.dataFilter( response, s.dataType ); - } - - prev = current; - current = dataTypes.shift(); - - if ( current ) { - - // There's only work to do if current dataType is non-auto - if ( current === "*" ) { - - current = prev; - - // Convert response if prev dataType is non-auto and differs from current - } else if ( prev !== "*" && prev !== current ) { - - // Seek a direct converter - conv = converters[ prev + " " + current ] || converters[ "* " + current ]; - - // If none found, seek a pair - if ( !conv ) { - for ( conv2 in converters ) { - - // If conv2 outputs current - tmp = conv2.split( " " ); - if ( tmp[ 1 ] === current ) { - - // If prev can be converted to accepted input - conv = converters[ prev + " " + tmp[ 0 ] ] || - converters[ "* " + tmp[ 0 ] ]; - if ( conv ) { - - // Condense equivalence converters - if ( conv === true ) { - conv = converters[ conv2 ]; - - // Otherwise, insert the intermediate dataType - } else if ( converters[ conv2 ] !== true ) { - current = tmp[ 0 ]; - dataTypes.unshift( tmp[ 1 ] ); - } - break; - } - } - } - } - - // Apply converter (if not an equivalence) - if ( conv !== true ) { - - // Unless errors are allowed to bubble, catch and return them - if ( conv && s.throws ) { - response = conv( response ); - } else { - try { - response = conv( response ); - } catch ( e ) { - return { - state: "parsererror", - error: conv ? e : "No conversion from " + prev + " to " + current - }; - } - } - } - } - } - } - - return { state: "success", data: response }; -} - -jQuery.extend( { - - // Counter for holding the number of active queries - active: 0, - - // Last-Modified header cache for next request - lastModified: {}, - etag: {}, - - ajaxSettings: { - url: location.href, - type: "GET", - isLocal: rlocalProtocol.test( location.protocol ), - global: true, - processData: true, - async: true, - contentType: "application/x-www-form-urlencoded; charset=UTF-8", - - /* - timeout: 0, - data: null, - dataType: null, - username: null, - password: null, - cache: null, - throws: false, - traditional: false, - headers: {}, - */ - - accepts: { - "*": allTypes, - text: "text/plain", - html: "text/html", - xml: "application/xml, text/xml", - json: "application/json, text/javascript" - }, - - contents: { - xml: /\bxml\b/, - html: /\bhtml/, - json: /\bjson\b/ - }, - - responseFields: { - xml: "responseXML", - text: "responseText", - json: "responseJSON" - }, - - // Data converters - // Keys separate source (or catchall "*") and destination types with a single space - converters: { - - // Convert anything to text - "* text": String, - - // Text to html (true = no transformation) - "text html": true, - - // Evaluate text as a json expression - "text json": JSON.parse, - - // Parse text as xml - "text xml": jQuery.parseXML - }, - - // For options that shouldn't be deep extended: - // you can add your own custom options here if - // and when you create one that shouldn't be - // deep extended (see ajaxExtend) - flatOptions: { - url: true, - context: true - } - }, - - // Creates a full fledged settings object into target - // with both ajaxSettings and settings fields. 
- // If target is omitted, writes into ajaxSettings. - ajaxSetup: function( target, settings ) { - return settings ? - - // Building a settings object - ajaxExtend( ajaxExtend( target, jQuery.ajaxSettings ), settings ) : - - // Extending ajaxSettings - ajaxExtend( jQuery.ajaxSettings, target ); - }, - - ajaxPrefilter: addToPrefiltersOrTransports( prefilters ), - ajaxTransport: addToPrefiltersOrTransports( transports ), - - // Main method - ajax: function( url, options ) { - - // If url is an object, simulate pre-1.5 signature - if ( typeof url === "object" ) { - options = url; - url = undefined; - } - - // Force options to be an object - options = options || {}; - - var transport, - - // URL without anti-cache param - cacheURL, - - // Response headers - responseHeadersString, - responseHeaders, - - // timeout handle - timeoutTimer, - - // Url cleanup var - urlAnchor, - - // Request state (becomes false upon send and true upon completion) - completed, - - // To know if global events are to be dispatched - fireGlobals, - - // Loop variable - i, - - // uncached part of the url - uncached, - - // Create the final options object - s = jQuery.ajaxSetup( {}, options ), - - // Callbacks context - callbackContext = s.context || s, - - // Context for global events is callbackContext if it is a DOM node or jQuery collection - globalEventContext = s.context && - ( callbackContext.nodeType || callbackContext.jquery ) ? - jQuery( callbackContext ) : - jQuery.event, - - // Deferreds - deferred = jQuery.Deferred(), - completeDeferred = jQuery.Callbacks( "once memory" ), - - // Status-dependent callbacks - statusCode = s.statusCode || {}, - - // Headers (they are sent all at once) - requestHeaders = {}, - requestHeadersNames = {}, - - // Default abort message - strAbort = "canceled", - - // Fake xhr - jqXHR = { - readyState: 0, - - // Builds headers hashtable if needed - getResponseHeader: function( key ) { - var match; - if ( completed ) { - if ( !responseHeaders ) { - responseHeaders = {}; - while ( ( match = rheaders.exec( responseHeadersString ) ) ) { - responseHeaders[ match[ 1 ].toLowerCase() + " " ] = - ( responseHeaders[ match[ 1 ].toLowerCase() + " " ] || [] ) - .concat( match[ 2 ] ); - } - } - match = responseHeaders[ key.toLowerCase() + " " ]; - } - return match == null ? null : match.join( ", " ); - }, - - // Raw string - getAllResponseHeaders: function() { - return completed ? 
responseHeadersString : null; - }, - - // Caches the header - setRequestHeader: function( name, value ) { - if ( completed == null ) { - name = requestHeadersNames[ name.toLowerCase() ] = - requestHeadersNames[ name.toLowerCase() ] || name; - requestHeaders[ name ] = value; - } - return this; - }, - - // Overrides response content-type header - overrideMimeType: function( type ) { - if ( completed == null ) { - s.mimeType = type; - } - return this; - }, - - // Status-dependent callbacks - statusCode: function( map ) { - var code; - if ( map ) { - if ( completed ) { - - // Execute the appropriate callbacks - jqXHR.always( map[ jqXHR.status ] ); - } else { - - // Lazy-add the new callbacks in a way that preserves old ones - for ( code in map ) { - statusCode[ code ] = [ statusCode[ code ], map[ code ] ]; - } - } - } - return this; - }, - - // Cancel the request - abort: function( statusText ) { - var finalText = statusText || strAbort; - if ( transport ) { - transport.abort( finalText ); - } - done( 0, finalText ); - return this; - } - }; - - // Attach deferreds - deferred.promise( jqXHR ); - - // Add protocol if not provided (prefilters might expect it) - // Handle falsy url in the settings object (trac-10093: consistency with old signature) - // We also use the url parameter if available - s.url = ( ( url || s.url || location.href ) + "" ) - .replace( rprotocol, location.protocol + "//" ); - - // Alias method option to type as per ticket trac-12004 - s.type = options.method || options.type || s.method || s.type; - - // Extract dataTypes list - s.dataTypes = ( s.dataType || "*" ).toLowerCase().match( rnothtmlwhite ) || [ "" ]; - - // A cross-domain request is in order when the origin doesn't match the current origin. - if ( s.crossDomain == null ) { - urlAnchor = document.createElement( "a" ); - - // Support: IE <=8 - 11, Edge 12 - 15 - // IE throws exception on accessing the href property if url is malformed, - // e.g. 
http://example.com:80x/ - try { - urlAnchor.href = s.url; - - // Support: IE <=8 - 11 only - // Anchor's host property isn't correctly set when s.url is relative - urlAnchor.href = urlAnchor.href; - s.crossDomain = originAnchor.protocol + "//" + originAnchor.host !== - urlAnchor.protocol + "//" + urlAnchor.host; - } catch ( e ) { - - // If there is an error parsing the URL, assume it is crossDomain, - // it can be rejected by the transport if it is invalid - s.crossDomain = true; - } - } - - // Convert data if not already a string - if ( s.data && s.processData && typeof s.data !== "string" ) { - s.data = jQuery.param( s.data, s.traditional ); - } - - // Apply prefilters - inspectPrefiltersOrTransports( prefilters, s, options, jqXHR ); - - // If request was aborted inside a prefilter, stop there - if ( completed ) { - return jqXHR; - } - - // We can fire global events as of now if asked to - // Don't fire events if jQuery.event is undefined in an AMD-usage scenario (trac-15118) - fireGlobals = jQuery.event && s.global; - - // Watch for a new set of requests - if ( fireGlobals && jQuery.active++ === 0 ) { - jQuery.event.trigger( "ajaxStart" ); - } - - // Uppercase the type - s.type = s.type.toUpperCase(); - - // Determine if request has content - s.hasContent = !rnoContent.test( s.type ); - - // Save the URL in case we're toying with the If-Modified-Since - // and/or If-None-Match header later on - // Remove hash to simplify url manipulation - cacheURL = s.url.replace( rhash, "" ); - - // More options handling for requests with no content - if ( !s.hasContent ) { - - // Remember the hash so we can put it back - uncached = s.url.slice( cacheURL.length ); - - // If data is available and should be processed, append data to url - if ( s.data && ( s.processData || typeof s.data === "string" ) ) { - cacheURL += ( rquery.test( cacheURL ) ? "&" : "?" ) + s.data; - - // trac-9682: remove data so that it's not used in an eventual retry - delete s.data; - } - - // Add or update anti-cache param if needed - if ( s.cache === false ) { - cacheURL = cacheURL.replace( rantiCache, "$1" ); - uncached = ( rquery.test( cacheURL ) ? "&" : "?" ) + "_=" + ( nonce.guid++ ) + - uncached; - } - - // Put hash and anti-cache on the URL that will be requested (gh-1732) - s.url = cacheURL + uncached; - - // Change '%20' to '+' if this is encoded form body content (gh-2658) - } else if ( s.data && s.processData && - ( s.contentType || "" ).indexOf( "application/x-www-form-urlencoded" ) === 0 ) { - s.data = s.data.replace( r20, "+" ); - } - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. - if ( s.ifModified ) { - if ( jQuery.lastModified[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-Modified-Since", jQuery.lastModified[ cacheURL ] ); - } - if ( jQuery.etag[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-None-Match", jQuery.etag[ cacheURL ] ); - } - } - - // Set the correct header, if data is being sent - if ( s.data && s.hasContent && s.contentType !== false || options.contentType ) { - jqXHR.setRequestHeader( "Content-Type", s.contentType ); - } - - // Set the Accepts header for the server, depending on the dataType - jqXHR.setRequestHeader( - "Accept", - s.dataTypes[ 0 ] && s.accepts[ s.dataTypes[ 0 ] ] ? - s.accepts[ s.dataTypes[ 0 ] ] + - ( s.dataTypes[ 0 ] !== "*" ? 
", " + allTypes + "; q=0.01" : "" ) : - s.accepts[ "*" ] - ); - - // Check for headers option - for ( i in s.headers ) { - jqXHR.setRequestHeader( i, s.headers[ i ] ); - } - - // Allow custom headers/mimetypes and early abort - if ( s.beforeSend && - ( s.beforeSend.call( callbackContext, jqXHR, s ) === false || completed ) ) { - - // Abort if not done already and return - return jqXHR.abort(); - } - - // Aborting is no longer a cancellation - strAbort = "abort"; - - // Install callbacks on deferreds - completeDeferred.add( s.complete ); - jqXHR.done( s.success ); - jqXHR.fail( s.error ); - - // Get transport - transport = inspectPrefiltersOrTransports( transports, s, options, jqXHR ); - - // If no transport, we auto-abort - if ( !transport ) { - done( -1, "No Transport" ); - } else { - jqXHR.readyState = 1; - - // Send global event - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxSend", [ jqXHR, s ] ); - } - - // If request was aborted inside ajaxSend, stop there - if ( completed ) { - return jqXHR; - } - - // Timeout - if ( s.async && s.timeout > 0 ) { - timeoutTimer = window.setTimeout( function() { - jqXHR.abort( "timeout" ); - }, s.timeout ); - } - - try { - completed = false; - transport.send( requestHeaders, done ); - } catch ( e ) { - - // Rethrow post-completion exceptions - if ( completed ) { - throw e; - } - - // Propagate others as results - done( -1, e ); - } - } - - // Callback for when everything is done - function done( status, nativeStatusText, responses, headers ) { - var isSuccess, success, error, response, modified, - statusText = nativeStatusText; - - // Ignore repeat invocations - if ( completed ) { - return; - } - - completed = true; - - // Clear timeout if it exists - if ( timeoutTimer ) { - window.clearTimeout( timeoutTimer ); - } - - // Dereference transport for early garbage collection - // (no matter how long the jqXHR object will be used) - transport = undefined; - - // Cache response headers - responseHeadersString = headers || ""; - - // Set readyState - jqXHR.readyState = status > 0 ? 4 : 0; - - // Determine if successful - isSuccess = status >= 200 && status < 300 || status === 304; - - // Get response data - if ( responses ) { - response = ajaxHandleResponses( s, jqXHR, responses ); - } - - // Use a noop converter for missing script but not if jsonp - if ( !isSuccess && - jQuery.inArray( "script", s.dataTypes ) > -1 && - jQuery.inArray( "json", s.dataTypes ) < 0 ) { - s.converters[ "text script" ] = function() {}; - } - - // Convert no matter what (that way responseXXX fields are always set) - response = ajaxConvert( s, response, jqXHR, isSuccess ); - - // If successful, handle type chaining - if ( isSuccess ) { - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. 
- if ( s.ifModified ) { - modified = jqXHR.getResponseHeader( "Last-Modified" ); - if ( modified ) { - jQuery.lastModified[ cacheURL ] = modified; - } - modified = jqXHR.getResponseHeader( "etag" ); - if ( modified ) { - jQuery.etag[ cacheURL ] = modified; - } - } - - // if no content - if ( status === 204 || s.type === "HEAD" ) { - statusText = "nocontent"; - - // if not modified - } else if ( status === 304 ) { - statusText = "notmodified"; - - // If we have data, let's convert it - } else { - statusText = response.state; - success = response.data; - error = response.error; - isSuccess = !error; - } - } else { - - // Extract error from statusText and normalize for non-aborts - error = statusText; - if ( status || !statusText ) { - statusText = "error"; - if ( status < 0 ) { - status = 0; - } - } - } - - // Set data for the fake xhr object - jqXHR.status = status; - jqXHR.statusText = ( nativeStatusText || statusText ) + ""; - - // Success/Error - if ( isSuccess ) { - deferred.resolveWith( callbackContext, [ success, statusText, jqXHR ] ); - } else { - deferred.rejectWith( callbackContext, [ jqXHR, statusText, error ] ); - } - - // Status-dependent callbacks - jqXHR.statusCode( statusCode ); - statusCode = undefined; - - if ( fireGlobals ) { - globalEventContext.trigger( isSuccess ? "ajaxSuccess" : "ajaxError", - [ jqXHR, s, isSuccess ? success : error ] ); - } - - // Complete - completeDeferred.fireWith( callbackContext, [ jqXHR, statusText ] ); - - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] ); - - // Handle the global AJAX counter - if ( !( --jQuery.active ) ) { - jQuery.event.trigger( "ajaxStop" ); - } - } - } - - return jqXHR; - }, - - getJSON: function( url, data, callback ) { - return jQuery.get( url, data, callback, "json" ); - }, - - getScript: function( url, callback ) { - return jQuery.get( url, undefined, callback, "script" ); - } -} ); - -jQuery.each( [ "get", "post" ], function( _i, method ) { - jQuery[ method ] = function( url, data, callback, type ) { - - // Shift arguments if data argument was omitted - if ( isFunction( data ) ) { - type = type || callback; - callback = data; - data = undefined; - } - - // The url can be an options object (which then must have .url) - return jQuery.ajax( jQuery.extend( { - url: url, - type: method, - dataType: type, - data: data, - success: callback - }, jQuery.isPlainObject( url ) && url ) ); - }; -} ); - -jQuery.ajaxPrefilter( function( s ) { - var i; - for ( i in s.headers ) { - if ( i.toLowerCase() === "content-type" ) { - s.contentType = s.headers[ i ] || ""; - } - } -} ); - - -jQuery._evalUrl = function( url, options, doc ) { - return jQuery.ajax( { - url: url, - - // Make this explicit, since user can override this through ajaxSetup (trac-11264) - type: "GET", - dataType: "script", - cache: true, - async: false, - global: false, - - // Only evaluate the response if it is successful (gh-4126) - // dataFilter is not invoked for failure responses, so using it instead - // of the default converter is kludgy but it works. 
- converters: { - "text script": function() {} - }, - dataFilter: function( response ) { - jQuery.globalEval( response, options, doc ); - } - } ); -}; - - -jQuery.fn.extend( { - wrapAll: function( html ) { - var wrap; - - if ( this[ 0 ] ) { - if ( isFunction( html ) ) { - html = html.call( this[ 0 ] ); - } - - // The elements to wrap the target around - wrap = jQuery( html, this[ 0 ].ownerDocument ).eq( 0 ).clone( true ); - - if ( this[ 0 ].parentNode ) { - wrap.insertBefore( this[ 0 ] ); - } - - wrap.map( function() { - var elem = this; - - while ( elem.firstElementChild ) { - elem = elem.firstElementChild; - } - - return elem; - } ).append( this ); - } - - return this; - }, - - wrapInner: function( html ) { - if ( isFunction( html ) ) { - return this.each( function( i ) { - jQuery( this ).wrapInner( html.call( this, i ) ); - } ); - } - - return this.each( function() { - var self = jQuery( this ), - contents = self.contents(); - - if ( contents.length ) { - contents.wrapAll( html ); - - } else { - self.append( html ); - } - } ); - }, - - wrap: function( html ) { - var htmlIsFunction = isFunction( html ); - - return this.each( function( i ) { - jQuery( this ).wrapAll( htmlIsFunction ? html.call( this, i ) : html ); - } ); - }, - - unwrap: function( selector ) { - this.parent( selector ).not( "body" ).each( function() { - jQuery( this ).replaceWith( this.childNodes ); - } ); - return this; - } -} ); - - -jQuery.expr.pseudos.hidden = function( elem ) { - return !jQuery.expr.pseudos.visible( elem ); -}; -jQuery.expr.pseudos.visible = function( elem ) { - return !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); -}; - - - - -jQuery.ajaxSettings.xhr = function() { - try { - return new window.XMLHttpRequest(); - } catch ( e ) {} -}; - -var xhrSuccessStatus = { - - // File protocol always yields status code 0, assume 200 - 0: 200, - - // Support: IE <=9 only - // trac-1450: sometimes IE returns 1223 when it should be 204 - 1223: 204 - }, - xhrSupported = jQuery.ajaxSettings.xhr(); - -support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported ); -support.ajax = xhrSupported = !!xhrSupported; - -jQuery.ajaxTransport( function( options ) { - var callback, errorCallback; - - // Cross domain only allowed if supported through XMLHttpRequest - if ( support.cors || xhrSupported && !options.crossDomain ) { - return { - send: function( headers, complete ) { - var i, - xhr = options.xhr(); - - xhr.open( - options.type, - options.url, - options.async, - options.username, - options.password - ); - - // Apply custom fields if provided - if ( options.xhrFields ) { - for ( i in options.xhrFields ) { - xhr[ i ] = options.xhrFields[ i ]; - } - } - - // Override mime type if needed - if ( options.mimeType && xhr.overrideMimeType ) { - xhr.overrideMimeType( options.mimeType ); - } - - // X-Requested-With header - // For cross-domain requests, seeing as conditions for a preflight are - // akin to a jigsaw puzzle, we simply never set it to be sure. - // (it can always be set on a per-request basis or even using ajaxSetup) - // For same-domain requests, won't change header if already provided. 
- if ( !options.crossDomain && !headers[ "X-Requested-With" ] ) { - headers[ "X-Requested-With" ] = "XMLHttpRequest"; - } - - // Set headers - for ( i in headers ) { - xhr.setRequestHeader( i, headers[ i ] ); - } - - // Callback - callback = function( type ) { - return function() { - if ( callback ) { - callback = errorCallback = xhr.onload = - xhr.onerror = xhr.onabort = xhr.ontimeout = - xhr.onreadystatechange = null; - - if ( type === "abort" ) { - xhr.abort(); - } else if ( type === "error" ) { - - // Support: IE <=9 only - // On a manual native abort, IE9 throws - // errors on any property access that is not readyState - if ( typeof xhr.status !== "number" ) { - complete( 0, "error" ); - } else { - complete( - - // File: protocol always yields status 0; see trac-8605, trac-14207 - xhr.status, - xhr.statusText - ); - } - } else { - complete( - xhrSuccessStatus[ xhr.status ] || xhr.status, - xhr.statusText, - - // Support: IE <=9 only - // IE9 has no XHR2 but throws on binary (trac-11426) - // For XHR2 non-text, let the caller handle it (gh-2498) - ( xhr.responseType || "text" ) !== "text" || - typeof xhr.responseText !== "string" ? - { binary: xhr.response } : - { text: xhr.responseText }, - xhr.getAllResponseHeaders() - ); - } - } - }; - }; - - // Listen to events - xhr.onload = callback(); - errorCallback = xhr.onerror = xhr.ontimeout = callback( "error" ); - - // Support: IE 9 only - // Use onreadystatechange to replace onabort - // to handle uncaught aborts - if ( xhr.onabort !== undefined ) { - xhr.onabort = errorCallback; - } else { - xhr.onreadystatechange = function() { - - // Check readyState before timeout as it changes - if ( xhr.readyState === 4 ) { - - // Allow onerror to be called first, - // but that will not handle a native abort - // Also, save errorCallback to a variable - // as xhr.onerror cannot be accessed - window.setTimeout( function() { - if ( callback ) { - errorCallback(); - } - } ); - } - }; - } - - // Create the abort callback - callback = callback( "abort" ); - - try { - - // Do send the request (this may raise an exception) - xhr.send( options.hasContent && options.data || null ); - } catch ( e ) { - - // trac-14683: Only rethrow if this hasn't been notified as an error yet - if ( callback ) { - throw e; - } - } - }, - - abort: function() { - if ( callback ) { - callback(); - } - } - }; - } -} ); - - - - -// Prevent auto-execution of scripts when no explicit dataType was provided (See gh-2432) -jQuery.ajaxPrefilter( function( s ) { - if ( s.crossDomain ) { - s.contents.script = false; - } -} ); - -// Install script dataType -jQuery.ajaxSetup( { - accepts: { - script: "text/javascript, application/javascript, " + - "application/ecmascript, application/x-ecmascript" - }, - contents: { - script: /\b(?:java|ecma)script\b/ - }, - converters: { - "text script": function( text ) { - jQuery.globalEval( text ); - return text; - } - } -} ); - -// Handle cache's special case and crossDomain -jQuery.ajaxPrefilter( "script", function( s ) { - if ( s.cache === undefined ) { - s.cache = false; - } - if ( s.crossDomain ) { - s.type = "GET"; - } -} ); - -// Bind script tag hack transport -jQuery.ajaxTransport( "script", function( s ) { - - // This transport only deals with cross domain or forced-by-attrs requests - if ( s.crossDomain || s.scriptAttrs ) { - var script, callback; - return { - send: function( _, complete ) { - script = jQuery( "